Hepatic Transcriptome Analysis Reveals Genes, Polymorphisms, and Molecules Related to Lamb Tenderness
Simple Summary: Tenderness influences repurchase decisions of sheep meat because it is a significant factor contributing to eating satisfaction and consumer acceptance. This study analyzed the transcriptome of five high- and five low-tenderness lamb samples. The results revealed potential candidate hepatic genes and polymorphisms affecting lamb tenderness. These potential candidate genes and genetic markers could be used in lamb tenderness selection programs.

Abstract: Tenderness is a key meat quality trait that determines the public acceptance of lamb consumption, so genetic improvement toward lamb with higher tenderness is pivotal for a sustainable sheep industry. However, unravelling the genomics controlling tenderness is the first step. Therefore, this study aimed to identify the transcriptome signatures and polymorphisms related to divergent lamb tenderness using RNA deep sequencing. Since the molecules and enzymes that control muscle growth and tenderness are metabolized and synthesized in the liver, hepatic tissues of ten sheep with divergent phenotypes (five high- and five low-tenderness samples) were used for deep sequencing. Sequence analysis showed that the number of reads ranged from 21.37 to 25.37 million bases with a mean value of 22.90 million bases. In total, 328 genes were detected as differentially expressed (DEGs), including 110 up-regulated and 218 down-regulated genes. Pathway analysis showed steroid hormone biosynthesis as the dominant pathway behind lamb tenderness. Gene expression analysis identified the top highly expressed (such as TP53INP1, CYP2E1, HSD17B13, ADH1C, and LPIN1) and lowly expressed (such as ANGPTL2, IGFBP7, FABP5, OLFML3, and THOC5) candidate genes. Polymorphism and association analysis revealed that mutations in the OLFML3, ANGPTL2, and THOC5 genes could be potential candidate markers for tenderness in sheep. The genes and pathways identified in this study cause variation in tenderness and thus could serve as potential genetic markers to improve meat quality in sheep. However, further validation is needed to confirm the effect of these markers in different sheep populations so that they can be used in a selection program for lamb with high tenderness.
Introduction
Meat quality refers to a variety of meat characteristics such as compositional quality (lean to fat ratio, drip loss, pH) and palatability factors such as visual appearance, smell, liver [21,22]. A relationship between glycogen depots in liver and muscle and ultimate muscle pH values has been described, postulating that higher glycogen storages contribute to the lower ultimate pH in the muscle [27,28]. Furthermore, all dietary supplements and muscle component-associated molecules are metabolized in the liver; thus, it is pivotal to unravel the hepatic transcriptome affecting meat quality traits, including meat tenderness. However, the application of RNA-Seq to identify DEGs and polymorphisms affecting lamb tenderness is limited. Hence, the objective of this study is to decipher the hepatic transcriptome and polymorphisms associated with divergent tenderness in Indonesian sheep using the Illumina HiSeq 2500. Several candidate genes and genetic markers related to the tenderness of sheep meat were identified, which could contribute to a sustainable sheep industry by improving meat quality.
Animals and Phenotype
Tissue samples (longissimus dorsi, semitendinosus, liver tissues) and phenotypes were collected from male Garut composite sheep (GCS) with an average liveweight of 30 kg and age of 12 months. GCS sheep are 50% indigenous Garut sheep, 25% St. Croix sheep from the Virgin Islands, and 25% Moulton Charolais sheep from France [29] (Supplementary Figure S1). All the sheep (n = 140) were slaughtered at PT Pramana Pangan Utama, IPB University. The Animal Ethics Commission of IPB University approved all procedures involving animals (approval no. 117-2018 IPB). Phenotypes were measured for meat quality traits including tenderness (shear force), water holding capacity (WHC), pH, and cooking loss. Meat tenderness was measured using Warner-Bratzler shear force (WBSF) [30]. The WHC was assessed by measuring the amount of water lost (mg H2O): the percentage of weight lost from 5 g meat samples after being pressurized at 2250 g for 5 min [31]. The pH value was measured with a pH meter after the carcass had been stored for 24 h postmortem (final pH). Cooking loss was measured as the weight lost by the meat sample, relative to its initial weight, after being cooked in a water bath at a temperature of 80 °C for 1 h [30].
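As a small illustration of the weight-based calculations described above, the following sketch computes water loss and cooking loss percentages; it is a minimal example assuming the percentage definitions given in the text, and the weights shown are placeholders rather than study data.

```python
def water_loss_percent(initial_weight_g: float, pressed_weight_g: float) -> float:
    """Percentage of weight lost from a 5 g meat sample after pressing
    at 2250 g for 5 min (the water-loss measurement used for WHC)."""
    return (initial_weight_g - pressed_weight_g) / initial_weight_g * 100.0


def cooking_loss_percent(initial_weight_g: float, cooked_weight_g: float) -> float:
    """Percentage of weight lost after cooking in a water bath at 80 °C for 1 h."""
    return (initial_weight_g - cooked_weight_g) / initial_weight_g * 100.0


# Placeholder example: a 5 g sample weighing 3.6 g after pressing and 2.7 g after cooking.
print(water_loss_percent(5.0, 3.6))    # 28.0
print(cooking_loss_percent(5.0, 2.7))  # 46.0
```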
For the RNA sequencing analysis, ten GCS were selected from the pool of 140 sheep with extremely divergent tenderness phenotypes. The average shear force values for the high (HT) and low (LT) tenderness groups were 3.14 ± 0.09 and 4.69 ± 0.67, respectively (Table 1). RNA was extracted from the livers of 5 sheep with extremely high (HT) and 5 sheep with extremely low (LT) tenderness levels using the RNeasy Mini Kit (Qiagen, Hilden, Germany).
Library Construction and Sequencing
A Nanodrop 2000 Spectrophotometer was used to measure the quantity of RNA and an Agilent 2100 Bioanalyzer was used to measure the quality of RNA (Agilent Technologies, Santa Clara, CA, USA). The TruSeq RNA Library Prep Kit v2 was used to prepare libraries from RNA samples of acceptable quality and quantity with a minimum RNA Integrity Number (RIN) > 7.0 (Illumina, San Diego, CA, USA). A total of 101-bp paired-end sequencing of the 10 samples was performed using the Illumina HiSeq-2500 platform (Illumina), and sequencing reads were mapped to the sheep (Ovis aries) reference genome Oar_v4.0. The sequencing information was deposited at NCBI (Accession: PRJNA847713, ID: 847713).
Differential Gene Expression and Pathway Analysis
We performed differential expression analysis to assess the gene expression differences between the two sample conditions. To discover significant DEGs, an absolute log2(fold change) > 1.5 was used as the threshold. The R package DESeq was used to identify DEGs between the two sheep groups with a false discovery rate (FDR) of 0.05 (criteria: fold change > 1.5 and p ≤ 0.05) [32]. DESeq includes a Generalized Linear Model (GLM) function for calculating both within- and between-group deviations. DAVID [33] was used to perform functional annotation and pathway enrichment of DEGs. The DEGs were used for gene ontology (GO) enrichment [34] and pathway analyses in the Kyoto Encyclopedia of Genes and Genomes (KEGG) [35]. The GO terms and pathways with p ≤ 0.05 were considered significantly enriched, and only the genes significant in the tests (p ≤ 0.05) were chosen for further investigation.
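As a rough, hedged illustration of how the DEG thresholds above could be applied to a DESeq-style results table (the study itself used the DESeq R package; the file and column names below are assumptions):

```python
import pandas as pd

# Hypothetical DESeq-style results table: one row per gene.
results = pd.read_csv("deseq_results.csv")  # assumed columns: gene, log2FoldChange, padj

degs = results[(results["log2FoldChange"].abs() > 1.5) & (results["padj"] <= 0.05)]
up_regulated = degs[degs["log2FoldChange"] > 0]
down_regulated = degs[degs["log2FoldChange"] < 0]
print(len(degs), len(up_regulated), len(down_regulated))
```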
Network Enrichment Analysis
The network enrichment analysis was carried out with the help of the open-source online tool NetworkAnalyst [36]. The literature-curated PPI database imported from InnateDB was used to build the PPI network with human orthologs of the differentially expressed genes [37]. The standard network algorithm generated 1 larger subnetwork called the "continent" and 7 smaller subnetworks called "islands". Due to all the islands having only one seed gene with 3-9 nodes connected by 2-8 edges, they were given additional consideration. The continent was further modified for better visualization by using the tool's 'minimize network' function. The PPI was represented as nodes (circles representing regulatory genes) linked by edges (lines representing the direct molecular interactions). For detecting the network's highly interconnected genes (hub genes), two network centrality measures were used: degree (number of connections to other nodes) and betweenness (number of shortest paths passing through the node). Higher degree and betweenness nodes were regarded as potentially more important network hubs in cellular signal trafficking. Furthermore, liver-specific co-expression networks were created by incorporating the TCSBN database [38] into the NetworkAnalyst tool.
Analysis of Quantitative Real-Time PCR (qRT-PCR) Validation
Reverse transcriptase PCR was performed by transcribing extracted RNA into complementary DNA (cDNA) using a First Strand cDNA Transcriptor Synthesis kit (Thermo Scientific, Vilnius, Lithuania) based on the manufacturer's protocol. Quantification of cDNA was performed by a qRT-PCR method with an AG qTower 4-channel instrument (Analytic Jena, Jena, Germany). The online tool Primer3 (https://primer3.ut.ee/, accessed on 1 March 2021) [39] was used to design gene-specific primers for qRT-PCR (Table 2). The 96-well microtiter plate in each run contained one cDNA sample and a no-template control. Each sample was examined twice (technical replication), and the geometric mean of the Ct values was applied to profile mRNA expression. For normalization of the target genes, the geometric mean of two housekeeping genes, β-Actin and GAPDH, was used. ΔCt values were calculated by subtracting the geometric mean of the reference gene Ct values from that of the targeted gene: ∆Ct = Ct target − Ct housekeeping genes [40].
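A minimal sketch of this ΔCt normalization, with placeholder Ct values rather than study data:

```python
from statistics import geometric_mean


def delta_ct(ct_target: float, ct_actb: float, ct_gapdh: float) -> float:
    """ΔCt = Ct(target) − geometric mean of the housekeeping Ct values (β-Actin, GAPDH)."""
    return ct_target - geometric_mean([ct_actb, ct_gapdh])


# Technical duplicates of the target gene are combined with a geometric mean first.
ct_target = geometric_mean([24.1, 24.3])
print(delta_ct(ct_target, ct_actb=18.2, ct_gapdh=19.0))
```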
Analysis of Gene Variation
SNP calls were made on the mapping files generated by the TopHat algorithm for gene variation analysis using the 'samtools mpileup' command and associated algorithms [41]. For further analysis, we chose variants with a minimum Root Mean Square (RMS) mapping quality of 20 and a minimum read depth of 100. The selected variants were compared to the dbSNP database to identify previously studied polymorphisms. In order to identify differentially expressed genes with sequence polymorphisms, we cross-checked and filtered these variants based on their chromosomal positions against DEGs, retaining only those variants that mapped to DEG chromosomal positions. In this way, we could pick a few mutations that mapped to DEGs among thousands of identified potential sequence polymorphisms. Furthermore, we calculated the read/coverage depth of these polymorphisms in all samples to determine whether they segregated in only one sample group (high or low tenderness) or in both groups (high and low tenderness). Using the GeneWise software (http://www.ebi.ac.uk/Tools/psa/genewise/, accessed on 20 April 2021), the identified SNPs were classified as synonymous or non-synonymous by comparing the protein sequences with the SNP incorporated at the corresponding nucleotide position [42].
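The depth and mapping-quality filter can be illustrated with a small VCF-parsing sketch. This is a simplification (the study used samtools directly); DP and MQ are the conventional INFO keys for read depth and RMS mapping quality, and the file name is a placeholder.

```python
def passes_filters(vcf_line: str, min_mq: float = 20.0, min_depth: int = 100) -> bool:
    """Keep a variant only if RMS mapping quality (MQ) >= 20 and read depth (DP) >= 100."""
    info = vcf_line.rstrip("\n").split("\t")[7]
    fields = dict(kv.split("=", 1) for kv in info.split(";") if "=" in kv)
    return float(fields.get("MQ", 0)) >= min_mq and int(fields.get("DP", 0)) >= min_depth


with open("variants.vcf") as vcf:
    kept = [line for line in vcf if not line.startswith("#") and passes_filters(line)]
print(f"{len(kept)} variants retained")
```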
SNP Validation and Association Study
A SNP in each of three highly polymorphic DEGs, and genes known to play roles in tenderness (OLFML3, ANGPTL2, and THOC5), was chosen for validation (Table 2). Muscle (Longissimus dorsi) samples from 140 sheep were collected for DNA extraction, and the DNA was adjusted to a final concentration of 50 ng/mL. For genotyping, the PCR-RFLP (Polymerase Chain Reaction-Restriction Fragment Length Polymorphism) method was used. The PCR was carried out in a 15 µL volume with 1 µL of genomic DNA, 0.4 µL of primers, 6.1 µL of MyTaq HS Red Mix, and 7.5 µL of nuclease-free water. A 1.5% agarose gel (Fischer Scientific Ltd., Meridian, MS, USA) was used to examine the PCR product, which was then digested with the appropriate restriction enzyme. The digested PCR-RFLP products were resolved in 2% agarose gels. PROC GLM in SAS 9.2 was used to calculate the effect of genotypes on meat quality traits (SAS Institute Inc., Cary, NC, USA). One-way analysis of variance (ANOVA) followed by Duncan's test was used to compare the least square mean values of the genotypes at each locus.
The statistical model for the association analysis was: Y_i = µ + genotype_i + e_i, where: Y_i = the meat quality trait; µ = the population mean; genotype_i = the fixed effect of the i-th genotype; e_i = the residual error.
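For illustration only, this one-way genotype-effect model could be fitted as follows; the sketch uses Python's statsmodels rather than SAS PROC GLM, and the file and column names are assumptions.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical table: one row per animal, with its genotype and measured traits.
data = pd.read_csv("genotypes_and_traits.csv")  # assumed columns: genotype, shear_force

model = ols("shear_force ~ C(genotype)", data=data).fit()  # Y = mu + genotype + e
print(sm.stats.anova_lm(model, typ=1))                     # tests the fixed genotype effect
print(data.groupby("genotype")["shear_force"].mean())      # per-genotype means
```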
Phenotype of Meat Quality Traits in Sheep
The phenotypic traits of meat quality measured were: pH, shear force (for tenderness), cooking loss, and WHC. Descriptive parameters for the meat quality trait data are given in Table 1. The average pH, shear force, cooking loss, and WHC values were 5.98, 3.66, 46.46, and 28.09, respectively. Sheep with a shear force value greater than 4.6 kg/cm² were defined as low meat tenderness, and those with a value less than 4.6 kg/cm² as high meat tenderness [4].
Overview of the RNA Deep Sequencing Data
In this study, cDNA libraries were sequenced from ten liver tissues collected from phenotypically divergent (five HT and five LT) sheep using the Illumina HiSeq 2500. The sequencing produced sequence read clusters with a maximum of 100 bp. The total number of reads after quality control and filtering ranged from 20.02 to 21.90 million. The total number of reads in each group, as well as the number of reads mapped to reference sequences, are given in Table 3. In the LT group, 87.26% to 88.78% of total reads were aligned to reference sequences, whereas 83.85% to 88.80% of total reads were aligned in the case of the HT group (Table 3).
Differential Gene Expression Analysis
The raw reads of DEGs in the liver tissues of sheep with HT and LT levels were calculated using the R package DESeq. To identify DEGs in the liver with divergent (high and low) tenderness, a negative binomial distribution-based method implemented in DESeq was used. The differential expression analysis yielded 328 hepatic DEGs using the criteria adjusted p ≤ 0.05 and log2 fold change > 1.5 (Figure 1). A total of 110 and 218 genes were identified as up- and down-regulated, respectively, in the HT and LT groups (Supplementary Table S1). The log2 fold change values for DEGs ranged from 4.09 to 4.80. Heatmaps (Figure 2) depict the top 30 up- and down-regulated hepatic genes found in sheep with high and low tenderness. The top 30 up- and down-regulated genes, along with log FC and p-values, are presented in Supplementary Table S2. The differential expression analysis of these data indicated both novel transcripts and genes previously reported in other gene expression studies (Supplementary Table S1).
Functional Analysis
Cellular components, molecular functions, and biological processes were the most important GO terms discovered (Figure 3). Calcium ion binding and iron ion binding were the molecular functions that controlled the metabolism of tenderness-related molecules. The cellular processes identified were mainly related to the extracellular exosome and extracellular space. The biological mechanisms revealed were correlated to heart development and defense response to Gram-negative bacteria (Table 4). The DAVID tool identified KEGG pathways that were overrepresented among the DEGs. The dominant pathway for differences in lamb tenderness level was steroid hormone biosynthesis (Figure 4). The hepatic genes defined in these pathways with high and low tenderness levels are shown in Table 5.
The Hepatic Transcriptome Network's Regulatory Hub Genes
A PPI network with 117 seed genes and 944 nodes connected by 1138 edges was built to identify potential regulatory hub genes in the hepatic transcriptional network. The potential hub genes were identified using network centrality measures, with ACTN2, SOD1, TPM2, THOC5, PLAT, TRIM9, FKBP10, MEIS1, CACNA1C, SPRY1, and GAAR1 upregulated, and GRIP1, PFN2, NOL3, NR2F1, MARCKS, MAP2K6, E2F2, ENG, and PRMT2 down-regulated ( Figure 5A,B). In addition, a liver-specific gene co-expression network was developed to identify additional hub genes that may have been missing in the PPI network. The co-expression network revealed that the majority of the potential hub genes, including COL6A1, AEBP1, PRELP, ANGPTL2, EEEMP1, SCARF2, ENG, LOX1, and SSC5D, were downregulated, while only four hub genes (FKBP10, IGFBP7, GABBR1, and SPRY1) among the top twenty were upregulated in the liver tissue obtained from GCS ( Figure 6A,B). Surprisingly, the common hub genes in both the PPI and the co-expression network were FKBP10, GABBR1, ENG, NR2F1, and SPRY1 (Supplementary Tables S3 and S4).
Quantitative Real-Time PCR Validation of Selected DEGs (qRT-PCR)
To validate the RNA-Seq results, a total of ten genes (HSD17B13, ANGPTL2, IGFBP7, TP53INP1, ADH1C, OLFML3, THOC5, CYP2E1, LPIN1, and FABP5) were chosen and quantified using qRT-PCR. The same samples that were used for deep sequencing were used for this purpose. A comparison of qRT-PCR data for the ten selected genes revealed quantitative expression concordance with the RNA-Seq results (Figure 7). The qRT-PCR gene expression values were normalized using two housekeeping genes (GAPDH and β-Actin). Detailed GenBank accession numbers, primer sequences, and annealing temperatures for qRT-PCR used in this study are provided in Table 2.
Analysis of Gene Variation and an Association Study
In 54 DEGs with high and low tenderness, 334 single nucleotide polymorphisms (SNPs) were found (Supplementary Table S5). The selected polymorphisms identified in hepatic DEGs are listed in Supplementary Table S6. Figure 8A,B show the distribution of the number of genes with SNPs and the selected SNPs used for validation. Furthermore, three SNPs were chosen for association analysis based on gene functions related to tenderness (Figure 8B and Supplementary Table S6). The selected SNPs were found in the genes OLFML3, ANGPTL2, and THOC5. Note, the SNPs in the OLFML3 and ANGPTL2 genes are located in introns 4 and 5, respectively, whereas the SNP in the THOC5 gene is located in exon 17. The segregation and association of these SNPs in the sheep population (n = 140) used in this study were validated. The association analysis suggested that the polymorphisms in OLFML3, ANGPTL2, and THOC5 were associated (p < 0.05) with tenderness (Table 6).

Table 6. Genotypes and association studies of selected candidate meat quality markers.
Analysis of RNA Seq Data
Transcriptome profiling sheds light on the genetics underlying tenderness in sheep. Hence, this comparative RNA-Seq study involving divergent (high vs. low) tenderness in GCS was performed. The identified DEGs determine the functional complexity of tenderness and provide important information on phenotypic and functional differences in tenderness in lamb. The mapping results showed that the average number of reads was 20.74 million, with 87.27% of the reads classified as mapped reads corresponding to exon reads (Table 3). The percentage of mapped reads was higher than in the previous studies by Gunawan et al. [17] (85.73%) and Gunawan et al. [18] (85.89%) on Indonesian Javanese fat-tailed sheep. The percentage of mapped reads is an indicator of the overall sequencing accuracy and absence of contaminating DNA [43].
Differentially Expressed Gene Analysis
Among the 328 DEGs, the differences in gene expression were most clearly shown using the top 30 most highly expressed genes and the top 30 most lowly expressed genes in liver tissue with different levels of tenderness, along with log FC values and p-values (Supplementary Table S2). Potential candidate genes that were upregulated include TP53INP1, APOA5, CYP2E1, HSD17B13, ADH1C, and LPIN1. The TP53INP1 gene belongs to the p53 tumor protein family, which has been shown to be associated with skeletal muscle growth and myocytes' division and maturation in pigs [44]. The APOA5 (Apolipoprotein A5) gene is reported to have an influence on FA metabolism in Indonesian sheep [10]. The CYP2E1 gene (Cytochrome P450 2E1) plays a key role in enzyme metabolism in the liver, which affects meat flavor in pigs [45]. Note, the CYP2A6 gene was previously found to be associated with lamb flavor and odor in sheep [9]. HSD17β13 is a 17β-HSD family gene that mediates the physiological functions of reproductive hormones, and the HSD17β gene family was reported to be associated with meat quality traits in pigs [46]. It has been reported that the ADH1C (Alcohol Dehydrogenase 1C) gene is associated with vitamin A content and muscle tenderness in Korean cattle [47]. The Lipin 1 (LPIN-1) gene is a key factor regulating lipid, dietary glucose, and polyunsaturated FA metabolism [48], and thus may regulate muscle tenderness.
The top down-regulated candidate genes found to influence the metabolism of muscle tenderness-related molecules were ANGPTL2, IGFBP7, FABP5, CH25H, LOXL3, OLFML3, THOC5, and AEBP1. The ANGPTL2 gene is a member of the angiopoietin-like protein family that was reported to be associated with fat deposition in cattle [49]. Both the AEBP1 and IGFBP7 genes were reported to influence muscle development in pigs [50]. IGFBP7 (Insulin-like growth factor binding-protein 7) binds to IGF and regulates IGF-signaling pathways. Overexpression of the IGFBP7 gene is reported to inhibit lipid accumulation in tissues [51]; thus, its downregulation may positively affect lipid accumulation and muscle tenderness because the IMF content or marbling is positively correlated with tenderness. The FABP5 gene is a member of the fatty acid-binding protein (FABP) family that controls lipid metabolism [52] and thus may affect muscle tenderness. Furthermore, CH25H (cholesterol 25-monooxygenase) inhibits cholesterol biosynthetic enzymes and has a defense function [53], but its association with muscle tenderness is yet to be deciphered. LOXL3 (Lysyl oxidase-like 3) is a member of the lysyl oxidase family, which plays roles in extracellular matrix maturation and is involved in bone development [54]. The OLFML3 (Olfactomedin-like 3) gene has been reported to be differentially expressed during muscle development in pigs [55]. The THOC5 gene is well known for playing a key role in lipid and FA metabolism in cattle [56]. The positive effect of lipids on meat tenderness might be due to the presence of lipids in the perimysium, which separates muscle fiber bundles [56]. Note, some of the important DEGs have been studied in cattle and pigs [49,50,55,56], but very little or no work has been performed in sheep with regard to meat quality traits, including tenderness.
Biological Function Analysis for DEGs
This study enriched GO categories across biological processes, cellular components, and molecular functions (Figure 3 and Table 4). The enriched biological processes identified were mostly related to heart development, defense response to bacteria, positive regulation of vasculogenesis, negative regulation of muscle cell apoptotic processes, and negative regulation of the oxidative stress-induced intrinsic apoptotic signaling pathway. Oxidative and apoptotic processes are involved in the metabolism of molecules that affect meat tenderness. The majority of the oxidative metabolism-related proteins are found to play a role in stress regulation too [57]. Muscles are usually exposed to a variety of reactive oxygen species resulting from oxidative stress; thus, increased antioxidant activities may regulate apoptosis and influence meat tenderness [58]. Several studies reported that biological pathways related to meat tenderness usually include proteolysis, muscular structure and contraction, oxidative stress, heat shock proteins, and apoptosis [59,60].
Cellular components identified consist of extracellular matrix, extracellular space, proteinaceous extracellular matrix, extracellular exosomes, and sarcolemma (Figure 3). The extracellular matrix provides biomechanical strength to the intramuscular connective tissues and regulates the structural properties of myocytes. Decorin and laminin are two extracellular matrix molecules that modulate the activity of myostatin, which regulates skeletal muscle mass. Furthermore, decorin has been shown to activate the insulin-like growth factor-I receptor (IGF-IR) and myogenic cell differentiation, and thus functions as a signaling molecule for myogenic cells. The structural integrity of the intramuscular connective tissues increases with animal growth. The collagen fibrils within the endomysium get connected, and the collagen fibers in the perimysium become increasingly thick and their wavy pattern becomes more regular during muscle development. These modifications increase the mechanical strength of the intramuscular connective tissues, which contributes to meat toughening [61]. The molecular functions controlling the tenderness-related molecules' metabolism were related to oxidoreductase activity acting on paired donors with incorporation or reduction of molecular oxygen (reduced flavin or flavoprotein), oxidoreductase activity acting on the CH-NH2 group of donors with oxygen as acceptor, calcium ion binding, iron ion binding, copper ion binding, glycosaminoglycan binding, and scavenger receptor activity (Figure 3). Oxidoreductase activity, which controls muscle mass and strength, and calcium ion binding, which regulates muscle contractile properties along with hormones, were previously identified in a meat quality trait analysis in Duroc pigs [62]. Pathway analysis showed that steroid hormone biosynthesis, the PPAR signaling pathway, metabolism of xenobiotics by cytochrome P450, chemical carcinogenesis, the cGMP-PKG signaling pathway, and drug metabolism-cytochrome P450 were the dominant pathways for differences in tenderness in lamb (Figure 4). Steroid hormones play critical roles during myogenesis by influencing cell differentiation [63]. The PPAR signaling pathway, which is involved in lipid metabolism, has long been recognized as an important biological pathway controlling meat quality in animals. The primary transcription regulator in PPAR signaling, peroxisome proliferator-activated receptor gamma, has been reported to be a key factor in controlling the transcription of many genes involved in adipogenesis pathways [64]. PPAR signaling pathway genes influence muscle tenderness by causing phenotypic differences in marbling in livestock [65].
The Hepatic Transcriptome Network's Regulatory Hub Genes
Muscle tenderness traits, like many other quantitative traits, are most likely regulated by multiple genes that interact with one another via an interconnected network. As a result, network-based approaches are thought to be more sensitive in identifying regulatory gene molecules for global transcriptome alterations [66]. Herein, PPI network and co-expression analyses were performed to scrutinize the regulatory hepatic genes in GCS with divergent tenderness. The hepatic transcriptome network's regulatory hub gene analysis identified several key genes, including ACTN2, SOD1, TPM2, THOC5, PLAT, TRIM9, FKBP10, MEIS1, CACNA1C, SPRY1, and GAAR1, which were upregulated in the liver tissue (Figure 5A,B). The ACTN2 gene is involved in muscle fiber composition and muscle contraction [67].
TPM2 is involved in muscle contraction, muscle development, and lipid accumulation [68]. The THOC5 gene is reported to influence lipid and FA metabolism, as well as affecting meat tenderness [56]. The potential down-regulated hub genes identified include GRIP1, PFN2, NOL3, NR2F1, MARCKS, MAP2K6, E2F2, ENG, and PRMT2 (Figure 5A,B). The GRIP1 gene was previously reported to be associated with marbling [69]. SNPs in the MAP2K6 gene are associated with marbling score, back fat thickness, and carcass weight in Hanwoo cattle [70]. MAP2K6 belongs to the protein kinase family and regulates the mitogen-activated protein kinase pathway that controls muscle growth [71]. The E2F2 gene plays an important role in skeletal muscle development by activating transcription factor-2 [72]. The advent of transcriptional network analyses has shown that functionally related genes are usually co-expressed in various tissues and organisms. Constructing a co-expression network from transcriptome datasets has become a widely used alternative to conventional analysis methods for searching highly relevant genes of complex biological function. The co-expression network identified several downregulated hub genes including COL6A1, AEBP1, PRELP, ANGPTL2, EEEMP1, SCARF2, ENG, LOX1, and SSC5D, whereas only four upregulated hub genes, namely FKBP10, IGFBP7, GABBR1, and SPRY1, were identified in the liver tissue obtained from GCS with the divergent meat tenderness trait (Figure 6A,B). The AEBP1 and IGFBP7 genes influence muscle development in pigs [50]. The ANGPTL2 gene is reported to be associated with the fat deposition process in cattle [49]. The IGFBP7 gene is considered a candidate gene associated with meat quality traits according to the results of function and pathway analysis in crossbred sheep [73]. The GABBR1 gene is also reported to be a candidate gene for fat deposition in the sheep tail [74].
Association between Candidate Markers and Phenotypes
In this study, selected polymorphisms in the OLFML3, ANGPTL2, and THOC5 genes were revealed to be associated with meat quality traits ( Table 6). The polymorphism in the OLFML3 (C > T, g.90317673) gene was significantly (p < 0.05) associated with tenderness and cooking loss. The percentage of cooking loss is proportional to the shear force value. The higher the percentage of cooking loss, the higher the shear force value [75]. The polymorphism in the ANGPTL2 (G > A, g.8930776) and THOC5 (C > T, g.68234589) genes were significantly (p < 0.05) associated with tenderness. Meat tenderness is affected by the biochemical properties of muscle fibers and the connective tissue matrix, as well as by age, primarily due to cytoskeletal protein degradation. Tenderness is a key trait influencing repurchase decisions because it is a major factor that contributes to eating satisfaction and consumer acceptance [76]. The OLFML3 gene was previously reported to be influencing meat tenderness in cattle [56]. The THOC5 gene was also reported to affect meat tenderness [56]. A recent study has identified that the OLFML3 gene is associated with meat quality traits including tenderness [77], however, due to fewer association studies in sheep, the scope of comparing the results is limited.
Conclusions
This transcriptome analysis using RNA deep sequencing revealed potential candidate hepatic molecules, genes, and polymorphisms affecting lamb tenderness. This study suggests several candidate genes such as TP53INP1, CYP2E1, HSD17B13, ADH1C, LPIN1, ANGPTL2, IGFBP7, FABP5, OLFML3, and THOC5 that might control the metabolism of molecules involved in lamb tenderness. Furthermore, several SNPs were detected in the hepatic DEGs and associations of selected markers with tenderness were validated, such as polymorphisms in the OLFML3, ANGPTL2, and THOC5 genes that could be potential markers for meat tenderness in sheep. However, further validation is needed to confirm the effect of these genetic markers in other sheep populations, so that they can be considered in selection for sheep with higher meat tenderness.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ani13040674/s1, Figure S1: Garut composite sheep (GCS) mating scheme. M = Moulton Charollais, G = Garut, H = St. Croix; Table S1: Differentially expressed hepatic genes in Garut composite sheep with divergent meat tenderness; Table S2: Top 30 up and down regulated hepatic genes obtained from sheep with high and low meat tenderness; Table S3: List of seed genes of the hepatic PPI network of hepatic transcriptomes from Garut composite sheep with their centrality measures; Table S4: List of seed genes for co-expression network; Table S5: Total SNPs detected by RNA-Seq in hepatic genes in Garut composite sheep divergent meat tenderness; Table S6: Polymorphisms detected in the highly polymorphic hepatic DEGs; Table S7: Genotype, allele frequencies and the chi-squared test of selected SNPs validated using RFLP.
"year": 2023,
"sha1": "3e406b4cdd05a7393b197de7986ee62718627c51",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "b33f15169010dc76d36c9c2878582da2da25fcf9",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Biology"
],
"extfieldsofstudy": []
} |
Coverage-Guided Tensor Compiler Fuzzing with Joint IR-Pass Mutation
In the past decade, Deep Learning (DL) systems have been widely deployed in various domains to facilitate our daily life. Meanwhile, it is extremely challenging to ensure the correctness of DL systems (e.g., due to their intrinsic nondeterminism), and bugs in DL systems can cause serious consequences and may even threaten human lives. In the literature, researchers have explored various techniques to test, analyze, and verify DL models, since their quality directly affects the corresponding system behaviors. Recently, researchers have also proposed novel techniques for testing the underlying operator-level DL libraries (such as TensorFlow and PyTorch), which provide general binary implementations for each high-level DL operator for running various DL models on many platforms. However, there is still limited work targeting the reliability of the emerging tensor compilers, which aim to directly compile high-level tensor computation graphs into high-performance binaries for better efficiency, portability, and scalability. In this paper, we target the important problem of tensor compiler testing, and have proposed Tzer, a practical fuzzing technique for the widely used TVM tensor compiler. Tzer focuses on mutating the low-level Intermediate Representation (IR) for TVM due to the limited mutation space for the high-level IR. More specifically, Tzer leverages both general-purpose and tensor-compiler-specific mutators guided by coverage feedback for evolutionary IR mutation; furthermore, Tzer also performs pass mutation in tandem with IR mutation for more effective fuzzing. Our results show that Tzer substantially outperforms existing fuzzing techniques on tensor compiler testing, with 75% higher coverage and 50% more valuable tests than the 2nd-best technique. To date, Tzer has detected 49 previously unknown bugs for TVM, with 37 bugs confirmed and 25 bugs fixed (PR merged).
INTRODUCTION
With the recent advance of deep learning (DL), DL systems have been pervasively deployed in various application domains to facilitate our daily life, including natural language processing [Devlin et al. 2018;Vaswani et al. 2017;Young et al. 2018], healthcare [Esteva et al. 2019;Miotto et al. 2018], activity recognition [Cao et al. 2019;Guo et al. 2021;Kreiss et al. 2019], and autonomous driving [Grigorescu et al. 2020;Rao and Frtunikj 2018]. Meanwhile, it is extremely challenging to ensure the correctness of DL systems (e.g., due to their intrinsic nondeterminism), and any bug in such decision-making systems can potentially bring serious consequences or accidents (e.g., the life-threatening autonomous-driving failures [Garcia et al. 2020]).
To date, a large body of prior work has been dedicated to testing, analyzing, and verifying DL models since their quality directly affects the behaviors of DL systems. For example, various techniques have been designed to generate adversarial or edge-case model inputs for testing DL models, including DeepXplore [Pei et al. 2017], DeepTest [Tian et al. 2018], DeepRoad, TensorFuzz [Odena et al. 2019], and DeepBillboard [Zhou et al. 2020]. In recent years, in addition to the algorithmic/model aspect, researchers also realized the importance of ensuring the correctness of the underlying DL infrastructure, and have proposed novel techniques [Pham et al. 2019; Wang et al. 2020] specifically targeting operator-level DL libraries, such as TensorFlow [Abadi et al. 2016] and PyTorch [Paszke et al. 2019]. Meanwhile, computation-intensive DL models are being developed everywhere nowadays; early operator-level libraries, which usually only provide a fixed binary for a limited number of platforms, are hardly generalizable and scalable. Therefore, DL engineers and researchers have been building an ultimate solution, tensor compilers [Chen et al. 2018; Lattner et al. 2020; Ragan-Kelley et al. 2013; Rotem et al. 2018] (also known as DL compilers), to essentially tackle the challenges in performance, portability, and flexibility. However, to our best knowledge, there is limited work specifically targeting the reliability of the emerging tensor compilers.
Ensuring the correctness and reliability of tensor compilers is essential for the rise of compilation-based DL infrastructure. Nonetheless, the complicated software stack of tensor compilers makes it non-trivial to write hand-crafted unit tests. For example, in TVM [Chen et al. 2018] (one of the biggest and most widely used tensor compiler projects), there are over 117k lines of Python code specifically targeting unit testing! Designing automated testing techniques for tensor compilers is important but also quite challenging. First, the compiler stack is deep, meaning that an input model needs to be compiled through various phases (including numerous parsing, lowering, and optimization passes) to produce the final target code. Second, the compiler stack is wide, meaning that there are innumerable possibilities for composing a single intermediate representation (IR) file or an optimization sequence, let alone their combinations if taking various targets and execution backends into account.
Although some existing fuzzing techniques can potentially be adopted for testing tensor compilers, they are not able to handle the complex compiler infrastructure well. For example, general-purpose binary fuzzers [Serebryany 2016; Zalewski 2018] can hardly generate syntactically- and semantically-valid inputs, wasting the majority of time fuzzing the lexical parsing components. Prior operator-level DL-library testing techniques [Wang et al. 2020] systematically mutate the input model seeds to generate diverse model architectures, and can potentially be generalized to most DL infrastructures; however, they are not tailored for tensor compiler testing as they do not consider triggering different optimizations and are also too coarse-grained to generate light-weight yet valuable inputs (as demonstrated by our experimental results in § 5.1). To our best knowledge, the only existing work specifically targeting tensor compiler fuzzing, TVMFuzz [Pankratz 2020], employs a generation-based approach to automatically generate arbitrary low-level IRs for fuzzing TVM. However, it suffers from the common limitations of generation-based fuzzing techniques [Holler et al. 2012; Yang et al. 2011], e.g., it is challenging to simulate realistic programs to cover deep code paths and the fuzzing process lacks valid guidance; also, it fails to consider the rich search space of possible optimization pass sequences for tensor compilers. As a result, it could only find very shallow front-end bugs and its coverage growth converges at an early stage (as also confirmed by our experimental results).
In this paper, we focus on practical tensor compiler fuzzing and have made the following design choices. First, we target low-level IR mutation due to the coarse-grained and limited mutation space for high-level IR mutation [Wang et al. 2020]. Second, we propose the first coverage-guided fuzzing approach for testing tensor compilers, as coverage feedback has been demonstrated to be powerful for exploring deep code paths efficiently in general [Li et al. 2018]. Following traditional coverage-guided fuzzers [Serebryany 2016; Zalewski 2018], in each iteration, we randomly choose an IR file from a seed pool for mutation and add the newly mutated IR file into the pool only when it triggers new coverage. Meanwhile, instead of relying on the bit-level mutators widely adopted in traditional fuzzers, we develop a set of general-purpose and tensor-compiler-specific mutators for more targeted and effective IR mutation. Third, since a large number of optimization passes can form a pass sequence and potentially be applied to the same IR file to trigger different compiler behaviors, we further build a novel coverage-guided fuzzing strategy to perform joint mutations of both IR and optimization passes for more exhaustive tensor compiler testing. Although our design is general for different tensor compilers, in this paper, we mainly focus on the TVM compiler and have implemented a practical TVM fuzzing technique named Tzer. To evaluate the effectiveness of Tzer, we have performed an extensive study to compare Tzer against LibFuzzer [Serebryany 2016] (a state-of-the-art general-purpose fuzzer), LEMON [Wang et al. 2020] (a state-of-the-art high-level IR fuzzer for DL libraries), and TVMFuzz [Pankratz 2020] (the only existing low-level IR fuzzer for TVM). Furthermore, we have rigorously evaluated the importance and necessity of all the design choices of Tzer. In summary, the primary contributions of this work go as follows:
• Novelty: This paper presents the first coverage-guided fuzzing technique specifically targeting tensor compilers. More specifically, we have designed various general-purpose and tensor-compiler-specific mutators as well as the joint mutation of both IR and optimization passes for effective tensor compiler fuzzing.

The computation of deep learning models can be logically described in the dataflow model [Wongsuphasawat et al. 2017], which is commonly called the computation graph [Jia et al. 2019]. A computation graph consists of a number of operators (e.g., convolution, max pooling, and many other tensor operations), each of which transforms one or multiple input tensors (i.e., multi-dimensional arrays) into a series of output tensors. Given the computation graph description, there are mainly two approaches for existing DL software to compute it. Previously, for fast software delivery, ML engineers implemented various operator-level DL libraries, such as TensorFlow [Abadi et al. 2016] and PyTorch [Paszke et al. 2019], whose operators are implemented with fixed and hand-optimized kernel functions. However, hand-crafted optimization is time-consuming in the long run and a fixed binary cannot meet the ultimate performance requirements for all hardware vendors. Therefore, to fundamentally resolve those challenges, recently DL infrastructures have been focusing on developing tensor compilers [Chen et al. 2018; Google 2016; Intel 2017; Jin et al. 2020; Rotem et al. 2018; Tillet et al. 2019; Zhao et al.
2021] to automatically generate best-in-class target code for different vendors or even architectures. Figure 1 illustrates the compilation flow of TVM [Chen et al. 2018], one of the most widely used and advanced tensor compilers (other tensor compilers including XLA [Google 2016] and Glow [Rotem et al. 2018] also follow such a logical flow). First, tensor compilers transform 3rd-party model files into their own graph representation (i.e., Relay IR in TVM). Furthermore, a sequence of optimizations (known as passes or transformations) is applied to both the high-level graph IRs and the low-level Tensor IRs (TIR). Within a pass sequence, each pass iteratively transforms an IR to a new IR to either optimize the computation or propagate valuable information for upcoming optimizations. Once the low-level IR is ultimately optimized, the code generation component will produce corresponding binaries for different targets (i.e., NVIDIA GPU, X86 CPU, etc.). Existing work on DL-library testing [Pham et al. 2019; Wang et al. 2020] mainly focuses on generation/mutation at the graph level. Contrastingly, for tensor compilers, we target the low-level representation since there are many limitations if the input files are simply constructed via such graph-level abstraction. First, low-level IRs are closer to code generation and optimization, which can guide the fuzzers to find deeper compiler bugs. Second, there is a limited search space for graph-level construction since deep learning operators are too coarse-grained and the approach suffers from various shape constraints. Furthermore, a graph-level representation can be lowered to concrete low-level IR but not vice versa. In this work, we have empirically compared our Tzer technique, which operates on the low-level IRs, with the state-of-the-art DL-library fuzzer LEMON [Wang et al. 2020], which performs graph-level model mutation. The evaluation results also confirm that LEMON generates 7.7× fewer valuable tests (i.e., tests that are compilable and can trigger new compiler coverage) compared with Tzer.
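To make this flow concrete, the sketch below (assuming a working TVM installation; the API shown follows common TVM tutorials and may differ slightly across versions) builds a small Relay graph from TVM's bundled test models, applies an example graph-level pass, and lowers it to LLVM target code.

```python
import tvm
from tvm import relay
from tvm.relay import testing

# High-level graph IR (Relay) for a tiny MLP from TVM's built-in test models.
mod, params = testing.mlp.get_workload(batch_size=1)

with tvm.transform.PassContext(opt_level=3):
    mod = relay.transform.FoldConstant()(mod)              # one example graph-level pass
    lib = relay.build(mod, target="llvm", params=params)   # lowering to TIR + code generation
print(lib)
```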
Fuzzing
Fuzzing [Böhme et al. 2017; Fioraldi et al. 2020; Lemieux and Sen 2018; Serebryany 2016; Zalewski 2018], known as an advanced automatic testing technique, has been widely employed to efficiently detect software bugs in the wild. The key features of fuzzing are its extreme 1) efficiency: no heavyweight analysis is required, and 2) simplicity: fuzzers are mostly general-purpose and can be employed as easily as compiling a program and then executing it.
The big idea of fuzzing is to generate randomized inputs and explore unexpected behaviors (e.g., crashes) of the program under test. One of the most effective fuzzing techniques is coverage-guided fuzzing (CGF), which is a mutation-based approach that leverages coverage feedback to focus on test inputs (known as seeds) that have achieved new coverage, instead of doing so in a randomized fashion.
The idea of CGF has led to many existing general-purpose binary fuzzers both in industry and in research [Böhme et al. 2017; Fioraldi et al. 2020; Lemieux and Sen 2018; Serebryany 2016; Zalewski 2018]. AFL [Zalewski 2018] is one of the pioneers among CGF tools and has found numerous vulnerabilities in diverse applications. The development of AFL has inspired many further enhancements and extensions. AFLFast [Böhme et al. 2017] further leverages the Markov chain to model CGF as a systematic exploration of its state space and develops a set of power schedules and search strategies to focus on low-frequency paths. FairFuzz [Lemieux and Sen 2018], which outperforms AFLFast in its evaluation, prioritizes seeds that hit rare branches, instead of rare paths, and develops a mutation mask algorithm to bias mutation towards producing inputs that hit such rare branches. AFL++ [Fioraldi et al. 2020] further incorporates state-of-the-art fuzzing research ideas into one useful tool, which is poised to become a new baseline tool for future research in fuzzing. LibFuzzer [Serebryany 2016] has been widely recognized as one of the most representative coverage-guided fuzzers, building an in-process fuzzing loop and a powerful evolutionary fuzzing engine through its integration with the LLVM infrastructure [Lattner 2002]. It has been under active development and keeps adopting the most recent and influential research ideas [Böhme et al. 2020].
In addition to general-purpose fuzzers, CGF has also inspired many domain-specific fuzzers. Die [Park et al. 2020], an aspect-preserving evolutionary fuzzing technique for JavaScript, has been shown to outperform state-of-the-art JavaScript fuzzers in terms of both bug discovery and valid test input generation. Squirrel [Zhong et al. 2020] is a database management system (DBMS) fuzzer that takes language validity into consideration during fuzzing, and it has found numerous bugs in DBMSs including SQLite, MySQL, PostgreSQL, and MariaDB. FuzzChick [Lampropoulos et al. 2019], an extension of QuickChick [Dénès et al. 2014], incorporates coverage guidance to perform property-based testing for Coq programs, and has been shown to perform far better than the vanilla QuickChick with the help of coverage guidance.
The existing general-purpose fuzzers cannot be simply applied here for tensor compilers like TVM because tensor compilers require structural IRs in a specific form as input, which does not have a direct correspondence to the binary stream. Furthermore, many traditional compiler fuzzing techniques [Le et al. 2014; Yang et al. 2011; Zhang et al. 2017], though also theoretically general and applicable, are insufficient for tensor compiler fuzzing as they are not tailored for such purposes. For instance, the well-known EMI [Le et al. 2014] is general for any compilers supporting control flows. However, it is not suitable for DL computation as most existing DL models are static graphs (i.e., no control flows), mainly except for some RNN models. In addition, in TVM the de facto compilation mode (i.e., the "graph" mode) requires constant input tensor shapes so that any control flows related to shape sizes can be statically inferred to allow maximum optimization (e.g., unrolling loops in an optimal way), making it unsuitable for applying EMI. To date, there are very few domain-specific fuzzers for tensor compilers, with TVMFuzz [Pankratz 2020] being the only existing fuzzer specifically targeting TVM to our knowledge. Therefore, this paper aims to build a practical fuzzing technique specifically targeting modern tensor compilers.
APPROACH
In this section, we present the detailed design of Tzer, a practical tensor compiler fuzzer via coverage-guided joint IR-Pass mutation. Figure 2 illustrates the overview of Tzer. As shown in the figure, like traditional coverage-guided fuzzing work [Li et al. 2018], Tzer maintains a seed pool to store interesting seeds (i.e., the test inputs that can trigger new coverage) for further mutations. Different from prior work that mainly maintains the input files within the seed pool, Tzer maintains two dimensions of information in the seed pool (i.e., both IR files and their corresponding optimization pass sequences) for effective joint IR-pass mutation.
During the fuzzing process, for each pair of IR and pass sequence from the seed pool, Tzer will apply the corresponding mutation strategies to generate a new input pair in each iteration. For example, Tzer applies both general-purpose and tensor-compiler-specific mutators on IR files to generate new IR files, and applies pass mutation to randomly generate a new pass sequence. Then, for each newly generated IR-pass pair, Tzer leverages the tensor compiler under test (i.e., TVM in this work) to compile the IR with the corresponding pass sequence and collect the compiler coverage information. Any input pairs that violate the test oracles are reported, while any input pairs that can help trigger new compiler coverage are further fed back to the seed pool for generating more valuable inputs. In this way, the generated inputs can cover more and more code of the tensor compiler, and can detect more and more potential bugs. The fuzzing loop terminates once the allowed time/resource budget runs out.
In the remainder of this section, we will first present the detailed algorithm design for our fuzzing loop ( § 3.1). Then, we will present the details for our general-purpose mutators ( § 3.2) and tensor-compiler-specific mutators ( § 3.3). Finally, we will briefly discuss the test oracle information used in this work ( § 3.4).
Fuzzing Loop
Algorithm 1 presents the detailed design of our main Tzer fuzzing loop. The algorithm only takes three inputs: the initial seed pool (S₀), the time budget (T), and the parameter (N) controlling the interleaving of IR and pass mutations. Different from all prior work on evolutionary coverage-guided fuzzing [Serebryany 2016; Zalewski 2018], the seed pool of Tzer maintains two dimensions of information for effective tensor-compiler fuzzing, i.e., both the IR files and their corresponding pass sequences. Thus, we can denote each input for Tzer as a pair ⟨f, p⟩, where f represents an IR file while p represents the corresponding pass sequence for the IR. In the algorithm, we further extend ⟨f, p⟩ into ⟨f, p, c⟩ to additionally consider the interleaving control counter c for the joint IR-pass mutation. With the interleaving control c, for each seed input IR file f, Tzer can 1) keep mutating f with p if such mutations were rewarding, and 2) also occasionally (controlled by N) seek a better p′ to pair with f when the current ⟨f, p⟩ gets stuck in local minima.
The main algorithm of Tzer is similar to traditional evolutionary fuzzers, except for the additional code logic added to handle the additional pass mutation (highlighted in colored boxes). Basically, Tzer first initializes the seed pool with pairs of IR files and pass sequences, as well as setting c = 0 for all pairs (Line 2). For example, in this work, the initial seed pool consists of all possible model architectures in the TVM model zoo [Community 2020] with randomly generated pass sequences. The coverage achieved by the initial seed inputs is also collected to evaluate newly generated inputs (Line 3). Then, Tzer will go through the main loop for generating new inputs (Lines 4-26).
In each iteration, Tzer will randomly fetch an input tuple from the seed pool. If the current ⟨f, p⟩ pair has not triggered any new coverage during the past N consecutive IR mutations (Line 6), Tzer will try to mutate the pass sequence p into another random sequence p′ in the hope that p′ will bring this input pair to a better state for further mutations (Line 7). The coverage and error information will be recorded when compiling the input pair ⟨f, p′⟩ with the compiler under test (Line 8). In case of any error, the input pair will be reported to the developers. If p′ does help trigger new coverage, the total coverage information will be updated (Line 12); the input pair ⟨f, p, c⟩ in the seed pool will also be updated to ⟨f, p′, 0⟩ since it is more promising to go with p′ in future runs on mutating f (Line 13). If p′ does not help trigger new coverage, Tzer simply clears the interleaving control counter c to 0 to allow more IR file mutations with the current p (Line 15). This indicates that we do not perform consecutive pass mutations for any seed input regardless of the coverage outcome. The reason is that mutating pass sequences is not as rewarding as mutating IR files in general, and we only need occasional pass mutation (controlled via N) to guide the evolutionary process to more promising states and avoid local minima.
On the other hand, if the fetched input tuple has not failed to trigger new coverage for N consecutive IR mutations, Tzer will go ahead and further mutate the IR file following a process very similar to traditional fuzzers. Tzer first mutates the IR f into f′ by selecting one mutator from the mutator pool (including 3 general-purpose and 3 domain-specific mutators), and then collects the result information for compiling the pair ⟨f′, p⟩ (Lines 17 and 18). In case of any error, the input pair will be reported. If f′ does help trigger new coverage, the new IR file f′ with the current p will be inserted into the seed pool for future runs (Line 22). The total coverage information will also be updated (Line 23). Different from prior fuzzers, Tzer also needs to update the original seed pair to ⟨f, p, 0⟩ since it helped trigger new coverage (Line 24); also, if f′ did not help achieve new coverage, the original seed pair will be updated to ⟨f, p, c + 1⟩ to record the current attempt that failed to trigger new coverage (Line 26). Theoretically, some specific ⟨f, p⟩ pairs might fail due to 1) lack of pass dependency, or 2) pass/IR incompatibility, resulting in a waste of time compiling invalid ⟨f, p⟩ pairs. Executing too many invalid compilations would make the fuzzing process less efficient. The evolutionary joint IR-pass mutation (Algorithm 1) easily avoids such frequent invalid compilation by design. As is shown in Line 22, only valid pairs that achieve new coverage will be added into the seed pool S, whereas the invalid and ineffective ones are ignored, keeping the seed pool filled with compilable samples during the evolutionary fuzzing process.

Algorithm 1: Tzer Fuzzing Loop — Function Fuzz(initial seed pool S₀, time budget T, pass mutation frequency control N).

Fig. 3. The process of IR mutation. Node types are differentiated using background colors, while expression types are differentiated using border colors. Each label around the node denotes a node type or an expression type. Each label on the node denotes a constructor or a primitive. The VarInjection constructor in Table 1 is illustrated by switching the background color of the Var node.
In this way, after being launched, the algorithm can then continuously generate valuable IR and pass sequence pairs for triggering tensor-compiler bugs.
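The following is a minimal Python sketch of this joint IR-pass loop, written purely for illustration. The helpers mutate_ir, random_pass_seq, and compile_and_run (and the result object's coverage/error fields) are hypothetical stand-ins for Tzer's actual mutators, executor, and memcov interface, not its real API.

import random, time

def fuzz(seeds, time_budget, N):
    """Sketch of Algorithm 1: evolutionary joint IR-pass mutation (illustrative only)."""
    # Each pool entry is (ir, pass_seq, n), where n counts consecutive IR mutations
    # of this entry that failed to trigger new coverage.
    pool = [(ir, pass_seq, 0) for ir, pass_seq in seeds]
    total_cov = set()
    for ir, pass_seq, _ in pool:                      # coverage of the initial seeds
        total_cov |= compile_and_run(ir, pass_seq).coverage

    start = time.time()
    while time.time() - start < time_budget:
        idx = random.randrange(len(pool))
        ir, pass_seq, n = pool[idx]
        if n >= N:                                    # IR mutation got stuck: try a new pass sequence
            new_pass = random_pass_seq()
            res = compile_and_run(ir, new_pass)       # res.error would be reported to developers
            if res.coverage - total_cov:
                total_cov |= res.coverage
                pool[idx] = (ir, new_pass, 0)         # keep the rewarding pass sequence
            else:
                pool[idx] = (ir, pass_seq, 0)         # fall back to more IR mutations
        else:                                         # usual coverage-guided IR mutation
            new_ir = mutate_ir(ir)                    # one of the 6 mutators
            res = compile_and_run(new_ir, pass_seq)
            if res.coverage - total_cov:
                total_cov |= res.coverage
                pool.append((new_ir, pass_seq, 0))    # only valid inputs with new coverage enter the pool
                pool[idx] = (ir, pass_seq, 0)
            else:
                pool[idx] = (ir, pass_seq, n + 1)
    return pool

The key design choice reflected here is that a seed's pass sequence is only re-randomized after N unrewarding IR mutations, so pass mutation stays an occasional, targeted escape mechanism rather than a constant source of noise.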
General-Purpose Mutation
Following prior work on fuzzing programming languages [Holler et al. 2012; Lampropoulos et al. 2019; Zhong et al. 2020], we design a general-purpose IR mutation approach. In addition, program analysis techniques are integrated into mutation to ensure syntactic correctness and to mitigate semantic errors. This is because, to find high-quality bugs in the code generation and optimization phases of a compiler, the produced IRs should be able to pass standard pre-condition checks (e.g., syntax checks and semantics checks). In the remainder of this section, we first introduce our definition of the low-level Tensor IR (TIR) of TVM and then elaborate on the mutation details.
We first discuss the abstract syntax tree (AST) of TIR since it is the entry point of TVM's compilation. Figure 3a depicts a simplified TIR AST sample. The AST contains different types of nodes, with the root node representing the input IR to the compiler. As shown in Figure 3a, the type of the root node is PrimFunc, which stands for the basic function type in TIR. The While node is of type Stmt while the EQ and GT nodes are of type PrimExpr. The corpus of all node types can be defined as a set NodeTypes (e.g., containing PrimFunc, Stmt, and PrimExpr). We assume these types are disjoint (e.g., no subtype relation), but our implementation leverages the concept of constructor to simulate the original subtype relations. For each type, there could be multiple constructors, which are functions/operators from one or more node types to one return node type, generally having a signature of the form (NodeType_1, ..., NodeType_k) → NodeType. Some types also have primitives, which are values that cannot be broken down into subparts (i.e., leaf nodes). Table 1 shows a detailed list of common TIR AST node types, constructors, and primitives. Note that VarInjection, one constructor of PrimExpr, is added by us to switch a variable from type Var to PrimExpr without changing its internal value. This is required because Var is a subtype of PrimExpr in the implementation of TIR by TVM, which means each Var is implicitly a PrimExpr, but in our definition we assume no subtype relation. By using injection constructors, this can be easily expressed.
Each AST node can be recursively defined as either a primitive (leaf node) or an application of some constructor to other nodes (i.e., a branch node). For simplicity, in our implementation of the mutation approaches, some trivial branch nodes are treated as leaf nodes, including Var, IntImm, FloatImm, etc. As an example, the root node of the input IR in Figure 3a can be formally defined as PrimFunc([a, b], While(And(EQ(a, 5), GT(b, 3)), ...)) : PrimFunc. The first step of our mutation approach is to randomly pick one of the AST nodes of the given IR and regard it as a hole, which can then be filled up to produce an IR mutant. We call an IR with a hole at some position a context. For instance, in Figure 3b, we pick out the And node as a hole (denoted by □) so that the corresponding context is PrimFunc([a, b], While(□, ...)). Based on the context, we can derive the constraints to be satisfied (e.g., the accessible variables of the hole) when filling the hole so that the filled IR can be correct. Formally, the constraints are a tuple of necessary information that helps determine the requirements when constructing a sub-expression in the hole. Specifically, for TIR, we consider the following information: • Desired AST node type (e.g., PrimExpr, Stmt, Var).
• Accessible variables under the current scope.
• Declared buffers. TIR uses the notion of "buffer" to store and load data. When we access a buffer, we should ensure it is already declared.
• A boolean indicating whether the variables need to be bound. TIR only allows a commented expression to have free variables.
As an example, for the context shown above, the hole represents a condition check for the While node. Hence, in order to fill the hole, at least a boolean expression is needed. Also, any variable used should be bound to some binding occurrence (e.g., parameters a and b). Therefore, the constraints should be (desired type: boolean PrimExpr; accessible variables: {a, b}; declared buffers; variables must be bound). Based on the derived constraints and the picked node, we perform a series of mutations using the corresponding mutator on the node following the constraints. Formally, each mutator has the signature (AnyNodeType, Constraints) → AnyNodeType, where AnyNodeType is a disjoint union of all possible NodeType ∈ NodeTypes, i.e.,
AnyNodeType = ⨆_{NodeType ∈ NodeTypes} NodeType.
(Table 1. Example node types, constructors, and primitives of the AST of TIR for Figure 3. Some constructors could be overloaded or have the same name as their node types. The asterisk '*' denotes a list type (e.g., Var* in the two PrimFunc constructors means a list of Var, which serves as the parameters of a function). We also put several auxiliary labels in front of some parameter types to help understand the meaning of the parameter (e.g., "name : String" in the Var constructor signature).)
We use a disjoint union here because our mutators are designed to operate on nodes of any node type, and different node types should not overlap with each other in our definition. Basically, we designed the following three general-purpose mutators, namely Insertion, Deletion, and Replacement:
Insertion. Regardless of the input node, Tzer simply returns a new node generated from scratch that satisfies the given constraints. This is done by Tzer's generator, which is inspired by prior generators in the random testing community [Claessen et al. 2015; Lampropoulos et al. 2017]. The functionality of the generator is to produce IR ingredients/snippets based on the constraints and a size parameter which indicates the node size of the generated sub-IR, as is described in Figure 3c. In the figure, Tzer generates a new boolean Let node of type PrimExpr, and ensures that all the variable references have their corresponding binding occurrences (e.g., in the node LT(b, c), b is introduced by the parameter list, and c is introduced by Let).
Deletion. Tzer checks the child nodes of the input node, keeps those satisfying the constraints, and randomly returns one of them. For example, in Figure 3c, we perform deletion on the And node by returning its right-hand side GT(b, 3), the 'greater than' node, which is a boolean expression with all variable references bound.
Replacement. For a primitive node, Tzer simply modifies its value, or returns another primitive based on the constraints. For a node constructed by some constructor, in the simplest case, Tzer randomly selects a constructor to substitute the existing one, with the restriction that after the substitution the node should still satisfy the given constraints. More generally, Tzer randomly selects a constructor and tries to use the child nodes of the input node as components to fill the parameter list of the selected constructor; if some parameters cannot be filled this way, Tzer randomly generates them using the generator. This strategy is inspired by the mutate constructor of FuzzChick [Lampropoulos et al. 2019] for testing Coq programs, except that Tzer considers different constraints. Figure 3c gives the simplest form of replacement, which just replaces the And constructor with the Or constructor.
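To make the constraint-aware replacement idea more concrete, the following Python sketch models constraints as a small dataclass and swaps in a compatible constructor, reusing existing children where possible. All names here (node.children, constructor.return_type, the generate helper, etc.) are hypothetical simplifications and do not reflect Tzer's actual TIR visitor-based implementation.

import random
from dataclasses import dataclass

@dataclass
class Constraints:
    node_type: str            # desired AST node type, e.g. "PrimExpr" or "Stmt"
    variables: set            # variables accessible in the current scope
    buffers: set              # buffers already declared
    must_bind: bool           # whether free variables are forbidden in the hole

def replace(node, constraints, constructors, generate):
    """Replacement mutator sketch: swap the node's constructor, reusing children when possible."""
    candidates = [c for c in constructors if c.return_type == constraints.node_type]
    ctor = random.choice(candidates)          # assumes at least one compatible constructor exists
    children = list(node.children)
    args = []
    for param_type in ctor.param_types:
        reused = next((ch for ch in children if ch.node_type == param_type), None)
        if reused is not None:
            children.remove(reused)           # reuse an existing child as the argument
            args.append(reused)
        else:
            # Otherwise synthesize a fresh sub-IR of bounded size that respects the constraints.
            args.append(generate(param_type, constraints, size=random.randint(1, 4)))
    return ctor(*args)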
Domain-Specific Mutation
Tensor compilers focus on optimizing domain-specific programs, e.g., programs with dense loops in particular. To optimize those hot spot program structures, existing tensor compilers [Chen et al. 2018;Google 2016;Intel 2017;Jin et al. 2020] leverage the concept of pass to optimize the given IR or insert annotations containing valuable information for further optimization. To trigger the complex logic behind those optimization passes, general-purpose mutators, though versatile to handle different types of expressions, are still inefficient and not tailored to the specific domain that tensor compilers are built for.
For domain-specific compiler testing, in addition to the general-purpose mutators, we argue that it is also important to navigate the mutation towards the core components that the compilers specifically target (e.g., loop-oriented optimization, memory allocation, memory latency hiding, and parallelization). For example, deep and wide nested loops can be optimized with tiling [Park et al. 2003], multi-threading [Smith et al. 2014], and vectorization [Bjørstad et al. 1992] by a series of related passes (e.g., UnrollLoop and LoopPartition). Those passes have complex optimization rules for different domain-specific code structures (e.g., big loops, large buffer allocation, and thread scheduling) that general-purpose mutators can hardly target. Hence, according to the hot spot program patterns targeted by existing tensor compilers [Chen et al. 2018; Ragan-Kelley et al. 2013; Tillet et al. 2019; Zhao et al. 2021], Tzer specifically designs 3 types of mutators: 1) a loop-nesting mutator for creating multifarious dense loop structures; 2) a memory-operation mutator for various memory allocation/store/load patterns at the index level; and 3) a thread-binding mutator for diversifying the parallel computation flows, to generate interesting code patterns that tensor compilers particularly care about.
Loop Nesting. Tensor computation usually consists of a large number of nested loops. Even for the simplest element-wise expression, e.g., C=C+1 with broadcasting, the loop structure of a common image tensor (whose dimensions are [height, width, channels]) will consist of 3 nested loops. To mimic such dense loops, we introduce the loop-nesting mutator to transform IRs to have different loop structures.
First, Tzer randomly picks an AST node as the innermost loop body. Tzer then selects one out of the five TVM loop types (serial, vectorize, unroll, etc.), each of which represents a different kind of loop. As shown in Figure 4, 2 nested loops of type unrolled are inserted after mutation. Furthermore, according to the loop variables under the current context, a random expression will be used to form the indices ([i*16+j]). Notably, TVM also annotates loop attributes (e.g., unroll_max_steps) for concrete optimization in code generation, and Tzer further tunes those integer attributes to trigger different optimization paths. Therefore, Tzer also mutates those attributes when creating/replacing the target loops.
Memory Operations. Apart from multifarious loop structures, another dimension to increase the complexity of tensor computation is to introduce various memory operations, including memory store/load and allocation. Tzer's memory-operation mutator mimics complex memory patterns by inserting memory operations into existing IRs. Given a randomly selected node, Tzer first analyzes the accessible memory buffers (represented with pointers) under the current scope. Next, Tzer randomly constructs a memory operation (i.e., a sub-expression) and inserts it into the target AST node. As shown in Figure 4 (step ❷), Tzer inserts a sub-expression (i.e., ... = buf[i+j*16]) into the original IR so that a new memory access is created and the dataflow related to buf is changed.
Thread Binding. One thing that differentiates tensor compilers from traditional compilers is that tensor compilers leverage multiple threads (either CPU threads or threads of parallel hardware like NVIDIA GPUs) to automatically parallelize the program. The thread scheduling, however, could have many different settings, as operations could be executed by different thread groups at different stages (manipulated by attributes, e.g., thread numbers and thread tags).
To explore the impact of different thread scheduling patterns, Tzer creates various thread-binding patterns and leverages them to mutate the multi-thread planning of given IRs. Precisely, as shown in Figure 4 (step ❸), Tzer first selects an AST node (i.e., the 2 nested loops wrapped by the scope of launch_thread) and then initializes its threading parameters, e.g., the virtual thread number (virtual_thread in TVM). In this way, virtual_thread is initialized to 2, which means this node will be executed by 2 virtual threads.
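As a concrete flavor of these domain-specific mutations, the sketch below shows how a loop-nesting style transformation could wrap an existing TIR statement in randomly typed nested loops using TVM's Python tir constructors. The exact For/ForKind signatures may differ across TVM versions, and the wrapping policy shown is an assumption for illustration, not Tzer's actual mutator.

import random
import tvm
from tvm import tir

def wrap_in_loops(body: tir.Stmt, depth: int = 2) -> tir.Stmt:
    """Wrap a statement in `depth` nested loops with randomly chosen loop kinds."""
    kinds = [tir.ForKind.SERIAL, tir.ForKind.PARALLEL,
             tir.ForKind.VECTORIZED, tir.ForKind.UNROLLED]
    stmt = body
    for level in range(depth):
        loop_var = tir.Var(f"i{level}", "int32")
        extent = tir.const(random.choice([4, 8, 16]), "int32")
        # Each wrapping step picks one of TVM's loop kinds to diversify optimization paths;
        # a fuller mutator would also rewrite buffer indices (e.g., i*16+j) using the new loop vars
        # and tune loop attributes such as unroll_max_steps.
        stmt = tir.For(loop_var, tir.const(0, "int32"), extent,
                       random.choice(kinds), stmt)
    return stmt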
Test Oracle
Test oracles are important for detecting potential bugs with fuzzing. In this paper, we consider the following ways to resolve the test oracle problem for finding bugs in tensor compilers:
Result Inconsistency. Tzer holds the hypothesis that an IR, whether it is optimized or not, should keep its output result consistent. For each generated IR, Tzer will compile it twice: it first compiles the IR with the lowest optimization and then compiles it with the given optimization passes. Tzer then compares the output results by feeding the two compiled binaries the same input data. We identify an inconsistency bug if the absolute or relative error exceeds the expected tolerance.
Performance Degradation. The second hypothesis made by Tzer is that after a series of optimization passes, the performance should not be degraded. Therefore, Tzer instruments the running time of optimized and non-optimized executions. If the optimized code runs even slower than the non-optimized one, we consider it a potential performance bug. Notably, to avoid false positives, we set a clear performance margin in the differential testing setting. The non-optimized version is compiled with the lowest optimization level (opt_level=0) while the optimized one is compiled with the highest optimization level (opt_level=4). Note that a higher optimization level allows better and more aggressive optimization than lower levels given the same pass sequences. For example, level-3 graph fusion (i.e., FuseOps) allows more operator fusion patterns than the lower-level one.
Crash and Unexpected Exception. Like in most Python applications, throwing an exception is the default way errors surface. Hence, Python/C++ projects (e.g., most tensor compilers) need to convert C++ exceptions into Python ones. For example, in TVM's C++ codebase, any unexpected behavior (e.g., an assertion failure) results in a C++ exception, where the top-level foreign function interface (FFI) handler catches such C++ exceptions and packs the error message using the type TVMError for the Python front-end. Therefore, though errors might occur, the symptom should be uncaught exceptions rather than crashes. Since the compilation and execution phases of Tzer are done by forking a sub-process, Tzer observes crashes by checking the return codes of sub-processes. Tzer also monitors exceptions thrown during compilation as potential bugs. To avoid false alarms, Tzer makes its best effort to construct legal IRs and pass sequences.
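A condensed Python sketch of the result-inconsistency and performance oracles is shown below, assuming a TVM Relay-style workflow (relay.build under PassContext plus the graph executor). API details and input names differ across TVM versions and models, the single-run timing is a simplification, and the report helper is a placeholder; this is not Tzer's exact harness.

import time
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

def report(msg):
    # Placeholder for Tzer's reporter component.
    print("[potential bug]", msg)

def check_oracles(mod, params, data, input_name="data",
                  atol=1e-3, rtol=1e-3, perf_margin=1.5):
    """Compile the same Relay module at opt_level 0 and 4, then compare results and runtime."""
    outputs, runtimes = [], []
    for level in (0, 4):
        with tvm.transform.PassContext(opt_level=level):
            lib = relay.build(mod, target="llvm", params=params)
        module = graph_executor.GraphModule(lib["default"](tvm.cpu()))
        module.set_input(input_name, tvm.nd.array(data))
        start = time.time()
        module.run()
        runtimes.append(time.time() - start)
        outputs.append(module.get_output(0).numpy())
    # Oracle 1: optimized and non-optimized results should agree within tolerance.
    if not np.allclose(outputs[0], outputs[1], atol=atol, rtol=rtol):
        report("result inconsistency between opt_level=0 and opt_level=4")
    # Oracle 2: the optimized build should not be clearly slower than the unoptimized one.
    if runtimes[1] > perf_margin * runtimes[0]:
        report("potential performance degradation under optimization")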
Research Questions
In this paper, we study the following research questions to thoroughly evaluate Tzer:
• RQ1: How effective is Tzer compared with state-of-the-art fuzzing techniques on testing the TVM tensor compiler?
• RQ2: Do all components of Tzer contribute positively to its final effectiveness?
• RQ3: How do different parameter settings and experimental setups affect Tzer's effectiveness?
• RQ4: How effective is Tzer in detecting previously unknown bugs?
The design of our experiments largely follows the suggestions made by Klees et al. [2018]. The main differences are caused by the fuzzing targets, i.e., Klees et al. [2018] mainly studied binary fuzzing while we are working on tensor compiler fuzzing. For example, that paper suggested a 24-hour timeout, while we evaluate Tzer with a default 4-hour timeout since existing techniques tend to saturate within 4 hours. Meanwhile, we also evaluate Tzer with a 24-hour budget in RQ3.
Implementation
Tzer has been mainly implemented in 8.7k lines of Python code and ∼150 lines of C++ code for coverage extension, with the following main components:
Mutators. We implemented all 3 general-purpose mutators and 3 domain-specific mutators by directly operating on TIR in-memory objects (i.e., tir.PrimFunc) for fast mutation. More specifically, the mutation procedure is implemented by extending the visitor pattern of TIR's recursive post-order traversal interface. In addition, the utility generator used by replacement and insertion is capable of constructing various sub-expressions based on 89 TIR operator APIs. When inserting/replacing sub-expressions into an existing TIR, we ensure syntactic/semantic correctness by maintaining IR constraints during the visiting process (e.g., preventing the use of variables that are undeclared or out of scope). We further utilize casting nodes when generating intrinsic function calls. Although casting is theoretically not necessary due to our constraint-based approach, TVM provides more than 30 intrinsics whose detailed function signatures may vary and are not documented (e.g., tir.cos returns float whereas tir.clz returns int). To save manual effort, we simply regard those intrinsics as opaque and cast them to satisfy the constraints.
Executor. Once Tzer generates a TIR file and pass sequence pair, they are sent to a sub-process for compilation and execution. The sub-process mechanism provides process-level isolation so that the fuzzing loop continues even if the TIR file and pass sequence make the sub-process crash.
Coverage Collector. We implemented memcov, our in-memory coverage instrumentation tool, by extending LLVM's Coverage Sanitizer (i.e., injecting a customized function at the entry of each CFG edge in the target program). Once a program is compiled along with memcov, we maintain a bit vector whose size is exactly the number of CFG edges of the instrumented program (i.e., TVM). When entering one edge, its corresponding position in the bit vector is set to True. As we implemented Tzer's core components in Python, we also provide a Python interface to get the coverage state at any point by invoking C++ functions through ctypes [Foundation 2021] (a Python-C++ FFI tool).
Reporter. Once a test violates our test oracle, the reporter records the necessary contextual data for reproducing and debugging the failure.
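Relating to the coverage collector above, the Python side might query the instrumented library roughly as follows via ctypes. The exported symbol names (mcov_now, mcov_total, mcov_reset) are hypothetical placeholders for memcov's real interface, and the library path is an assumption.

import ctypes

class MemCov:
    """Sketch of querying in-memory edge coverage from an instrumented libtvm via ctypes."""
    def __init__(self, lib_path="libtvm.so"):
        self._lib = ctypes.CDLL(lib_path)
        self._lib.mcov_now.restype = ctypes.c_uint64       # hypothetical: edges hit so far
        self._lib.mcov_total.restype = ctypes.c_uint64     # hypothetical: total instrumented edges

    def now(self) -> int:
        return self._lib.mcov_now()

    def total(self) -> int:
        return self._lib.mcov_total()

    def reset(self) -> None:
        self._lib.mcov_reset()                             # hypothetical: clear the bit vector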
Consistent with Algorithm 1, the Tzer implementation takes three inputs, i.e., S_0, T, and N. For the initial seed pool S_0, by default Tzer uses 629 TIR functions converted from all possible official models from TVM's model zoo (tvm.relay.testing); for the time budget T, by default Tzer sets it to 4 hours; for the IR-pass mutation control N, by default Tzer sets it to 5. We use this default setting for Tzer unless explicitly specified; the detailed impacts of different parameter settings on Tzer are presented in RQ3.
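In terms of the illustrative fuzz() sketch given earlier for Algorithm 1, this default configuration corresponds roughly to the call below; the seed-loading helper name is hypothetical.

# Illustrative defaults matching the text: 629 TIR seeds, a 4-hour budget, and N = 5.
seeds = load_tir_seeds_from_model_zoo()   # hypothetical loader for tvm.relay.testing models
pool = fuzz(seeds, time_budget=4 * 3600, N=5)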
The main techniques behind Tzer are general to other tensor and even traditional compilers which model low-level IRs and optimization passes. To implement our approaches for a new compiler, one needs to implement language mutators following rules described in § 3.2 and § 3.3, as well as figuring out corresponding optimization passes. The syntactic and semantic correctness of mutated IRs and passes should also be maintained. After that, the main algorithm and skeleton of Tzer shall directly apply.
Compared Work
To faithfully evaluate the effectiveness of Tzer, we compare Tzer with both state-of-the-art general-purpose fuzzers and domain-specific fuzzers that can be applied/adapted for TVM fuzzing. More specifically, we include the following representative techniques in our evaluation:
• TVMFuzz [Pankratz 2020]: This is the only existing fuzzer specifically targeting TVM to our knowledge. It follows a pure generation-based approach, which randomly generates TIR expressions by crafting valid expression ASTs of TIR. The generation approach is based on a user-defined probability table for different TIR nodes, while validity is achieved by casting the input expressions to the parameter types of the operator.
• LibFuzzer [Serebryany 2016]: This is one of the state-of-the-art bit-level general-purpose binary fuzzers. It was adopted as the first fuzzer supported by the well-known Google OSS-Fuzz project [Serebryany 2017], which has found thousands of security vulnerabilities and stability bugs; furthermore, it is also the officially used fuzzer for many popular projects including Chrome [Blog 2016] and glibc [Wiki 2016]. In this work, for a fair comparison with Tzer, we also run LibFuzzer with the TVM official model files (exported in JSON) as seeds for fuzzing TVM.
• LEMON [Wang et al. 2020]: This is a state-of-the-art graph-level deep learning model fuzzer, which generates Keras models via graph-level mutation; for our evaluation, its generated models are converted to TIR functions to fuzz TVM.
Metrics
We use the following metrics to evaluate the performance of Tzer and the compared techniques:
Code Coverage. Code coverage is one of the most widely used metrics for evaluating software testing techniques [Gopinath et al. 2014]. The reason is that it is impossible for a testing technique to detect bugs in a code portion without actually executing it. Surprisingly, although existing work on testing deep learning libraries [Pham et al. 2019; Wang et al. 2020] claimed to cover more library code, it failed to present detailed code coverage information.
In this work, we instrument the entire TVM code base by extending LLVM's Coverage Sanitizer and collect detailed code coverage information at the edge level for the studied techniques to thoroughly evaluate their test effectiveness. Note that since we are comparing techniques for fuzzing the TVM compilation process, to make the comparison fair, we omit the coverage brought by other irrelevant modules at the initialization phase (e.g., constructing TIR functions by converting input models).
Number of Valuable Tests. Following prior work on fuzzing [Park et al. 2020], for each compared technique, we also present the number of generated valuable tests, i.e., the tests that are not only valid (i.e., compilable) but also contribute new coverage during the fuzzing process. This metric is essential since the number of syntactically/semantically valid tests with new coverage largely indicates the number of unique system behaviors/paths covered/tested. This metric also complements code coverage, because techniques that mostly generate invalid inputs can still achieve high coverage of error-handling code, which is clearly not what we want.
Number of Detected Bugs. Following almost all prior work on software testing and fuzzing [Li et al. 2018; Manès et al. 2019], we further present the number of previously unknown bugs detected by all the studied techniques, since bug detection is the ultimate goal of such techniques. In this work, we distinguish different bugs based on how they are fundamentally fixed. For instance, we found that 21 TIR operator functions (such as tir.op.clz(None)) crash when given NULL inputs on a specific TVM version, but we only count this as 1 bug since all the crashes can be fixed by changing only one C++ macro statement.
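The "valuable test" metric follows directly from the per-test coverage sets; a minimal illustration of how it could be computed (independent of Tzer's actual tooling) is:

def count_valuable_tests(results):
    """A test is valuable if it compiled successfully and covered at least one new edge."""
    seen = set()
    valuable = 0
    for compiled_ok, edges in results:    # results: iterable of (compiled_ok, set-of-edge-ids) pairs
        if compiled_ok and (edges - seen):
            valuable += 1
            seen |= edges
    return valuable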
Experimental Procedure
For a fair comparison, we collect the coverage of all compared techniques with the default 4-hour time budget using the same in-memory coverage collector that we implemented based on LLVM's Coverage Sanitizer. Note that for TVMFuzz and other baselines requiring no coverage feedback, we first run them on a non-instrumented TVM binary for 4 hours to prevent unnecessary overhead introduced by coverage tracing. Then, we collect the generated TIR files and passes (if any) from them, and compile them on an instrumented TVM binary for offline coverage analysis. Notably, for LEMON, we collect the Keras [Google 2015] models generated in 4 hours and convert them to TIR functions. We then run the TIR functions on instrumented TVM to mimic the effectiveness of LEMON's graph-level construction for fuzzing TVM. Of course, for the studied techniques requiring coverage feedback, we directly record the coverage within one run on instrumented TVM. We conducted experiments on: 1) a GPU test-bed with an Intel i9-9900X CPU (10 physical cores), a GeForce RTX 2080 Ti GPU, and 128GB RAM, running 64-bit Ubuntu 18.04 as the operating system; and 2) a CPU test-bed: a virtual cloud server (Alibaba Cloud ecs.c6e instance) with 4 CPU cores and 8GB RAM, running 64-bit Ubuntu 20.04. Since one of the baselines, LEMON, requires a GPU environment, we ran RQ1 (comparison with existing work) on the GPU test-bed and all other RQs on the CPU test-bed. To ensure performance fairness, we made the system environment exclusive to the benchmarks so that the system average load is always around 1 during the process. For instrumentation, we compiled TVM v0.8-dev (9b034d7) with LLVM-12 and leveraged the Coverage Sanitizer to trace edge coverage. TVM is compiled under optimization level O2 and other configurations are set to the default values. Since TVM contains as many as 17 targets, 4 executors, and many other irrelevant utilities (e.g., debuggers and profilers), in our evaluation we focused on the LLVM-X86 target and the graph executor as they are widely adopted in TVM's tutorials and in practice.
RQ1: Comparison with Existing Work
Figure 5 presents the coverage trends for both Tzer and the compared existing work within the default 4-hour budget. To be specific, the x axis presents the time costs and the y axis shows the basic block coverage achieved. More powerful techniques are expected to achieve higher coverage at the same timestamp. As the figure shows, Tzer outperforms the other compared techniques from the very beginning and eventually achieves 75% higher coverage than the 2nd-best baseline (i.e., TVMFuzz). Notably, Tzer keeps a visible coverage increase even at the late stage of the 4-hour budget while other techniques tend to converge very quickly. Another interesting observation is that Tzer with the same seeds as LEMON even achieves slightly higher coverage than the default Tzer, demonstrating the robustness of Tzer. Table 2 further presents the number of valuable tests (i.e., the tests that are both compilable and able to trigger new coverage) generated by all the compared techniques within 4 hours. Regarding the comparison of graph-level and low-level IR mutations, Tzer is able to generate 7.7x more valuable tests than the state-of-the-art graph-level mutator LEMON.
Specifically, LEMON only generates 63 valuable tests when its models are lowered to TIR functions (one model can be lowered to multiple TIR functions); if we instead consider valuable tests at its original model level, the number is merely 20 out of all the 2.6k models generated by LEMON (i.e., 0.7%). We can also observe that LibFuzzer can hardly generate valid tests since it is a bit-level fuzzer that is unaware of the grammar and semantics behind the inputs. Lastly, among the low-level IR fuzzers, Tzer is still able to outperform TVMFuzz by 50% in terms of valuable tests. The main reason is that TVMFuzz follows a pure generation-based approach (which lacks coverage guidance and makes it challenging to simulate realistic IRs) and does not consider the mutual effect of IR and pass combinations.
RQ2: Ablation Study of Tzer
In this RQ, we further study the effectiveness of Tzer's individual components: (1) RQ2.1: Is coverage feedback helpful for tensor compiler fuzzing?
(2) RQ2.2: Does domain-specific mutation improve the effectiveness over general-purpose mutation alone? (3) RQ2.3: Are pass mutations necessary for tensor compiler fuzzing? (4) RQ2.4: Can our evolutionary joint IR-pass mutation (described in § 3.1) outperform a baseline joint IR-pass mutation that mutates both IR and pass sequences simultaneously? To answer the above questions, we first build a simplistic variant of Tzer that only applies general-purpose mutation (i.e., without coverage feedback, domain-specific mutation, or joint IR-pass mutation). Then, we incrementally add more components to the simplistic variant in the order of coverage feedback, domain-specific mutation, random joint IR-pass mutation, and evolutionary joint IR-pass mutation. Curves (1) to (5) in Figure 6 represent the coverage trends after adding each component progressively. From curves (1) and (2), we can see that coverage feedback has positive effects on tensor compiler fuzzing. Curves (2) and (3) confirm the effectiveness of domain-specific IR mutation in addition to general-purpose IR mutation. RQ2.3 can be answered by comparing curve (3) against curves (4) or (5), as the added pass sequence mutation helps trigger more interesting behaviors. Lastly, comparing curves (4) and (5), it can be shown that our evolutionary joint IR-pass mutation is superior to the random joint IR-pass mutation, which performs coverage-guided fuzzing on IR files and supplies a randomly mutated pass sequence to each generated IR file. Hence, we can conclude that all the main components of Tzer contribute to tensor compiler fuzzing.
RQ3: Parameter Sensitivity
Sensitivity to Seeds (S_0). The first sub-figure in Figure 7 shows how Tzer performs with and without the default initial seed pool. Surprisingly, the non-seed version has comparable (and even slightly better at some timestamps) effectiveness to the default Tzer with 629 TIR seeds in terms of the coverage trend. This is because, though in each iteration Tzer with seeds could generate higher-quality tests (the yellow curve is higher than the blue one in the 2nd sub-figure of Figure 7), the non-seed version runs 24% faster than the version with seeds on average (as shown in the 3rd sub-figure). The rationale behind this is that if initial seeds are not given, Tzer has to start IR mutation from an empty TIR function (i.e., PrimFunc([]) {0}), so that the mutated variant IR files are similarly simple. Hence, the overall compilation time of simple IRs is smaller than that of the complex ones derived from real models.
Sensitivity to the Pass-Mutation Control (N). To study the impact of N, we conducted the experiment using different values from 1 to 10 for N. Figure 8 presents the final 4-hour coverage of the different settings, while Figure 9 presents the corresponding detailed coverage trends. From the figures, we can see that N = 4 demonstrates the best effectiveness. In addition, N = 1 performs the worst in terms of the peak coverage and overall trend. This is because the coverage is mainly contributed by testing different IRs, and the coverage growth slows down if we frequently "freeze" the newly found IRs and mutate the pass sequences instead. We can also observe that the coverage does not keep growing if we keep increasing N (i.e., decreasing the probability of pass mutation). The rationale behind this is that, though pass mutation contributes less than IR mutation in the early stage, it is still important to mutate the pass sequence of an "old" IR that is not very likely to derive new interesting IRs anymore with its current pass sequence. In conclusion, it is important to distribute the frequency of pass mutation reasonably.
Sensitivity to Fuzzing Time (T). Figure 10 shows the overall coverage trend achieved by the default Tzer across 24 hours. While the existing techniques already saturate within 4 hours (shown in RQ1), Tzer is able to keep the coverage growing for the entire 24-hour period. Specifically, the first 4-hour window contributes the most coverage, i.e., 91.6%, while the later five 4-hour windows still contribute 2.1%, 2.8%, 1.9%, 1.1%, and 0.5% coverage respectively, demonstrating the effectiveness of Tzer.
In terms of the total code coverage of TVM, at the source code level, Tzer can at best achieve 36.9% line coverage and 28% branch coverage with 4 CPU hours by only tracing the source files used in normal compilation. At the LLVM bitcode level, there are 482k CFG edges in total for our target, and Tzer achieves about 6% coverage within 4 CPU hours. This is because LLVM coverage sanitizer takes code bloating into account (C++ headers, templates and inlined functions are repeatedly considered) and thus can present underestimated coverage rates. Also, please note that modest overall coverage rates are quite common for fuzz testing of complicated software systems. For example, existing state-of-the-art Linux kernel fuzzers implement coverage collection with LLVM as well. Although they do not suffer from template code bloating in C++ as Linux is mostly implemented in C, the fuzzers can only achieve 0.8∼10.5% coverage after 50-hour fuzzing by fully utilizing a 32-core high-end CPU [Kim et al. 2020].
RQ4: Bug Detection Effectiveness
To date, Tzer has found 49 previously unknown unique bugs. Table 3 shows detailed information about the 37 bugs that have been confirmed by TVM's developers, 25 of which have already been fixed and merged into the main branch of TVM. Tzer generates tests through pass mutation, IR mutation, and their combination, so it is important to understand the necessity and effectiveness of each part. Table 4 further presents the overall statistics for the bugs and bug types (categorized based on bug root causes) found by the different studied techniques. In terms of confirmed bugs, we can find that compared with only mutating IRs (i.e., Column "Tzer-IR"), modelling IR/Pass jointly (i.e., Column "Tzer-Full") helps detect 2.17x more bugs and 1.6x more bug types. Existing compiler fuzzers, not limited to those for tensor compilers, only consider the compiler under test as a black box ingesting input source language texts and ignore the mutual effect of the IRs and pass sequences internally applied together. Tzer demonstrates for the first time that it can be beneficial to perform evolutionary joint IR-pass mutation for better and deeper bug detection.
From Table 4, we can also observe that the bugs detected by Tzer can hardly be detected by the other compared techniques; e.g., Tzer detects 6.16x more confirmed bugs than the 2nd-best technique, TVMFuzz. This is mainly because Tzer has a more complete modelling of both IRs and pass sequences, as well as a better fuzzing efficiency to quickly harness the large, well-modelled search space (with coverage guidance). In addition, according to Figure 5, Tzer is able to consistently find uncovered CFG edges while other techniques converge at a very early stage, which explains why existing techniques fail to discover more potential bugs.
Bug Root Causes and Case Study
To demonstrate the versatility of Tzer, we study all 10 root causes of the confirmed bugs detected by Tzer as shown in Table 3, and discuss representative bugs for each category: API-I: API Inconsistency (§ 5.5.5). API-M: API Misuse (§ 5.5.4). AE: Arithmetic Error (§ 5.5.9). DL: Driver Lifetime (§ 5.5.7). FFI: Foreign Function Interface (§ 5.5.2). IMA: Invalid Memory Access (§ 5.5.1). N/A: Unknown Failure (§ 5.5.10). OOM: Out Of Memory (§ 5.5.8). Pass-Module Immutability (§ 5.5.3). Type Error (§ 5.5.6).
5.5.1 Invalid Memory Access. In addition to out-of-bound accesses to containers, a crash occurs if a NULL pointer is dereferenced. According to the TVM design, objects can be nullable (an optional type containing a NULL state) or non-nullable. In TVM, an object accesses its data members or member functions via the -> operator in C++, which assumes that any object using operator -> is not a NULL object. However, even for nullable objects, Tzer found that over 44 functions (categorized as 3 unique bugs) do not check whether a receiving nullable object is NULL or not, resulting in immediate crashes in case of NULL objects.
5.5.2 Python-C++ FFI Handling. Like most other deep learning software, TVM and most other tensor compilers provide a Python interface, i.e., a Foreign Function Interface (FFI), to bind Python functions and objects to C++ functions and objects through the ctypes standard library [Foundation 2021] and Cython [Behnel et al. 2010]. The motivation is that most deep learning practitioners are familiar with Python instead of C++. However, Python requires objects to support numerous built-in functions. For example, Tzer found that the StringImm object in TVM failed to provide a __hash__ implementation and threw an unexpected exception when put into a map container, as the short reproduction below illustrates.
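The snippet below is a minimal reproduction of the kind of FFI issue reported: placing a TIR StringImm into a Python dict requires a working __hash__ implementation. On a fixed TVM version this runs without error; on the affected version the dict insertion raised an unexpected exception.

import tvm
from tvm import tir

s = tir.StringImm("hello")
cache = {s: 1}      # requires StringImm.__hash__; this is where the reported failure occurred
print(cache[s])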
5.5.3 Pass-Module Immutability. TVM's passes mark the input IR module as a const object (i.e., const IRModule), meaning that member functions that mutate data members cannot be called on such objects. However, Tzer found a pass, i.e., ToBasicBlockNormalForm, violating this contract by letting the input const IR object call non-const methods through pointers (in the C++ codebase), resulting in inconsistency issues in the Python front-end. We fixed this bug by forcing a copy at the beginning of the transformation. A simplified bug fix is shown in Listing 2.
5.5.4 API Misuse.
Tzer also surprisingly detected that sometimes O4 optimization performs even worse than O2 (the default optimization). This is actually because we followed TVM's official tutorial when building Tzer, while the tutorial misused the API and failed to invoke the desired optimization. In TVM's Python API, the optimization level can be specified within a scope called PassContext (Line 1 in Listing 3). In Listing 3, the old tutorial code calls .evaluate() outside the PassContext scope. The evaluate() function, nevertheless, is where the optimizations are applied. Therefore, when calling evaluate() out of the O4 scope, the default optimization (O2) is applied, so that when comparing with another O2-optimized binary (both are then equally optimized), it is possible to see one being slower than the other due to measurement uncertainty. A simplified excerpt of the fix in Listing 3:
- tvm_out = executor.evaluate()(tvm.nd.array(data.astype(dtype)), **params)
+ tvm_out = executor(tvm.nd.array(data.astype(dtype)))
5.5.5 API Inconsistency. An API inconsistency happens when a program does not act as the API specifies. For example, when running programs on heterogeneous devices (e.g., a program that requires both GPU and CPU), TVM splits the functions into either the host side or the device side.
There is a parameter controlling the calling convention (i.e., calling_conv) for heterogeneous compilation, which is set to kDefault by default. kDefault generally means that both the host and device targets are CPUs (e.g., LLVM as the target). However, a pass called DecorateDeviceScope violates the calling convention by implicitly changing kDefault into kDeviceKernelLaunch, which is built for non-CPU device targets (i.e., DecorateDeviceScope is not supposed to change the calling convention). Such an inconsistency leads to a crash at runtime.
5.5.6 Type Error. Tzer found an issue regarding TVM's constant folding in integer conversion. For example, the expression assert tir.const(1) == tir.const(True) would throw an unexpected exception, whereas we expected it to evaluate to True. The root cause is that the conversion between signed/unsigned integers (int64 and boolean) is not well handled. Theoretically, since the range of the boolean type is a subset of int64's, we can convert the boolean value to an int64 value. We fundamentally fixed the issue by refining TVM's type conversion for signed and unsigned integers.
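For reference, the reported expression can be reproduced verbatim in a couple of lines; on a fixed TVM version the assertion should simply pass, whereas the affected version raised during constant folding instead of evaluating to True.

import tvm
from tvm import tir

# The exact expression quoted above: comparing an integer constant with a boolean constant.
assert tir.const(1) == tir.const(True)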
5.5.7 Driver Lifetime Error. Tzer found that, when enabling CuDNN [Chetlur et al. 2014] as the target backend, TVM crashes after being stuck for a while when the program exits. This is because TVM gave the CuDNN device handler a whole-process lifetime by marking it with thread_local (a specifier in C++). Thus, according to the RAII rule [Stroustrup 2017] of C++, the destructor that releases the handler is called during program exit. However, the CuDNN library context might have already exited by the time such release handlers are called, causing a segmentation fault after a long suspension. We further proposed 2 fixes for this problem: (1) register the handler release function at exit time using atexit, and make sure that the destroyer of the library context is called after it; (2) simply remove the handler release code and let it leak, since there is no need to recycle the handler when the program is about to exit. The community finally accepted proposal (2) since proposal (1) is more advanced and complex, increasing the maintenance cost.
5.5.8 Out-of-Memory. Tzer found an interesting out-of-memory (OOM) bug when using the virtual machine (VM) as TVM's executor. The cause is that the previous VM memory allocator never releases occupied memory in the memory pool and employs no memory defragmentation strategy. It only re-uses memory blocks in the pool if the incoming request size is smaller than an existing block. When the memory requests follow a monotonically increasing pattern, the allocator eventually fails since it cannot release previous memory blocks in the pool.
For example, as shown in Table 5, on a GPU with 8 GB of memory, if at each step we release n GB of memory and allocate n + 1 GB of memory (with n starting from 0), the allocation will fail at the 4th step. The reason is that, each time, after releasing n GB of memory, the released memory chunk is returned to the free list; when requesting n + 1 GB next time, none of the chunks in the pool can be used since they are all smaller than n + 1 GB. Hence, at the 4th step, even though the GPU has 8 GB of physical memory, it cannot allocate a 4 GB memory chunk (see the small simulation sketched below).
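The following toy Python model (illustrative only, not TVM's allocator code) mimics this behavior: freed blocks go back to a free list but are only reused for requests no larger than the block, and pool memory is never returned to the device, so the monotonically growing requests run out of room at step 4 on an 8 GB device.

def naive_pool_alloc(capacity_gb=8, steps=4):
    """Model of the old VM allocator behavior described above."""
    pool, used = [], 0
    for n in range(steps):                 # step n: release n GB, then request n + 1 GB
        if n > 0:
            pool.append(n)                 # the n GB block returns to the free list, but stays allocated
        request = n + 1
        reusable = [b for b in pool if b >= request]
        if reusable:
            pool.remove(min(reusable))     # reuse an existing block that is large enough
        elif used + request <= capacity_gb:
            used += request                # otherwise take a fresh allocation from the device
        else:
            return f"OOM at step {n + 1}: need {request} GB, pool={pool}, used={used} GB"
    return "ok"

print(naive_pool_alloc())                  # -> OOM at step 4: need 4 GB, pool=[1, 2, 3], used=6 GB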
We fixed this issue by simply releasing all cached blocks and re-attempting the allocation if any OOM exception is caught.
5.5.9 Arithmetic Error. Tzer also found that some functions in TVM fail to check the legality of arithmetic operations, such as division by zero. This bug lies in an optimization that simplifies TIR calculations. Specifically, when TVM tries to simplify a division expression whose two operands are of type Ramp and Broadcast, it directly takes the modulo of the two numbers without checking the divisor. This causes the program to crash when the divisor is 0.
5.5.10 Unknown Failure. Among the bugs found by Tzer, there is one whose root cause is still unknown to date. The symptom of the bug is a fleeing exception. For example, when the device runs out of memory, an OOM exception is expected to be thrown up the call stack to form a failure trace in the log. However, we found that sometimes, though thrown, the exception disappears during stack unwinding and allows the erroneous program to continue execution. Since the continued program is incorrect, it eventually fails elsewhere with another exception instead of the root exception. When debugging it with the GNU Debugger [Gatliff 1999] or the LLVM Debugger [Lee 2013] to monitor the exception path, the exception strangely skipped __cxa_catch [CodeSourcery et al. 2021] and simply fled away. Though this strange bug can be reproduced on different operating systems and compilers and has been confirmed by TVM's developers, it remains unfixed at this moment.
CONCLUSION
The evolution of tensor compilers requires automated testing to achieve high maintainability and reliability. We demonstrate that existing fuzzing techniques are not tailored or effective enough to fulfill this mission. To this end, we present Tzer, a practical coverage-guided tensor compiler fuzzer with joint IR-Pass mutation. Unlike traditional compiler fuzzers, Tzer performs joint IR and pass mutation to explore various program states and introduces coverage guidance to navigate the mutation process. Specifically, in addition to general-purpose mutators, Tzer also leverages tailored domain-specific mutators to target the hotspot logics behind tensor compilers. The evaluation shows that Tzer substantially outperforms the state-of-the-art fuzzers including a general-purpose fuzzer (i.e., LibFuzzer), a graph-level DL model fuzzer (i.e., LEMON), and the only domain-specific fuzzer for TVM (i.e., TVMFuzz). As one of the practical contributions of Tzer, to date, we have helped the TVM community find 49 new unique bugs, with 37 confirmed and 25 of them already fixed in the current TVM version. Our effort has been highly recognized by the TVM community, and the leading author of Tzer has been nominated as a community reviewer for TVM. | 2022-02-22T06:47:21.936Z | 2022-02-21T00:00:00.000 | {
"year": 2022,
"sha1": "8a02e3747123a605174beac0d622dff386d2a8db",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "8a02e3747123a605174beac0d622dff386d2a8db",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
235826220 | pes2o/s2orc | v3-fos-license | Blood Markers in Detection of Autism
The current paper gives a brief description of the relationship between biomarkers and autistic spectrum disorder: we discuss autism, study the biomarkers that can be traced by medical tests and their association with the triggering of autistic behaviors, and finally discuss how the regulation and balance of these biomarkers could reduce autistic symptoms. Keywords—Autism, biomarkers, enzymes, omega-3 fatty acids, folic acid, vitamin B1, probiotics, oxidative stress, toxins, heavy metals, mercury, pesticides, maternal infections, mitochondrial dysfunction, gluten, neurodevelopment
Introduction
Autism is a lifelong neurodevelopmental disorder characterized by deficits in oral and verbal communication, as well as the presence of stereotyped behaviors. According to research and multi-year studies, scientists have concluded that biomarkers can indicate whether a child has ASD.
Biomarkers are substances that are detected in the blood and urine. In children with autism, biomarkers such as antioxidant enzymes, omega-3 fatty acids, folic acid, vitamin B1, and probiotics are usually at lower levels. Another biomarker is oxidative stress, which is caused by environmental toxins and toxic substances such as heavy metals, mercury, and pesticides, as well as maternal infections; together with mitochondrial dysfunction, these factors have been implicated in the etiology of autism.
Other studies in children with autism have identified clear phenotypes (gastrointestinal problems, bacterial overgrowth affecting neurodevelopment, as well as excess gluten). Abnormalities in serotonin levels have also been observed.
From all these studies it was concluded that, in addition to heredity, exogenous factors play an important role in the occurrence of autism; if these factors are removed (e.g., oxidative stress, gluten) or if substances such as omega-3 fatty acids, vitamin B1, and folic acid are given, a significant improvement in behavior can be observed.
Genomes and Chromosomes
Jianjun et al. [2] report that a genotype analysis of 333 families, each with two individuals affected by autism, showed linkage to chromosomes 5X, 15, and 19. They also mention that autism disorders result from multiple allelic interactions sensitive to environmental factors.
The methods used to diagnose autism were a questionnaire administered by pediatricians (Diagnostic Interview, ADOS), brain imaging, urine and blood analysis, as well as psychological tests.
Serotonin
Autism is a childhood neurodevelopmental disorder characterized by a weakening of social interactions and communication with the appearance of behavioral stereotypes. According to Subhranghu et al. and other teams [5], [6], [7], [8], many studies have shown an increase in blood serotonin (5-HT) levels in autistic individuals, as well as in other neuropsychiatric disorders. This indicates abnormalities of serotonergic neurotransmission.
The methods used for the above investigations were whole blood tests, urine tests, as well as positron emission tomography.
The result of these studies [5], [6], [7], [8] is that, in addition to genes, elevated blood serotonin levels also play a significant role in the development of autism.
Inflammations
Theocharis C. Theocharides et al. [9] report that studies have shown various immunological abnormalities in children with autism. To find evidence of immune activation in the central nervous system in children with autism, inflammation markers were measured in the cerebrospinal fluid, such as quinolinic acid, neopterin, biopterin, and levels of inflammatory cytokines.
The methods applied were a quinolinic acid test in the cerebrospinal fluid using a modified gas chromatography technique.
The result of all of these measurements was that both quinolinic acid and neopterin, as well as increased cytokines, are sensitive inflammatory markers.
The conclusion was that reduced levels of neopterin and quinolinic acid [9], [10], but increased biopterin, were found in the cerebrospinal fluid.
Fatty Acids
The human brain contains a high concentration of lipids that contribute to the synthesis of neuronal phospholipid (PL) membranes. For its development, the polyunsaturated fatty acids omega-3 and omega-6 are essential.
Stephen et al. [11] report that various studies in children with autism show a lack of these fatty acids.
Moreover, each of the autism scales, such as irritability, social withdrawal, stereotypes, inappropriate speech, and especially hyperactivity, showed improvement with the administration of fatty acids.
The methods used were plasma testing of children with autism, using chromatographic methods to quantitatively measure many classes of lipids and fatty acids.
The conclusion drawn from these investigations [11], [12], [13] was that hyperactivity was significantly reduced by the administration of omega-3 fatty acids.
Neurotrophins and Neuropeptides
Autism spectrum disorder (ASD) is characterized by disorders of social interaction, deficits in verbal and non-verbal communication, as well as stereotyped behaviors.
According to Lisa A. Groen et al. [14], the increase in autism has pushed for an intense effort to identify early biological markers. This research has shown that BDNF (brain-derived neurotrophic factor), which originates in the brain and is found throughout the central nervous system and in the peripheral blood, is involved in the survival of neurons in the developing brain. BDNF is trophic for serotonergic neurons, and elevated serotonin levels are among the most common findings in autism. A high concentration of NT-4 (a neurotrophin) was also observed in people with autism [14], [15].
The methods used were ELISA and Luminex technologies; increased levels of BDNF were found in the plasma of children with autism.
Oxidative Stress
Autism spectrum disorders (ASD) are a group of neurodevelopmental disorders characterized by social deficits, as well as repetitive behaviors.
Both genetic predisposition and environmental toxins and toxic substances, such as heavy metals, mercury, pesticides, and polychlorinated biphenyls, as well as maternal infection, have been implicated in the etiology of autism. The impact of these environmental factors is associated with increased oxidative stress [16], [17].
Oxidative stress causes molecular damage manifested by lipid peroxidation, DNA damage, and protein modification (3-nitrotyrosine, 3-NT). An increase of 3-NT in the cerebellum has been observed in autism, Alzheimer's, and Parkinson's disease.
Oxidative stress can be detected by studying various markers in urine and erythrocytes, such as proteins and polyunsaturated fatty acids (PUFA).
From all of these studies on erythrocyte membranes, it can be concluded that in cases of dyslexia, hyperactivity, and attention deficit, there is a reduced concentration of PUFA (ω3-ω6).
Intestinal Flora
Autism syndrome is a complex neurobiological disorder characterized by impaired communication, as well as repetitive stereotyped behaviors.
Studies in people with autism have shown that these people suffer from gastrointestinal problems. This is mainly due to administration during the first years of life, which eliminate the beneficial bacteria and help the pathogenic microbes to fall in the intestinal flora. These microbes interact with the central nervous system and show ASD behavioral symptoms.
Maria Florentino et al. [18] discovered that children with autism had small amounts of Prevotella (a beneficial, polysaccharide-degrading microbe that carries key genes for the biosynthesis of vitamin B1) and Coprococcus in the gut, together with reduced fish oil supplementation.
The methods used were stool tests, digestion markers, fatty acids, pH, and the presence of blood. Gastrointestinal problems were assessed with a modified questionnaire.
The conclusion from these studies was that children with autism have more gastrointestinal problems due to taking antibiotics and to reduced consumption of fish, fish oil, probiotics, and vitamin B1 [18], [19], [20], [21].
Immune Reactions
Autism spectrum disorders (ASD) are complex childhood developmental disorders characterized by impairment in social interaction and deficits in oral and verbal communication. In various studies in children with ASD, clear phenotypes have been identified (regression and a distinct profile of biochemical and immunological markers, gastrointestinal problems, bacterial overgrowth, increased severity of the syndrome, and the beneficial effect of nutritional blockade of gluten).
Studies [21], [22] observed that children with autism show inflammation in the brain and increased plasma cytokine levels, such as interleukin (IL)-1α, IL-1β, IL-6, and IL-8. Other studies have also shown an increase in NK cells in children with autism.
One method used was ileocolonoscopy to assess mucosal inflammation in children with autism; analysis of lymphocytes from the peripheral blood and the mucosa showed significant alterations in immune regulation.
These studies conclude that children with autism have brain inflammation, as well as gastrointestinal problems due to intestinal permeability. This is why many children on a gluten-free diet have shown an improvement in behavioral stereotypes [23].
Cytokines
Autism is a neurodevelopmental disorder characterized by social deficits, impairment of oral communication, as well as the presence of stereotyped behaviors.
People with ASD have increased neuroinflammation in brain tissues, an imbalance in immunoglobulins (including plasma levels of IgG4 and decreased levels of IgG), and an imbalance in cytokine/chemokine levels.
Cytokines and chemokines are proteins involved in regulating hematopoiesis, inflammation, and the proliferation of immune cells. Besides, they play an important role in normal neurodevelopment.
Qusseny Zerbo et al. [23] and Japanese researchers reported that children with autism and mental retardation have higher BDNF (brain-derived neurotrophic factor) concentrations.
Other studies [24] have shown that there is an association between maternal infections and inflammation during pregnancy and ASD, and that this association could be mediated through a disturbance in the balance of cytokine/chemokine levels.
The methods used in these studies were tests of plasma and amniotic fluid in children with autism.
The conclusion drawn by these researchers was that children with ASD are more likely to have reduced levels of MCP-1 and reduced levels of RANTES.
Biomarkers in General
Autism is a lifelong neurodevelopment disorder characterized by deficits in oral and verbal communication, as well as the presence of stereotyped behaviors.
Andre A. S. Coldani et al. [25] have reported in their studies that biomarkers in autism may reflect genetic and neurological changes or epigenetic processes that may be active at certain times.
Genetic biomarkers: As autism is one of the most well-known physical disorders, research has been conducted on possible biomarkers. In these studies, the heredity of ASD was verified.
Epigenetic biomarkers: Epigenetic changes in ASD occur through mechanisms such as tissue-specific methylation, RNA modulation, and gene silencing. Environmental impacts on gene expression include metabolic processes such as oxidative stress, chronic brain inflammation, diabetes, mitochondrial function, hyperglycemia, methylation, immune function and stress, as well as the maternal and paternal immune systems, environmental toxins, and diet.
Many promising biomarkers include those for mitochondrial function, oxidative stress, and immune function. Some children with ASD show a spectrum of mitochondrial dysfunction, which in up to 80% of patients with ASD may be acquired rather than inherited.
Oxidative stress is detected by studying the antioxidant enzymes in the blood, the peroxidation of lipids, and the oxidation of proteins, which are increased.
Examination of biomarkers may lead to the treatment of metabolic abnormalities such as phenylketonuria, cerebral folate deficiency, and creatine deficiency, as well as an excess of propionic acid.
Conclusion
In conclusion, according to research on autistic people, scientists have concluded that autism, which is a neurodevelopmental disorder, is due to both heredity and environmental factors (oxidative stress, heavy metals, gluten, etc.). Also, in the absence of omega-3 fatty acids, vitamin B1, folic acid, and serotonin, we observe manifestations of autistic behaviors. Finally, studies showed that when the regulation and balance of these markers leads to a new homeostasis, there is an improvement in the behavior of these individuals. | 2021-07-15T13:28:03.296Z | 2021-06-30T00:00:00.000 | {
"year": 2021,
"sha1": "ea4b12ae4daa273008d27f93506cafd91ad494b9",
"oa_license": "CCBY",
"oa_url": "https://online-journals.org/index.php/i-jes/article/download/21283/9483",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5697415202edf5dc402a6b514ad9da018fbeeb6f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
247995034 | pes2o/s2orc | v3-fos-license | The effect of ultrafast heating rate on the elemental distribution between phases in a low carbon steel
Abstract This work focuses on the effect of heating rate, i.e. conventional heating (CH) and ultrafast heating (UFH) rates, on the elemental distribution between phases in a low carbon steel. Microstructural characterization was carried out using Electron Backscatter Diffraction (EBSD) and Atom Probe Tomography (APT) techniques. Nanohardness of individual microconstituents was measured. It is shown that the applied heat treatments result in the formation of multiphase microstructures consisting of a ferritic matrix with embedded martensite and retained austenite. The ferritic matrix of the CH material was fully recrystallized, whereas both recrystallized (coarser) and non-recrystallized (finer) ferritic grains were present in the matrix of the UFH material. APT analysis indirectly confirmed that recrystallized grains after both heat treatments have a lower carbon content, whereas the non-recrystallized grains after UFH have a higher carbon content. This correlates with the nanohardness results, i.e. non-recrystallized grains show higher hardness, while recrystallized grains have a lower hardness. Segregations of C and Mn atoms at the martensite/ferrite interface were observed after both treatments. It is hypothesized that the segregations are formed under negligible partitioning local equilibrium conditions during CH, whereas the solute drag effect results in the formation of interface segregation during UFH.
Introduction
Nowadays, steel manufacturers and the automotive industry face significant pressure to reduce energy consumption and carbon emissions. Rapid manufacturing of advanced high-strength steels (AHSSs) showing enhanced mechanical performance can address both problems. Rapid manufacturing can dramatically reduce the energy costs during steel processing, while applying these AHSSs in cars will reduce the weight of parts, thus decreasing CO2 emissions. Ultrafast heating (UFH) of carbon steels has recently attracted significant attention from the steel research community, as it allows processing of multiphase AHSSs with improved mechanical strength and performance within seconds (Castro Cerda et al., 2016; Lolla, Cola, Narayanan, Alexandrov, & Babu, 2011; Matlock, Kang, De Moor, & Speer, 2020; Petrov, Kestens, Kaluba, & Houbaert, 2003). These steels typically have a complex multiphase microstructure consisting of a ferritic matrix with embedded martensitic grains and a low fraction of retained austenite (Valdes-Tabernero, Celada-Casero, Sabirov, Kumar, & Petrov, 2019; Valdes-Tabernero et al., 2020). There are numerous reports focused on the microstructure evolution during UFH processing of AHSSs (Castro Cerda et al., 2016; Valdes-Tabernero, Celada-Casero, et al., 2019; Valdes-Tabernero et al., 2020). It was shown that the microstructure evolution during the UFH process is a very complex phenomenon involving phase transformations, recovery, recrystallization, grain growth, and diffusion of alloying elements. Their kinetics strongly depend on the heat treatment parameters, where heating rate (Castro Cerda et al., 2016; Lolla et al., 2011; Matlock et al., 2020; Petrov et al., 2003), soaking time (Valdes-Tabernero, Celada-Casero, et al., 2019), and peak temperature (Valdes-Tabernero et al., 2020) play the key role. It was established that the UFH process of carbon steels (at ≥800 °C/s) with a short soaking time (≤1.5 s) results in a finer grain size and, therefore, in increased mechanical strength (Castro Cerda et al., 2016; Valdes-Tabernero, Celada-Casero, et al., 2019). This was related to shifting of the A3 and recrystallization temperatures to higher values, promoting the nucleation rate of recrystallized nuclei and suppressing grain growth during such a rapid treatment (Castro Cerda et al., 2016; Valdes-Tabernero, Celada-Casero, et al., 2019).
It is well known that segregations of alloying elements can affect significantly the microstructure evolution during heat or thermo-mechanical treatment of steels, thus providing an extra tool for microstructural design (Raabe et al., 2013). Xie, Shang, Subramanian, Ma, and Misra (2017) studied the elemental distribution in a Fe-0.08C-0.5Si-2.4Mn-0.5Ni-0.42(Mo + Nb + Ti) (wt.%) steel subjected to a twostep thermal treatment, resulting in the formation of a multiphase microstructure with 10% of film-type retained austenite. The atom probe tomography (APT) analysis at the austenite-ferrite interface revealed an enhanced concentration of C and Mn atoms in austenite, and Mn segregations were detected at the austenite/ferrite interface. Authors also employed kinetic calculations using the simulation software DICTRA to explain the phenomenon (Xie et al., 2017). According to the results, the austenite formation was divided into three main stages. In the first stage, the thickness of the austenite grains grew rapidly (within 0.1 s) under negligible partitioning local equilibrium (NPLE) conditions, as this stage was controlled only by diffusion of C. The latter rapidly diffused into austenite, and its concentration reached an equilibrium content of 0.02 wt.% in ferrite, and austenite was enriched by up to 0.7 wt.% of C. On the other hand, Mn could only segregate at the interface. In stage II, the austenite growth was slower due to the Mn partitioning from the matrix to the austenite, resulting in a concentration gradient in the interior of the austenite phase, whereas C concentration was nearly uniform in both phases. In stage III, Mn diffused along the austenite homogeneously distributing in the grain interior, accompanied by the shrinkage of the austenite grain size. Similar results were reported in Wei, Enomoto, Hadian, Zurob, and Purdy (2013) by Wei and co-authors during intercritical annealing of a quaternary system (Fe-0.1C-3Mn-1.5Si, wt.%) with initial martensite microstructure. In the latter study, the authors showed that the second stage could also be controlled by the diffusion of Si atoms. In Li et al. (2017), a pipeline Fe-0.07C-0.25Si-1.94Mn-0.081Nb-0.28Cr-0.26Mo-0.014Ti (wt.%) steel was analyzed by APT after welding. Partitioning of C and Mn into austenite/martensite from the surrounding ferritic matrix was observed, whereas other alloying elements, such as Cr or Mo, segregated only at the interface. While the distribution of Mn in the interior of the austenite/martensite and ferrite was homogeneous, C showed a large variation in concentration along the grain interior. These experimental works clearly demonstrate that the partitioning of alloying elements plays a critical role in the microstructure evolution during steel heat treatments. However, this phenomenon has never been studied in the steels subjected to heat treatments with ultrafast heating rates. The main objective of the present work is to explore the effect of alloying elements and their segregations (if present) during UFH processing of a low carbon steel on microstructural refinement, volume fraction and nanohardness of the phases.
Material and processing
A low carbon steel Fe-0.19C-1.61Mn-1.06Al-0.50Si (wt.%) or Fe-0.9C-1.6Mn-2Al-1Si (at. %) was chosen for this study. The material was supplied in the form of 1 mm cold rolled (50% reduction ratio) strips with a ferritic-pearlitic microstructure. Flat specimens having a length of 90 mm and a width of 10 mm were machined along the rolling direction. A Gleeble 3800 thermo-mechanical simulator was used to simulate the heat treatments. In order to control the temperature, a K-type thermocouple was spot-welded to the midsection of each specimen. Ultrafast heating (UFH) treatments with a heating rate of 800 °C/s and conventional heating (CH) treatments with a heating rate of 10 °C/s were performed ( Figure 1). In both cases, the samples were heated to the intercritical temperature of 860 °C and rapidly cooled to room temperature at ~160 °C/s ( Figure 1). The soaking time at the peak temperature did not exceed 0.2 s, so the effect of annealing time on the microstructure was suppressed.
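As a rough illustration of these two heat treatments, the sketch below reconstructs idealized time-temperature profiles from the parameters given above (10 vs. 800 °C/s heating to 860 °C, a soak of at most 0.2 s, and ~160 °C/s cooling); the room-temperature start point and the exact soak time are assumptions made only for this example.

```python
# Idealized Gleeble thermal cycles reconstructed from the reported parameters.
# Assumptions: 25 degC start/end temperature and a 0.2 s soak (the stated upper bound).

def thermal_cycle(heating_rate, peak=860.0, start=25.0, soak=0.2, cooling_rate=160.0):
    """Return (time, temperature) breakpoints in seconds and degC."""
    t_heat = (peak - start) / heating_rate               # end of heating stage
    t_soak_end = t_heat + soak                            # end of the isothermal soak
    t_end = t_soak_end + (peak - start) / cooling_rate    # back to room temperature
    return [(0.0, start), (t_heat, peak), (t_soak_end, peak), (t_end, start)]

ch_cycle = thermal_cycle(heating_rate=10.0)    # conventional heating (CH)
ufh_cycle = thermal_cycle(heating_rate=800.0)  # ultrafast heating (UFH)

print("CH breakpoints (s, degC): ", ch_cycle)   # heating stage alone takes ~83.5 s
print("UFH breakpoints (s, degC):", ufh_cycle)  # heating stage alone takes ~1.0 s
```

With these numbers, the heating stage alone takes roughly 84 s for CH versus about 1 s for UFH, which is why recrystallization and diffusion have far less time to proceed during the ultrafast cycle.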
Microstructural characterization
Quantitative microstructural characterization of the heat-treated samples was performed through electron backscatter diffraction (EBSD) analysis. Specimens were ground and polished to a mirror-like surface applying standard metallographic techniques with final polishing using OP-U (colloidal silica). The EBSD studies were performed using an FEI Quanta™ Helios NanoLab 600i equipped with a NordlysNano detector controlled by the AZtec Oxford Instruments Nanoanalysis (version 2.4) software. The data were acquired at an accelerating voltage of 18 kV, a working distance of 8 mm, a tilt angle of 70°, and a step size of 65 nm in a hexagonal scan grid. The orientation data were post-processed using HKL Post-processing Oxford Instruments Nanotechnology (version 5.1©) software and TSL Data analysis version 7.3 software. Grains were defined as a minimum of 4 pixels with a misorientation ≥ 5°. Grain boundaries having a misorientation ≥ 15° were defined as high-angle grain boundaries (HAGBs), whereas low-angle grain boundaries (LAGBs) had a misorientation < 15°. The volume fractions of martensite, retained austenite, and recrystallized/recovered ferritic grains were determined by a two-step partitioning procedure described in (Petrov & Kestens, 2015). In this procedure, grains with high and low grain average image qualities are separated in a first step, allowing to distinguish between transformed (martensite) and untransformed (ferrite) fractions. In the second step, recrystallized and non-recrystallized ferritic grains are separated using the grain orientation spread criterion: Grains with an orientation spread below 1° are defined as the recrystallized grains, while grains with an orientation spread above 1° are defined as the non-recrystallized ones (Cao et al., 2013). Grain size was evaluated via the calculated average grain diameter. The microstructure was observed on the plane perpendicular to the sample transverse direction (the RD-ND plane).
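To make the two-step partitioning procedure more concrete, the following minimal sketch (not the authors' actual post-processing scripts) classifies grains from a hypothetical per-grain table using the grain orientation spread criterion quoted above; the image-quality threshold is an assumed placeholder, since in practice it is chosen from each map's image-quality distribution rather than fixed a priori.

```python
# Minimal sketch of the two-step EBSD partitioning described above.
# Step 1: low grain average image quality -> transformed (martensite); high -> ferrite.
# Step 2: ferrite with grain orientation spread (GOS) < 1 deg -> recrystallized,
#         otherwise -> non-recrystallized (recovered).

def classify_grain(image_quality, orientation_spread, iq_threshold=0.5):
    if image_quality < iq_threshold:      # iq_threshold is a placeholder value
        return "martensite"
    if orientation_spread < 1.0:          # GOS criterion from the text
        return "recrystallized ferrite"
    return "non-recrystallized ferrite"

# Hypothetical grains: (normalized grain average image quality, GOS in degrees)
grains = [(0.30, 2.5), (0.80, 0.4), (0.75, 1.8), (0.90, 0.6)]

for iq, gos in grains:
    print(f"IQ={iq:.2f}, GOS={gos:.1f} deg -> {classify_grain(iq, gos)}")
```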
The specimens for atom probe tomography (APT) analysis of ferrite/ martensite interfaces and ferrite and martensite matrixes were prepared by the focused ion beam (FIB) in situ lift-out procedure using an FEI Quanta 3D FEG-SEM (Miller & Forbes, 2014) (Figure 2). The APT study was performed on a CAMECA LEAP-4000HR in the voltage-pulsed mode with 20% pulse fraction, and 200 kHz pulse rate at -213 °C. IVAS 3.6.12 software was used to reconstruct the APT data.
Nanomechanical characterization
A Hysitron TI950 Triboindenter with a Berkovich tip was employed for nanoindentation testing on the individual microconstituents. First, square areas having a size of ~10 × 10 µm2 were analyzed by EBSD, and individual microconstituents were determined. At least ten areas were tested for each material's condition. Nanoindentation tests were carried out in displacement control mode at a constant indentation strain rate, defined from the penetration rate of the indenter. At least 20 indents were performed on each phase at an imposed maximum depth of 150 nm. The nanohardness was determined from the analysis of the load-displacement curves using the Oliver and Pharr method (Oliver & Pharr, 1992). Figure 3 illustrates the microstructure of the steel after CH (a,c) and UFH (b,d) treatments. It is seen that both heat treatments lead to a complex multiphase microstructure consisting of a ferritic matrix with embedded martensite and retained austenite. Table 1 outlines the average grain size and the volume fraction of each microconstituent. The fraction of martensite decreases with increasing heating rate from 10.6% to 6.9%. The volume fraction of retained austenite is low after both treatments (2.2-3.1%). The UFH treatment results in a finer grain size for both ferrite and martensite. The ferrite in the CH samples has an equiaxed recrystallized morphology with a homogeneous distribution of ferrite grain size from 4 to 8 µm (Figure 3). In contrast, ferrite in the UFH samples presents two morphologies: recrystallized grains and non-recrystallized (i.e. recovered) grains having a wide grain size distribution (Figure 3e). The volume fraction of recrystallized ferrite is nearly 50%. The effect of heating rate on microstructure evolution of the present steel was thoroughly discussed in our recent article (Valdes-Tabernero, Celada-Casero, et al., 2019). The APT samples were taken across ferrite/martensite interfaces and from the ferrite and martensite matrices. The phases were identified based on the content of C and Mn obtained from the APT atom maps and matrix composition analyses. The distribution of solutes across different interfaces was analyzed by 1-D concentration profiles for both CH and UFH conditions.
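Returning to the nanoindentation analysis described at the beginning of this section, the sketch below outlines how a hardness value can be extracted from a single load-displacement curve with the Oliver and Pharr approach; the numerical inputs and the ideal Berkovich area function are illustrative assumptions rather than data from this study.

```python
# Minimal sketch of the Oliver-Pharr hardness extraction (Oliver & Pharr, 1992).
# Illustrative numbers only; a real analysis fits the unloading curve and uses a
# calibrated tip area function instead of the ideal Berkovich geometry used here.

def oliver_pharr_hardness(p_max_uN, h_max_nm, stiffness_uN_per_nm, eps=0.75):
    """Hardness in GPa from peak load, peak depth and unloading stiffness."""
    h_c = h_max_nm - eps * p_max_uN / stiffness_uN_per_nm   # contact depth, nm
    area_nm2 = 24.5 * h_c**2                                 # ideal Berkovich projected area
    # 1 uN/nm^2 = 1e-6 N / 1e-18 m^2 = 1e12 Pa = 1000 GPa
    return 1000.0 * p_max_uN / area_nm2

# Hypothetical quantities at the imposed 150 nm maximum depth
print(f"H = {oliver_pharr_hardness(p_max_uN=1500.0, h_max_nm=150.0, stiffness_uN_per_nm=50.0):.2f} GPa")
```

For these made-up inputs the routine returns about 3.8 GPa, i.e. the same order of magnitude as the ferrite and martensite nanohardness values discussed below.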
Results
Three phases were identified after the CH treatment: ferrite, bainitic ferrite, and martensite/RA (it is not possible to discriminate RA from martensite by APT since RA transforms to martensite at the temperature of probing, -213 °C) (Table 2; Figures 4 and 5). It should be noted that bainitic ferrite could not be identified by SEM/EBSD in the earlier works due to its very low volume fraction and nanosize. Ferrite has the equilibrium C content of 0.05 ± 0.01 at. % and a Mn content of 1.4 at. % (Table 2). Martensite/RA shows an average C content of 3.3 ± 0.3 at. % and is enriched in Mn (3 ± 0.5 at. %), which was expected since the latter is an austenite stabilizer (Table 2). The APT also indicates the presence of bainitic ferrite with an average C content of 0.1 at. % located between ferrite and martensite/RA (Table 2; Figure 4). Its formation could be explained by the slow heating rate during the CH treatment allowing a long distance for C diffusion from the austenite/ferrite interface into austenite. This led to the formation of remaining austenite with different carbon contents. The remaining austenite with higher carbon content transformed to martensite during final quenching, while the remaining austenite with lower carbon content decomposed with the formation of bainitic ferrite with a carbon content of 0.1 ± 0.02 at. % at a temperature higher than the martensite transformation temperature during final cooling. Moreover, Mn does not segregate at bainitic ferrite/martensite interfaces, whereas C shows a prominent spike of 5 at. % at the interface (Figure 4). The C maps of martensite confirmed two types of C distribution: (i) homogeneous (Figure 5a) and (ii) inhomogeneous (Figure 4a), where C decorated microstructural features that are believed to be dislocations. Segregation of C at the dislocations observed in some martensite grains could occur in austenite grains near ferrite grains, which were formed at the beginning of austenite to ferrite transformation; this enables carbon to diffuse into the remaining austenite grain interior and segregate at the dislocations. The concentration profile across martensite/ferrite interfaces after the CH treatment revealed a distinct 5.5 at. % Mn peak at the interface and a C peak of 4.4 at. % inside the martensite grain at a distance of 15 nm from the interface (Figure 5).
APT analysis of the sample after the UFH treatment confirms the existence of two types of ferrite grains with higher (~0.08 at. %) and lower (~0.02 at. %) C contents (Table 2). The ferrite grains with lower C content may correspond to the recrystallized ferrite grains detected by EBSD analysis, whereas ferrite grains with higher C content can be linked with the non-recrystallized ferrite grains (Figure 3). Martensite after UFH has a slightly higher C content of 3.8 at. % compared to that after CH, most likely due to the absence of C segregations at dislocations. Nearest-neighbour APT analysis of the C distribution in martensite islands demonstrated that the distribution of C does not deviate from the corresponding random distribution, which indicates the absence of carbon clusters in martensite (Figure 6c).
The composition profiles across ferrite/martensite interfaces of the UFH sample showed the distinct peaks of Mn and C at the interface (Figure 6b). The spikes have a sharper shape compared to the peaks after CH treatments. Moreover, the Mn segregation at the ferrite/martensite interface after CH treatment (5.5 ± 0.1 at. %) was slightly higher than after UFH (5 ± 0.2 at. %), whereas the amount of C atoms segregated at the interface after CH was lower (4.4 ± 0.2 at. %) than after UFH (5 ± 0.2 at. %) ( Table 2; Figures 4-6). The segregation of other alloying elements, such as Si and Al, was negligible.
The nanohardness values for the recrystallized and non-recrystallized ferrite grains after CH and UFH heat treatments are summarized in Table 3. The nanohardness of recrystallized ferrite was similar after both treatments, whereas the nanohardness of non-recrystallized ferrite was higher than the nanohardness of recrystallized ferrite. This can be associated with the higher C content measured in recovered ferrite, as well as its enhanced dislocation density (Valdes-Tabernero, Celada-Casero, et al., 2019).
The average nanohardness value of martensite formed after CH treatment was higher than after UFH treatment, most likely due to the segregation of C atoms at the dislocations present in the martensite of the CH treated samples leading to the hardening effect of this martensite.
Discussion
From these experimental results, it is clearly seen that the heating rate has a significant effect on the microstructure (on micro-and atomic levels) and properties of the individual microconstituents formed in the studied alloy. During the CH treatment, the slow heating controlled the phase transformations, resulting in a more homogenous C distribution over microconstituents. The ferrite grains after the CH treatment were fully recrystallized, and the C content of ferrite in these samples was homogenous and at the equilibrium composition (0.05 ± 0.01 at. %) ( Table 2). All ferrite/martensite interfaces analysed by APT in the CH samples showed segregations of C and Mn. Since at slow heating, the ferrite to austenite transformation is fully accomplished by diffusion, the segregation at the interfaces could be explained by the mode of phase equilibria at the migrating interface, such as negligible partitioning local equilibrium (NPLE) and para-equilibrium (PE) (Agren, Brechet, Hutchinson, Philibert, & Purdy, 2016). The PE model represents a limiting case, where the substitutional atoms are completely immobile (unlike interstitials), and the compositional ratio of substitutional solute and solvent elements remains constant across the interface (Agren et al., 2016). In the NPLE model, even though long-range diffusion is not possible, the substitutional atoms can still move in the vicinity of the interface and build up a solute spike in front of the interface as the interface passes through (Agren et al., 2016). The NPLE condition could occur at the slow interfacial velocity, which appeared to happen during CH treatment, when the transformation was controlled by slow heating, and the diffusion and segregation of C (up to 4.4 at. %) and Mn (up to 5.5 at. %) atoms were enabled by the slow mobility of the interface during phase transformation (Table 2; Figures 4 and 5). Moreover, slow heating stimulated the diffusion of carbon in the remaining austenite and its segregation at the dislocations ( Figure 4). The SEM-EBSD analysis indicates the formation of two types of ferrite grains after UFH treatment: recrystallized with the average nano-hardness of 2.6 GPa and non-recrystallized with the average nano-hardness of 3.2 GPa (Tables 1 and 3). It is well correlated with the C content of ferrite grains analysed by APT, i.e. some grains have a C content of 0.08 ± 0.01 at. % (non-recrystallized), and the others of 0.02 ± 0.01 at. % (recrystallized) ( Table 2). The difference in the ferrite grain morphologies could be associated with inhomogeneous redistribution of carbon in austenite due to the UFH treatment of ferrite/pearlite initial microstructure. Interestingly, the recrystallized grains formed after CH showed a similar range of grain size, C content (0.05 ± 0.01 at. %) and nanohardness level (2.7 GPa) (Tables 1-3). APT analyses of the interfaces formed after UFH treatment also showed the segregation of C and Mn at the ferrite/martensite interfaces. However, the C peak was more prominent (5 at. %), and the segregation of Mn was less prominent (5 at. %) compared to the interfaces formed after CH treatment (Table 2). These observations cannot be explained based on the equilibrium state at the interface (PE, NPLE) during the UFH process, since (unlike in the CH case) the diffusion of Mn is suppressed. Instead, we can use the concept of the solute drag (Hillert, 1999). 
The solute drag model is based on the dissipation of the Gibbs free energy by substitutional solute atoms that move by short-range diffusion across the interface and the possible interaction between substitutional atoms and the migrating interface (Clark, Janik, Lan, & Sridhar, 2017;Hillert, 1999). Potentially the solute drag effect can explain the segregation of Mn, which can alter the interface mobility.
Since the phase transformations occurred at a very high rate and Mn was inhomogeneously distributed between the parent phases (ferrite and pearlite), different amounts of Mn can segregate at the different interfaces, changing their mobility, which leads to the formation of two types of ferrite grains (i.e. recrystallized and non-recrystallized) with different size, carbon content and nanohardness (Tables 1-3). The amounts of elements (C, Mn) segregated at the interfaces after CH and UFH mentioned above indirectly confirm this hypothesis. It should also be noted that C did not show any segregation at dislocations or C clustering in martensite, which can occur due to pipe diffusion along the dislocation core. This can also be correlated with the inhibition of carbon diffusion during the UFH process.
Conclusions
The effect of heating rate (ultrafast (UFH) vs. conventional (CH)) on the microstructure of a Fe-0.19C-1.61Mn-1.06Al-0.50Si (wt.%) steel was studied on the micro- and atomic scales and was correlated with the nanohardness of individual microconstituents. The following conclusions can be drawn based on the analysis of the obtained experimental results.
1. A complex multiphase microstructure consisting of a ferritic matrix with embedded martensite and retained austenite grains is formed after both CH and UFH treatments. The ferritic matrix of the CH treated material is fully recrystallized, whereas both recrystallized and non-recrystallized ferrite grains are observed in the UFH treated material. Additionally, APT reveals nanosized areas of bainitic ferrite at the martensite phase boundaries in the CH material. The formation of the latter is related to the slow heating rate allowing a long distance for C diffusion from the austenite/ferrite interface into austenite. The UFH process results in a finer microstructure compared to the CH treatment.
2. There are segregations of C and Mn atoms at the martensite/ferrite interface after both CH and UFH treatments. The Mn atoms do not segregate at martensite/bainitic ferrite interfaces, whereas C shows a prominent spike. These segregations are formed under the NPLE condition during the CH treatment, whereas the solute drag effect controls their formation during the UFH treatment.
3. The higher nanohardness of the non-recrystallized ferritic grains after UFH compared to the recrystallized ferritic grains for both UFH and CH can be correlated with their higher C content and, in turn, higher dislocation density. | 2022-04-07T15:18:02.082Z | 2022-04-05T00:00:00.000 | {
"year": 2022,
"sha1": "4727acd1facc76ab538740f843437cb0576eae59",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/26889277.2022.2060761?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "23fdc45d83f4660c98054faf461a8f0eb6d19bad",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
154733936 | pes2o/s2orc | v3-fos-license | WORK DURING VACATION : NOT SO BAD AFTER ALL
Tourism Analysis, Vol. 19, pp. 759–767 (2014). DOI: http://dx.doi.org/10.3727/108354214X14146846679565
Introduction
Vacations constitute a break from everyday life. While there used to be a very clear distinction between everyday life and vacation (MacCannell, 1976), this is no longer the case in contemporary society where the boundaries between work and leisure seem to fade (Bauman, 2000; Blackshaw, 2010). Although the differences between everyday life and vacation have diminished (McCabe, 2002), both domains still have different characteristics. Vacations often allow for a much needed break from work. Recent research revealed that tourists feel better during vacation than they do in their everyday lives (Nawijn, 2011a). Positive emotions are of particular importance (Mitas, Yarnal, Adams, […]) for effective psychological functioning and life satisfaction (Sanjuán, 2011). The latter also holds true for emotions experienced through vacationing (Sirgy et al., 2011). Positive emotions are of particular importance as they have the power to "broaden and build" (Fredrickson, 1998, 2001, 2004). In other words, positive emotions have potential long-term consequences by broadening the scope of thought-action repertoires and attention, through which they positively affect overall quality of life (Fredrickson & Branigan, 2005).
According to Maslow (1970), quality of life depends on the fulfillment of needs. Maslow distinguished two basic types of human needs: deficiency needs and growth needs. Empirical testing supported Maslow's distinction between deficiency and growth needs (see Sheldon, Elliot, Kim, & Kasser, 2001). Deficiency needs relate to "survival." Individuals should feel safe and they need good nutrition to survive. In developed countries those basic needs are generally met. Individuals in such affluent societies therefore aim to "self-actualize." Selfactualization is a term coined by Goldstein (1939) and concerns the need for individuals to be the best they can be-using their full potential (Rogers, 1963). Maslow theorized mostly about self-actualization and he viewed self-actualizing people as autonomous, having meaningful personal relationships and peak experiences (Maslow, 1968). Maslow's interpretation of self-actualization was mostly based on his interpretation of a small sample of self-actualizing people, which he had selected himself.
Although Maslow's (1970) distinction between deficiency needs and growth needs was empirically supported, a more current view on selfactualization-also known as self-determination theory-withstood empirical testing best (Sheldon et al., 2001). Self-determination theory suggests that three psychological needs motivate the self to engage in behavior that is beneficial to personal growth. Three innate psychological needs are distinguished: the needs for competence, relatedness, and autonomy (Deci & Ryan, 1985Ryan & Deci, 2000b). These three needs are considered universal innate psychological needs (Deci & Ryan, 2002). Self-determination theory has received much empirical support in several areas of life, such as parenting, education, and work (Vallerand, Pelletier, & Koestner, 2008). calls). Still it is unclear how this affects tourists' quality of life. The popular media would like us to believe that work during vacation is bad for one's well-being (e.g., McMahan, 2013;Richtel, 2012). Theoretically, the effort-recovery model (Meijman & Mulder, 1998) also suggests that workload during vacation would hamper recovery mechanisms. In contrast, need theory (Deci & Ryan, 2008) would suggest the opposite effect to take place, namely that when human needs-such as work needsare fulfilled, this would be beneficial to a person's well-being. The limited scientific evidence that is available shows that work during vacation has a negative effect on posttrip benefits on health and well-being (De Bloom et al., 2012).
However, nothing is known of the effect of work on quality of life during vacation. This information is important as work during vacation may interfere with the restorative properties of a vacation. Furthermore, knowledge of the role of work during vacation is vital for the tourism industry as their product offer may have to be adjusted to facilitate work opportunities. Finally, insight into how work influences emotions is important as emotions have potential long-term effects (Fredrickson, 2001) and shape overall quality of life and life satisfaction (cf. Sanjuán, 2011;Sirgy, Kruger, Lee, & Yu, 2011).
Quality of Life
Quality of life is understood in different ways. Veenhoven (2000) distinguishes between chances and results that life offers, and between outer and inner qualities of a person. In this current study we approach quality of life as an inner quality of the person. Specifically, we focus on fulfillment of needs and intensity of felt emotions. Emotions are regarded as a reflection of the extent to which certain needs are met (Veenhoven, 2009).
Sample
Field work took place in the Netherlands from late February 2012 until early May 2012. There were 374 international tourists who participated in a street survey. This convenience sample of 374 individuals was reached through randomly approaching international leisure travelers at 10 different tourist spots throughout the Netherlands (e.g., Amsterdam, Volendam, Kinderdijk, Keukenhof). These spots were based on a list of the most popular tourist locations of 2010 as published by the Dutch Bureau of Statistics (Centraal Bureau voor de Statistiek, 2011). Most respondents (n = 353) were traveling with a travel party. Most of them traveled in pairs. In some cases, the travel companion also filled out a survey. This caused some dependency in the withingroup data. Therefore, the analyses focus only on between-group differences. The net sample was made up of 53 nationalities. Most respondents were married (48%) and between 23 and 40 years of age. The majority of the sample was female (56%).
Variables
The questionnaire contained 17 emotions. These emotions were taken from the modified Differential Emotions Scale (mDES), which was initially developed by Fredrickson, Tugade, Waugh, and Larkin (2003), adjusted for general use by Cohn, Fredrickson, Brown, Mikels, and Conway (2009) and applied to vacations by Nawijn, Mitas, Lin,
Needs and Vacations
Vacations contribute to the quality of life of people (e.g., Dolnicar, Yanamandram, & Cliff, 2011;Neal & Sirgy, 2004) and are generally considered an important means to foster social relationships (McCabe, 2002). Empirical evidence supports this notion that the need for relatedness is fulfilled during vacation. For example, Nawijn's (2011a) study showed that a person's travel companions are an important determinant of happiness in terms of affect balance (Veenhoven, 1984) during vacation. Furthermore, De Bloom et al. (2012) found that there is high-quality contact between partners on vacation; they talk extensively and positively. Empirical evidence also supports the perception that there is fulfillment of tourists' need for autonomy during vacation. For instance, Nawijn and Peeters (2010) found that freedom in destination choice correlates positively with tourists' life satisfaction. Additionally, McCabe (2002) argued that vacations provide an excellent opportunity for improving social relations. Recently, McCabe and Johnson (2013) found empirical support for this assumption by concluding that social well-being significantly improves for low-income individuals through vacationing. Empirical studies on the need for personal growth through competence in relation to vacations and quality of life are limited in scope. There is some evidence for adventure tourists, such as white-water rafter enthusiasts (Taylor, 1988) and mountaineers (Pomfrett, 2004). These studies suggest that the more competent a tourist is, the more enjoyment and contentment they derive from participating in such activities. A lack of perceived competence, on the other hand, can serve as a barrier to participate in leisure travel-for instance, for people with disabilities (Prayag, Hosany, & Odeh, 2013).
Problem Analysis
This study assesses the extent to which work during vacation potentially affects tourists' quality of life-in terms of intensity of needs fulfillment and felt emotions during vacation. It aims to (1)
An "other" option was also provided, but was hardly used and therefore not reported in the results section. For each of the two main questions, respondents were given the opportunity to further explain their wishes in two separate open response boxes.
Analysis
Two groups were distinguished for analysis: (1) either the respondent and/or his/her travel companion(s) spent time on work during vacation or (2) neither the respondent nor his/her travel companion(s) spent time on work during vacation. The first group contained 174 respondents and the second group 200. We purposely chose to place all vacationers that dealt with work issues into one group. Respondents who did not work could be affected by their travel companion(s) having to work. Adjusting to work demands of others likely reduces fulfillment of needs (e.g., autonomy and relatedness) and consequently affects emotional experience during vacation. Thus, both/all tourists in a travel party would likely suffer from the work demands of the working individual. We chose to divide the two groups based on the distinction of whether one individual in a travel party worked or none worked. We favored this criterion over another criterion, such as time spent on work. The reason for this is that vacationers may have wanted to work more, but could not. Additionally, vacationers may work only for a limited time per day, but they could have to contemplate about work regularly during the day.
Group 1 contained individuals that belong to either one of these subgroups: (1) both the respondent and his/her travel companion(s) spent time on work during vacation, (2) the respondent spent time on work during vacation, but the travel companion(s) did not, or (3) the respondent did not spend time on work during vacation, but the travel companion(s) did. The number of respondents in these subgroups is 49, 65, and 60 respectively. Analyses for RQ1 and RQ2 consisted of analyzing group differences through independent samples t tests. As mentioned earlier in this section, due to some dependency in the within-group data, the analyses focus on between-group differences only. RQ3 was answered through basic univariate descriptives of the survey questions that dealt with and Kerstetter (2013) and Lin, Kerstetter, Nawijn, and Mitas (2014). The original mDES contains 19 emotion items. In terms of valence, the emotions of the mDES are positive, negative, or neutral. The two neutrally valenced emotions (i.e., surprise and compassion) were omitted from the mDES for this study. These neutral emotions can be interpreted positively or negatively by respondents, which prohibits meaningful interpretation (cf. Lin et al., 2014). Individuals were asked to rate the intensity of each felt emotion using a 5-point Likert-type scale (i.e., 1 = very slight or not at all, 2 = little, 3 = moderate, 4 = quite a bit, 5 = extreme). The mDES, as we used it, contained nine positively valenced emotions (joy, gratitude, amusement, contentment, pride, awe, love, hope, interest) and eight negatively valenced (anger, sadness, fear, shame, contempt, embarrassment, guilt, disgust) emotions.
For the purpose of answering RQ2, single-item measurements of the three main components of self-determination theory were included in the survey. Respondents were asked to provide an answer to the general question "To what extent were the following needs satisfied during this vacation?" with the items autonomy ("autonomy"), relatedness ("connectedness with others"), and competence ("personal growth"). We purposely avoided the word "competence" due to its ambiguous meaning (Taylor, 1988). We preferred to use the term "personal growth" instead. Respondents could provide answers to each item via a 5-point Likert scale with anchors 1 = not at all and 5 = very much.
In order to answer RQ3, respondents who worked during vacation were asked how much time they spent on specific work tasks during vacations, if applicable. Furthermore, they were asked to indicate whether they considered the balance between work and leisure time during vacation to be good, if applicable. Additionally, two specific questions were included that probed respondents to specify whether accommodation facilities and transport hubs (i.e., airports, bus stations, train stations) might offer them better facilities to accommodate their potential work needs. Only respondents in the group of workers answered this set of questions. Respondents were asked specifically for their needs and wants in terms of making phone calls, email access, video conferencing, phone conferencing, and networking. somewhat lower (39%). Video conferencing and phone conferencing were probably seldom used as most working respondents ticked the boxes for "not applicable" or failed to answer the question completely. Inspection of the open response boxes added to this observation. Respondents indicated that they were "mostly left alone by work other than few emails" and they were "very happy about that." Similarly, others commented that they tried "to make it a point to not do too much work when enjoying holidays." The questions dealing with satisfaction of work facilities (RQ3) generated low response rates. The number of respondents for the five main facility questions, excluding the "other" option, varied between 25 and 93. Respondents indicated that work needs and through additional information supplied by respondents via the open end response boxes that were part of the questionnaire.
Results
To answer RQ1, 17 independent samples t tests were performed. We tested whether felt emotions were significantly different between working and nonworking vacationers. An overview of the t tests for all emotions is presented in Table 1. The findings indicate that none of the emotions were felt differently between working and nonworking vacationers.
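As a minimal sketch of the comparison reported here, the snippet below runs an independent samples t test on made-up 5-point intensity ratings for a single mDES item; the study repeated this test for each of the 17 emotions.

```python
# Sketch of the between-group comparison described above: an independent samples
# t test on one mDES emotion item (1-5 felt intensity), workers vs. nonworkers.
# The ratings below are invented for illustration only.
from scipy import stats

workers = [4, 3, 5, 4, 3, 4, 2, 5, 3, 4]     # felt intensity of "joy", working group
nonworkers = [4, 4, 5, 3, 3, 4, 3, 5, 4, 4]  # felt intensity of "joy", nonworking group

t_stat, p_value = stats.ttest_ind(workers, nonworkers, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # repeated for all 17 emotions in the study
```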
We further tested whether workers and nonworkers differed in terms of need fulfillment during vacation (RQ2). The findings are presented in Table 2. The findings in Table 2 show that workers and nonworkers did not differ significantly in terms of needs fulfillment during vacation. Practically all workers (97%) indicated that they were satisfied with the perceived balance between work and leisure time on vacation. The vast majority of respondents, or their travel companion, did not spend more than 15 minutes per day on workrelated tasks. On a daily basis, 63% of workers spent less than 15 minutes on phone calls. Use of email was . Both works suggest that in contemporary society people are constantly choosing between activities to participate in. The finding that most workers on vacation are satisfied with the balance between work and leisure time implies that most people in present day society are capable of successfully combining work with leisure time, even while on vacation.
Despite the overall positive experience of work during vacation, the working respondents were less satisfied with the opportunity to take part in video conferencing or phone conferencing. An additional point of concern is the cost issue of being able to work during vacation. The open-ended response comments indicated that most dissatisfaction originated from the costs that came along with Internet access. Thus, although workers experience similar levels of emotional intensity compared to nonworkers, our study suggests that the setting to fulfill needs for competence through work (Ryan & Deci, 2000b) may not be sufficiently provided on vacation.
Implications
The findings of the study imply that current customer satisfaction research performed by the tourism industry (e.g., airlines, hotels) may benefit from including work aspects. The respondents in this study were dissatisfied with opportunities to work during vacation. Although actual time spent on work was limited, companies could include questions on work facilities and opportunities in their surveys.
The tourism industry could provide opportunities for tourists to self-actualize by fulfilling the need for competence through offering work facilities. While earlier studies found that vacations can serve as relatedness-supportive and autonomysupportive social environments (cf. Cini, Kruger, & Ellis, 2013;De Bloom et al., 2012;Nawijn, 2011a), our study suggests that vacations could also be a more competence-supportive environment. For instance, additional cost for Wi-Fi access is viewed negatively by working vacationers. Furthermore, opportunities to participate in video conferencing or phone conferencing are limited, both at accommodation facilities and transportation hubs. The tourism industry could better cater to those needs accommodation facilities generally satisfied any work needs they had in terms of making work calls (71%) and email (94%). Some complaints were made about the costs: "Some hotels don't provide free and convenient Wi-Fi" and "Too expensive email." Sixty-seven percent were satisfied with networking opportunities. Few respondents were content with the opportunities to participate in video conferencing (30%) or phone conferencing (35%). Most respondents were also satisfied with possibilities provided by transportation hubs to make phone calls (67%) and to check emails (69%), although the latter percentage was not as high as it was for accommodations. Respondents were less pleased with opportunities provided by transport hubs to use phone conferencing (33%), video conferencing (24%), or to network (48%). One respondent explained that transport hubs often lack quiet working spaces. Again, most suggestions for improvement addressed cost issues: "No free wireless." Many of the working respondents did not provide an answer to the work facility questions for transport hubs, which may indicate they simply do not have the need to work at these hubs.
Discussion
Despite public belief (e.g., McMahan, 2013;Richtel, 2012) and theoretical predictions (Meijman & Mulder, 1998), the findings of this study indicate that work during vacation is not detrimental to vacationers' well-being. Quality of life-in terms of emotional experience during vacation-is similar for those who work during vacation versus those who do not. Similarly, tourists who work and those who do not work during vacation are equally capable of fulfilling their needs. Thus, although work during vacation may reduce posttrip benefits in terms of health and well-being (De Bloom et al., 2012) and life satisfaction (Sirgy et al., 2011), our study finds that emotional experience during vacation is not affected. Additionally, our findings indicate that almost all workers are satisfied with the perceived balance between work and leisure time on vacation. Most of the working tourists in our sample spent only a limited amount of time per vacation day on work activities. This finding supports the views of the multiple choice society (Schwartz, 2004) WORK DURING VACATION 765 motivated behavior, which is conducive to quality of life as it is fully self-determined (Ryan & Deci, 2000a, 2000b. Our study did not distinguish between types of extrinsic and intrinsic motivated behavior. Future research should assess whether work during vacation represents more extrinsically or intrinsically motivated behavior and how these motives affect vacationers' quality of life.
by offering quiet work spaces for video conferencing or phone conferencing. Additionally, more free Wi-Fi spots throughout transportation hubs and in hotel rooms would greatly enhance the ability of tourists to briefly work during vacation via their smartphone or tablet.
Suggestions for Future Research
Our study showed that working and nonworking tourists do not differ significantly in terms of felt emotional intensity and needs fulfillment during vacation. Unfortunately, due to some dependency in the within-group data, we were unable to test whether need fulfillment is associated with emotional response, as is argued by Veenhoven (2009). Future research should assess the relation between needs and emotions in a vacation context.
The working vacationers in our sample were a mix of subgroups consisting of travel parties who all work and travel parties where one person works. Future studies could focus on more specific groups to test whether these groups differ in emotional response and needs fulfillment. Also, the majority of the sample of working vacationers in this study did not spend more than 15 minutes per day on work-related tasks. This makes generalization of the findings to specific types of workers difficult. For instance, workaholics may be more tempted to work during vacation, possibly affecting their travel companion's well-being during vacation. Also, our group of workers is rather varied. Thus, additional research is required for specific groups of workers.
Furthermore, levels of competence should be assessed in future studies at different settings within a vacation experience. Earlier research has shown that competence levels may differ between settings (Prebensen & Foss, 2011), which was not specifically accounted for in this study.
Finally, self-determination theory differentiates between two main types of motivations: extrinsic and intrinsic (Ryan & Deci, 2000a). Represented on a continuum, the extremes signify regulatory styles of behavior. Extrinsic motivation is synonymous for amotivation. In other words, people who are strongly extrinsically motivated engage in behavior for external reasons. Such behavior is not expected to improve quality of life. The other extreme on the continuum represents intrinsically | 2019-05-16T13:04:38.291Z | 2014-12-19T00:00:00.000 | {
"year": 2014,
"sha1": "b5643a5af7506ab30d48454f11267f40660c264b",
"oa_license": "CCBYNC",
"oa_url": "https://pure.buas.nl/ws/files/315435/Nawijn_work_during_vacation_not_so_bad_after_all.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "6162f2ecda640e96e80a5906b58e05a9ea4eea3b",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
263265154 | pes2o/s2orc | v3-fos-license | Unsolicited Patient Complaints Following the 21st Century Cures Act Information-Blocking Rule
Key Points
Question: Does the 21st Century Cures Act information-blocking rule (IBR) affect unsolicited patient complaints (UPCs)?
Findings: In this cohort study conducted at a single institution, 8495 UPCs were identified in an interrupted time-series analysis. There was no difference in the rate of UPCs after IBR implementation in January 2021; however, complaint themes related to the policy were identified in the qualitative review of UPCs.
Meaning: These findings suggest that opportunities remain for clinicians to prepare patients for the possibility that test and procedure results may be available to them before clinicians are able to review them and respond.
Introduction
The 21st Century Cures Act (hereinafter, the Cures Act), which became law on December 13, 2016, and entered its first compliance phase on April 5, 2021, was designed to increase patient access to their health information and to facilitate innovation in the pharmaceutical and medical device arena. 1,2 The Cures Act included the information-blocking rule (IBR), which was intended to remove barriers that limit timely patient access to their own health information. The IBR included provisions that required physicians and health care organizations to provide access to all clinical information to patients upon their request and without delay. 3 Under the new rule, patients can access all designated electronic health information, including consultation notes, history and physical examinations, radiology reports, laboratory and pathology reports, discharge summaries, and progress and procedure notes. 3 [5][6][7][8] For example, a patient may experience distress if they read a new anatomic pathology report revealing incident cancer before their health care professional is able to explain the results. Patient distress can lead to increased numbers of unsolicited patient complaints (UPCs). 9 [11][12] In addition, UPCs can be reliably coded to identify types of complaints related to care and treatment, unclear communication, concern for patients and families, accessibility, safety of the environment, and billing concerns. 13 We are not aware of any study to date that has evaluated the association of IBR implementation with UPCs overall and by type.
We aimed to evaluate the number of UPCs about physicians and the content of these complaints before and after implementation of the Cures Act IBR at a large academic medical center.
We hypothesized that the numbers of UPCs would increase as a result of IBR implementation. Our secondary hypothesis was that certain types of complaints would reflect patient concerns related to confusion, anxiety, and clinician availability to answer patient questions related to release of information.
Design and Setting
We conducted this retrospective cohort study with interrupted time-series analysis (1) to measure rates of UPCs per patient visit over time and (2) to determine whether there was a change coincident with implementation of the Cures Act IBR. This study was performed at Vanderbilt University Medical Center (VUMC), a large US academic medical center with 1709 beds. On January 20, 2021, the VUMC patient portal, My Health at Vanderbilt (MHAV), was modified to provide immediate access to all designated electronic health information, in advance of the IBR April 2021 compliance date. Prior to January 20, 2021, the institutional policy was to release radiology reports 3 business days after results were available, to release pathology reports 14 calendar days after available, and to never release clinical notes electronically. 14 The study period spanned from January 1, 2020, to June 30, 2022, which included 385 days prior to VUMC implementation of immediate access and ended 525 days later. Immediate access to medical records included providing all clinical data (including progress notes, pathology reports, and radiology reports) immediately through MHAV. During the study period, there were 1 115 905 unique patients at VUMC, of whom 653 455 (58.6%) were enrolled in MHAV. Of those enrolled, 447 124 (68.4%) were active on MHAV during the study period. Users of MHAV were not linked to specific UPCs based on prior publication data. 11 All data used for the study were deidentified by a data manager (employed by VUMC) not involved in the conduct of the research prior to investigator receipt of the deidentified research data set. The VUMC Institutional Review Board reviewed the study and determined that it did not qualify as human subjects research.
Study Population
The study population included patients treated at the study site, and the unit of measurement was UPCs per 1000 encounters per month. Unsolicited patient complaints from resident physicians and fellows were not included in the analysis.
Data Sources
The Vanderbilt Center for Patient and Professional Advocacy (CPPA) works with hospitals and medical groups across the US to identify affiliated physicians with increased numbers of UPCs. The CPPA maintains an electronic database housing patient complaint data for more than 60 000 US physicians credentialed at participating health care institutions. The CPPA identifies high-risk physicians as those whose volume of complaints shows them to be outliers locally and nationally among their peers. 10,15 Unsolicited patient complaints from Vanderbilt physicians in the CPPA database were used as the source of data for this study.
Data on patient encounters were obtained from the VUMC Enterprise Data Warehouse. Counts included the total number of outpatient encounters per month.
Outcomes
The primary study outcome was monthly rates of UPCs from any setting. These rates were calculated by dividing the total number of UPCs by outpatient encounters and are expressed as UPCs per 1000 encounters.
Qualitative Analysis
A validated coding algorithm for UPC types was used to categorize complaint types before and after IBR implementation, with coders trained to approximately 90% reliability. 13 A subset of 1822 post-IBR complaints coded for UPCs pertaining to the categories of communication, documentation, treatment, and diagnosis (CDTD) was evaluated to assess complaint themes attributable to the IBR.
To identify UPCs associated with the patient online health portal, a review of the complaints was performed using the following terms: MHAV, My Health, My Health at Vanderbilt, portal, and app.
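A minimal sketch of this term-based screen is shown below, applied to invented complaint texts; the matching rules (case-insensitive, whole-word matches) are assumptions, since the study only lists the search terms.

```python
# Sketch of the keyword screen used to flag portal-related complaints.
# The complaint texts are invented; the search terms come from the description above.
import re

TERMS = ["MHAV", "My Health at Vanderbilt", "My Health", "portal", "app"]
# Word boundaries avoid matching "app" inside words such as "appointment".
PATTERN = re.compile(r"\b(" + "|".join(re.escape(t) for t in TERMS) + r")\b", re.IGNORECASE)

complaints = [
    "I saw the biopsy result on My Health at Vanderbilt before anyone called me.",
    "The clinic never returned my phone call about rescheduling.",
    "The portal showed an MRI report I did not understand.",
]

flagged = [c for c in complaints if PATTERN.search(c)]
print(f"{len(flagged)} of {len(complaints)} complaints mention the patient portal")
```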
Examples from this group of UPCs were independently examined by 2 authors (R.J.D. and W.O.C.) to identify patient concerns potentially related to the IBR. Consensus was used to determine final groupings.
Statistical Analysis
The monthly rate of UPCs among physicians per 1000 patient encounters was compared before and after IBR implementation. Monthly complaint rates were compared using a Wilcoxon rank-sum test and are reported using medians (IQRs). Differences in group medians with bootstrapped 95% CIs using 1000 bootstrap replicates are reported. Proportions of complaints by UPC category were compared between time periods using the Pearson χ2 test. A cohort study with interrupted time-series analysis was implemented to analyze the total and monthly rates of any category of complaints about physicians as well as complaints related to CDTD categories before and after January 20, 2021, the date VUMC implemented the changes to comply with the Cures Act IBR. 16 The time-series analysis used a segmented regression model with parameters for changes in mean and slope in the post-IBR period compared with the pre-IBR period. The month of implementation (January 2021) included both pre- and post-IBR exposures. This month was treated as a washout period and was excluded from the analysis. To assess whether UPC rates were affected by COVID-19 during the pre-IBR period, a sensitivity analysis was performed using monthly UPCs during 2019 as the pre-IBR period (eFigure in Supplement 1). Two-sided P < .05 was considered statistically significant.
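As a rough illustration of the segmented regression used for the interrupted time-series analysis, the sketch below fits level-change and slope-change terms to simulated monthly UPC rates; the variable names and simulated data are illustrative and do not reproduce the study's results.

```python
# Sketch of the interrupted time-series (segmented regression) model described above:
# rate ~ time + level change after the IBR + slope change after the IBR.
# Monthly rates are simulated; the implementation month (January 2021) is excluded as a washout.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
months = pd.period_range("2020-01", "2022-06", freq="M")
df = pd.DataFrame({"month": months})
df["time"] = np.arange(1, len(df) + 1)                          # study month 1, 2, ...
df["post"] = (df["month"] >= pd.Period("2021-02")).astype(int)  # 1 after IBR implementation
first_post = df.loc[df["post"] == 1, "time"].min()
df["time_after"] = np.where(df["post"] == 1, df["time"] - first_post + 1, 0)

# Simulated monthly UPC rates per 1000 encounters (roughly 0.8, as in the observed medians)
df["rate"] = 0.80 + 0.002 * df["time"] + rng.normal(0.0, 0.04, len(df))

df = df[df["month"] != pd.Period("2021-01")]                    # washout (implementation) month

model = smf.ols("rate ~ time + post + time_after", data=df).fit()
print(model.params)  # 'post' = change in level, 'time_after' = change in slope after the IBR
```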
UPCs Before and After IBR Implementation
We identified 8495 UPCs during the study period: 3022 over 12 months before and 5473 over 18 months after institutional implementation of the IBR. The median monthly rate of complaints per 1000 encounters was 0.81 (IQR, 0.75-0.88) in the pre-IBR period compared with 0.83 (IQR, 0.77-0.89) in the post-IBR period (difference in medians, −0.02 [95% CI, −0.12 to 0.07]; P = .86). […] ([95% CI, −3.5% to 0.4%]; P = .22) was slightly lower in the post-IBR period. However, no association was observed between time period and overall complaint classification by UPC category (Table 2).
Segmented regression analysis for all monthly UPCs per 1000 patient encounters revealed no statistically significant change in average monthly rates of UPCs from the pre-IBR and post-IBR periods (β [SE], 0.03 [0.09]; P = .72) (Figure 1). Statistical models using 2019 as the pre-IBR period were not materially different from models using 2020 (eFigure in Supplement 1). For the CDTD subgroup of UPCs, segmented regression analysis also demonstrated no statistically significant difference in average monthly rates of CDTD UPCs in the pre-IBR and post-IBR groups (β [SE], 0.02 [0.06]; P = .67) (Figure 2).
IBR-Related Complaints
A total of 1822 complaints included references to the MHAV portal and underwent subsequent review. Examples from the post-IBR period included the following: concerns with diagnostic results from radiology or pathology reports, disagreements with documentation in the medical record, anxiety related to an unexplained medical finding, complaints about medical team communication, and unexpected findings found in the medical record (Table 3).
Many of the complaint themes arose in the setting of the patient having access to a report (eg, a radiology or pathology report) before they had an opportunity to speak with their physician. For example, a complaint about a magnetic resonance imaging (MRI) report was as follows: "patient for Dr. *** to call her with results from MRI … very upset and concerned that her MRI results mention lymphoma." Another patient complained, "I spent all weekend terrified after reading some terrifying information … I could not reach my doctor to discuss." There were also several reports in which patients expressed surprise at what they found in their medical record or stated that they may have had a test they were unaware was performed. One patient stated, "I was given an EEG [electroencephalogram] and told to follow up with neurology. The emergency room doctors said my EEG was normal. The portal says my EEG was abnormal. What is happening?" Another patient noted, "I logged back on to My Health … and several test results were listed that no one told me would be performed… I respectfully request that … an explanation be given for why these tests were performed. I also request … that the test results be purged from my medical record."
Discussion
In this large, single-institution study, we observed that there was no difference in the monthly rate of overall UPCs and CDTD-related UPCs among physicians before and after implementation of the Cures Act IBR when adjusted for numbers of patient encounters. Although there were specific complaints that were directly related to the IBR, the rate of overall complaints did not increase.
A major concern among different physician groups around IBR implementation has been increased patient anxiety and anger toward clinicians as a direct result of patient misunderstanding of components of the medical record.7,17 Furthermore, release of all of the patient's medical information in real time could reveal inaccurate information about their health condition, which could lead to UPCs. Unsolicited patient complaints are a known validated estimator of malpractice activity; however, because of the typical lag time between events and malpractice claim filing and resolution, claims data are not currently available to corroborate or refute the notion that increased IBR-related UPCs would estimate subsequent malpractice claims.10 Given our findings, the lack of difference in UPCs between the pre- and post-IBR groups suggests that a large increase in malpractice claims related to medical record information release through the IBR should not be expected.
The Cures Act requires immediate electronic availability upon patient requests for all test results, medication lists, and clinical notes. The Cures Act does not require health care institutions to push unrequested designated electronic health information to patient portals. However, given the real-world constraints of existing commercially available electronic health record systems and patient portals, many health care systems are providing immediate, real-time access via their patient portals as the only feasible solution for accommodating the IBR.18 In a recent study by Steitz et al,19 immediate access to all health results meant that substantially more patients accessed results before their physician did. In addition, the number of daily messages sent to physicians nearly doubled from a median of 78 to 146 after the transition to IBR compliance.19 This increase in patient messaging suggests that the IBR may increase some patient concerns that may not rise to the level of a UPC. Although follow-up research by that team indicated that patients prefer to continue having immediate access to their electronic health information 95% of the time, the risk of increased physician work remains.19
Unintended Consequences of the IBR
Although our study did not reveal substantial differences in the monthly proportion of UPCs per patient encounter, the types of complaints seen after IBR implementation were specific to results found in the medical record. In our qualitative analysis of post-IBR UPCs, themes were identified regarding diagnostic results from radiology and pathology reports, documentation in the medical record, anxiety related to unexplained medical findings, medical team communication, and unexpected findings in the medical record. By empowering patients to have ownership of their health care, having electronic health information immediately available could improve communication with their practitioners.20
Clinical and Policy Implications
These findings offer insight for clinicians to consider when they are ordering tests or performing procedures, particularly in an effort to temper patient expectations. The concept of open notes has been widely implemented across the US and other countries for more than a decade and has been well studied and characterized.21-26 A recent survey study with respondents from 4 academic medical centers revealed that 96% of patients preferred immediate release of and access to online test results even if their health care practitioner had not yet reviewed them.19 However, without adequately preparing patients and adjusting clinical systems, the increased messaging seen in the immediate release of electronic health information could increase the workload of clinicians and their clinical staff.19 To mitigate this increased messaging, clinicians and health care organizations can prepare patients for the possibility that results may be posted at the same time or even before the clinician has a chance to review them.27 Practices could build in processes to ensure that patient questions are answered in a timely manner.
A best practices guideline would be helpful for both the practitioner and the patient to anticipate some of these anxiety-provoking reports that may lead to IBR-specific UPCs.28 For example, a practice called precounseling involves communicating to the patient that they should expect results to populate in their online portal and to consider waiting to view them until their follow-up appointment or phone conversation. Also, having a mechanism within a practice to follow up with patients on potentially worrisome results could help to reassure patients that communication will occur if there is a concern. Consideration of patient preferences when receiving protected health information (PHI) could help facilitate better health care communication. For example, patients may choose to see only abnormal laboratory values or may opt out of viewing any PHI. Also, given that patients did not have access to clinical notes in the pre-IBR period, access could have provided rationale for the tests ordered and decreased confusion among patients.
Many of the changes mentioned would require clarification in the law as written and could be included in the recent IBR changes proposed by the US Department of Health and Human Services Office of the National Coordinator for Health Information Technology.29 Some best practices for PHI receipt or opt-out options may need to be specifically addressed in the Cures Act to avoid being considered information blocking. In addition, information blocking as defined in the Cures Act could still be taking place and not yet have any precedent set for prosecution. The definition of the term developer is important because developers participating in information blocking are subject to penalties, whereas practitioners are not. These topics provide an opportunity for clarification in updates to the Cures Act.
Limitations
Our study has several limitations. This was a single-institution study representing an academic health system, which may not be representative of other US academic practices or nonacademic health systems. Our institution implemented MyChart (Epic) in 2017, so these findings may only translate to those who use the Epic electronic medical record system. Although the Cures Act IBR requires immediate access upon patient request, it does not necessarily require immediate, real-time access to an online portal as implemented at our study site. Other health care entities may have different implementation dates and different strategies that could further affect the generalizability of these findings. A multi-institutional study would strengthen the findings of this study and should be
pursued in the future. In addition, our study only examined complaints that were tied to physicians, because our goal was to estimate malpractice activity with UPCs. Some complaints not aimed at physicians but still related to immediate access to PHI in MHAV may have occurred and were not captured in our analysis.
Of the 1 115 905 unique patients at VUMC, 653 455 (58.6%) were enrolled in MHAV during the study period. Of those patients enrolled in MHAV, only 447 124 (68.4%) were active. Thus, many patients may not be aware of the IBR or may not have experienced any change in their health care experience as a result. The institutional transition for Cures Act compliance occurred during the height of the COVID-19 pandemic, which could have affected patient experiences, including hospital volume and access to care, and may have resulted in UPCs.30 To account for this, we performed a sensitivity analysis using the prior year (2019) data and observed no difference in the interrupted time series.
Conclusions
In this cohort study with interrupted time-series analysis, the Cures Act IBR was not associated with a change in the monthly rate of UPCs at a large academic medical center. A qualitative review of the complaints suggests that there are unintended consequences of complex medical information being immediately available to patients. Further study of the effects of this legislative mandate with multi-institutional data and a longer time horizon may be helpful for further understanding of this law's effect on UPCs.
Table 1. Complaint Comparison Before and After the 21st Century Cures Act IBR Institutional Compliance Date (January 20, 2021). a Complaint rates were compared using a Wilcoxon rank-sum test.
Table 2. Proportion of UPCs by Type Before and After Implementation of the 21st Century Cures Act IBR. Abbreviations: IBR, information-blocking rule; UPC, unsolicited patient complaint. a Proportions of complaints by UPC category were compared between time periods using a Pearson χ2 test.
Figure 1. Interrupted Time Series of Unsolicited Patient Complaints (UPCs) per 1000 Encounters Before and After Implementation of the 21st Century Cures Act Information-Blocking Rule (IBR)
Table 3. UPCs Related to the 21st Century Cures Act. Example: "When I logged back on to My Health at Vanderbilt, there were test results listed that no one told me they were performing. Why was this done? I want these test results purged from my medical records." | 2023-10-01T06:17:39.742Z | 2023-09-01T00:00:00.000 | {
"year": 2023,
"sha1": "b00f350cb56d4258769f9b2badf11111e255f381",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3471cbf5eba6de92ae6d5c5ed81890427407bb5a",
"s2fieldsofstudy": [
"Law",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14054496 | pes2o/s2orc | v3-fos-license | PERFORMANCE ANALYSIS OF DISTINCT SECURED AUTHENTICATION PROTOCOLS USED IN THE RESOURCE CONSTRAINED PLATFORM
Most e-commerce and m-commerce applications in the current e-business world have adopted asymmetric key cryptography in their authentication protocols to provide efficient authentication of the involved parties. This paper presents a performance analysis of distinct authentication protocols that implement public key cryptography such as RSA, ECC and HECC. The comparison is made based on the key generation, sign generation and sign verification processes. The results show that the performance achieved through the HECC-based authentication protocol is better than that of the ECC- and RSA-based authentication protocols.
INTRODUCTION
The significant growth of communication technologies and the massive usage of the Internet have contributed to the development and blooming of m-commerce. On the other hand, nowadays we see a huge demand for mobile devices. This demand for resource-constrained platforms has led to the emergence of smart phones and iPhones. Moreover, migrating e-commerce applications from the conventional desktop to these resource-constrained platforms has become mandatory. For example, one might think of buying/selling products through a smart phone or browsing pay-per-view news on an iPhone while waiting at the bus stop.
However, the Internet being an open and insecure network, some anxiety has been raised about transmitting sensitive information. The solution lies in using cryptography and secure authentication protocols that guarantee the confidentiality, authentication and integrity of communications [4]. Such protocols, like SSL [13] and SET [12], already exist and are widely used in current e-commerce applications. Most of them are based on RSA public key cryptography. Alternatively, two distinct protocols have been developed which are based exclusively on Elliptic curve cryptography (ECC) and Hyper-elliptic curve cryptography (HECC), asymmetric cryptography that performs well on resource-constrained platforms while maintaining the high security level that one can achieve with the protocols in use today [9] [10]. The efficiency and effectiveness of ECC-based direct and indirect authentication protocols were analyzed by Thilagavathi and Rajeswari, who concluded that, depending on the application and the environment, the protocol could be utilized to create a secure environment in mobile networks [14]. Fengling Han and Ron van Schyndel proposed an m-identity authentication (MA) protocol based on the mobile user's biometric features. M-identity merges the mobile device identity into biometric images [11].
The paper is organized as follows. Section 2 describes the general aspects of the different protocols. In Section 3, the implementation and performance analysis of the protocols in the J2ME Wireless Toolkit is provided. Finally, Section 4 presents the conclusion.
ARCHITECTURE
The authentication protocol must be able to create a secure communication channel between two parties on top of an insecure network, like the Internet. It is not difficult to eavesdrop on a line or to compromise a router and be able to listen to or alter all messages in transit [1]. In order to prevent this, the protocol must ensure the mutual authentication of both parties and the confidentiality and integrity of all the data transmitted through it. Such protocols already exist and have gone through deep analysis, like SSL [13] and TLS [15]. However, they rely heavily on RSA asymmetric cryptography, which causes some anxiety about their performance on resource-constrained small devices. In fact, performance measurements of cryptographic functions on one of these devices, the Palm III from 3Com, are shown in Table 1 [8].
From Table 1, it is clear that generating RSA keys on the Palm III Pilot is a prohibitively expensive and time-consuming process. Moreover, RSA signature generation is also very slow and RSA itself is vulnerable [7], which makes a protocol like SSL unfeasible on such devices. Conversely, different efficient authentication protocols for resource-constrained platforms have been proposed and implemented [9] [10]. Generally, the security level provided by HECC using an 80-bit key size is equivalent to ECC's 160-bit, which in turn is equivalent to RSA's 1024-bit [2][3] [6]. Also, the performance of the digital envelope implemented using HECC is better than that of the digital envelope implemented using ECC [5], which leads to a simple conclusion: the authentication protocol for resource-constrained platforms should be based solely on ECC or HECC. The following Fig. 1 depicts the general asymmetric authentication protocol over the Internet, which can be implemented using RSA/ECC/HECC.
The idea behind this protocol is simple: in step 1, the mobile starts the protocol by sending its ID (e.g. serial number) to the server. In step 2, the server stores the mobile's ID for authentication purposes and generates the mobile's private and public keys using any one of the asymmetric cryptographic techniques (RSA/ECC/HECC). These keys (the private and public key of the mobile), along with the public key of the server, are sent to the mobile. Notice that the keys travel from the server to the mobile through a secure channel. To send the keys to the respective destination, one can even adopt the Diffie-Hellman key exchange algorithm. In step 3, the mobile generates a challenge and sends it along with its ID to the server, encrypted with a combination of the server's public key and the mobile's private key.
Fig.1. Asymmetric Authentication Protocol over the Internet
The server decrypts the message with the mobile's public key and its own private key and verifies that this ID matches the ID sent in step 1. This authenticates the client. In step 4, the server sends the challenge received in the previous step plus one and a randomly generated session key, encrypted with a combination of the mobile's public key and the server's private key. The mobile then decrypts this message with the server's public key and its own private key and verifies the challenge. If it matches the one that was sent in step 3, then the mobile can trust that it is indeed talking to the right server. Both the encryption and decryption processes specified in steps 3 and 4 are done using any one of the cryptographic techniques (RSA/ECC/HECC). From now on, in step 5, a secure channel has been created and all data is encrypted with a session key. Notice that a new key is set up for each message to prevent replay attacks.
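To make the message flow concrete, the toy sketch below walks through steps 1-4 of the exchange. It is only an illustration of the idea, not the paper's implementation: it substitutes RSA signatures and a Fernet session key from Python's `cryptography` package for the ElGamal-style ECC/HECC encryption described above, since HECC primitives are not available in mainstream libraries.

```python
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

def sign(priv, data):
    return priv.sign(data, PSS, hashes.SHA256())

def verify(pub, sig, data):
    pub.verify(sig, data, PSS, hashes.SHA256())  # raises InvalidSignature on failure

# Steps 1-2: the mobile announces its ID; the server issues key pairs for both parties.
mobile_id = b"IMEI-356938035643809"
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
mobile_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Step 3: the mobile sends its ID plus a random challenge, authenticated with its private key.
challenge = int.from_bytes(os.urandom(4), "big")
msg3 = mobile_id + b"|" + str(challenge).encode()
sig3 = sign(mobile_key, msg3)

verify(mobile_key.public_key(), sig3, msg3)        # server checks the mobile's signature
assert msg3.split(b"|")[0] == mobile_id            # and that the ID matches step 1

# Step 4: the server replies with challenge + 1 and a fresh session key, under its own key.
session_key = Fernet.generate_key()
msg4 = str(challenge + 1).encode() + b"|" + session_key
sig4 = sign(server_key, msg4)

verify(server_key.public_key(), sig4, msg4)        # mobile checks the server's signature
assert int(msg4.split(b"|")[0]) == challenge + 1   # and the incremented challenge

# Step 5: both sides now share a session key for symmetric encryption of further traffic.
secure_channel = Fernet(session_key)
print(secure_channel.decrypt(secure_channel.encrypt(b"pay-per-view request")))
```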
IMPLEMENTATION
The secured authentication protocol using the RSA algorithm was implemented on the Palm III [8], and the other two secured authentication protocols using ECC and HECC were implemented in the J2ME Wireless Toolkit 2.5.1 [9] [10]. The details of the Sun Java Wireless Toolkit 2.5.1 can be found in [16] and the toolkit can be downloaded from [17]. With the growing diversity of mobile devices at which the protocol is targeted, its portability was a major concern from the beginning. Therefore, it was developed using J2ME, whose features meet this requirement.
To achieve the required high security level, ElGamal-based Elliptic curve cryptography and Hyper-elliptic curve cryptography, together with the MD5 algorithm, were used for the encryption, decryption and digest calculation of the messages exchanged in this protocol. Table 1 shows the performance measurements for the different asymmetric key cryptographic techniques. Fig. 2 depicts the pictorial representation of the performance analysis of the key generation, sign generation and sign verification processes for RSA, ECC and HECC as used in the secured authentication protocols. From Fig. 2, it is clear that the secured authentication protocol using HECC performs better than ECC and RSA in terms of the key generation, sign generation and sign verification processes. Moreover, it has already been proven that the security level provided by HECC using an 80-bit key size is equivalent to ECC's 160-bit, which in turn is equivalent to RSA's 1024-bit [2][3] [6]. Finally, due to constraints in the mobile device's computing power, its memory capacity and the key sizes used in the algorithm, the secured authentication protocol implemented using the HECC algorithm is a more appropriate authentication protocol for resource-constrained platforms than RSA and ECC.
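For readers who want to reproduce this kind of key-generation/sign/verify timing on a desktop, the sketch below compares RSA-1024 and a 256-bit ECC curve with the `cryptography` package. It is an assumption-laden stand-in: HECC is not exposed by mainstream libraries, the curve and padding choices are common defaults rather than the cited experiments' exact parameters, and absolute numbers will differ from the Palm III and J2ME figures discussed above.

```python
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, ec, padding

MSG = b"performance analysis of authentication protocols"
PAD = padding.PKCS1v15()

def bench(label, keygen, sign, verify, runs=20):
    # Time key generation, signature generation and signature verification separately.
    t0 = time.perf_counter()
    keys = [keygen() for _ in range(runs)]
    t1 = time.perf_counter()
    sigs = [sign(k) for k in keys]
    t2 = time.perf_counter()
    for k, s in zip(keys, sigs):
        verify(k, s)
    t3 = time.perf_counter()
    print(f"{label:8s} keygen {(t1 - t0) / runs * 1e3:7.2f} ms  "
          f"sign {(t2 - t1) / runs * 1e3:7.2f} ms  verify {(t3 - t2) / runs * 1e3:7.2f} ms")

bench("RSA-1024",
      lambda: rsa.generate_private_key(public_exponent=65537, key_size=1024),
      lambda k: k.sign(MSG, PAD, hashes.SHA256()),
      lambda k, s: k.public_key().verify(s, MSG, PAD, hashes.SHA256()))

bench("ECC-P256",
      lambda: ec.generate_private_key(ec.SECP256R1()),
      lambda k: k.sign(MSG, ec.ECDSA(hashes.SHA256())),
      lambda k, s: k.public_key().verify(s, MSG, ec.ECDSA(hashes.SHA256())))
```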
CONCLUSION
This analysis shows that it is possible to implement distinct secured authentication protocols using different asymmetric key cryptographic techniques (RSA/ECC/HECC) on resource-constrained platforms. This paper concludes that the performance of the HECC-based secured authentication protocol is better than that of the RSA- and ECC-based protocols. We hope this analysis will be a significant contribution to the development and widespread acceptance of secured authentication protocols based on HECC in m-commerce. | 2015-07-06T21:03:06.000Z | 2014-03-01T00:00:00.000 | {
"year": 2014,
"sha1": "7601f966a8475a170b13323d5491038ff285c487",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.21917/ijct.2014.0129",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "7601f966a8475a170b13323d5491038ff285c487",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
200060848 | pes2o/s2orc | v3-fos-license | Effect of sulfuric acid solution on thermal conductivity and impact strength of epoxy resin reinforced by silicon dioxide powder
In this study, Ep/SiO2 composites at (3, 6, 9, 12%) are prepared by the hand lay-up method to measure the change in the thermal conductivity and impact strength of epoxy resin before and after immersion in an H2SO4 solution of 0.3 N for 10 days. Before immersion, the thermal conductivity decreases with the increase of the weight ratio of the reinforcement material (SiO2): it changed from (82.6×10-2 to 38.7×10-2 W/m.°C) as the weight ratio changed from (3 to 12)%, respectively. Comparing before and after immersion in the chemical solution, it was (65.6×10-2 W/m.°C) at the weight ratio of (6%) and became (46.6×10-2 W/m.°C) after immersion in sulfuric acid. The impact strength decreased with increasing weight ratio, changing from (1.48 to 0.87 kJ/m2) as the weight ratio changed from (3 to 12)%, respectively, but an increase in the impact strength was found after immersion in the chemical solution: it was (1.28 kJ/m2) at the weight ratio of 6% and became (1.82 kJ/m2) at the same weight ratio after immersion in sulfuric acid at a normality of 0.3 for 10 days.
Introduction
There has been an insistent need to find alternatives to conventional materials, alternatives of high quality in terms of weight, cost and properties in general, to be adopted in various manufacturing applications; these are called composite materials. A composite material is defined as the material resulting from the mixing of two or more materials by physical cohesion, without chemical interaction, to obtain new materials that differ in their properties from the properties of their constituents [1]. Researchers are especially interested in studying polymer composites reinforced by SiO2 particles, which are characterized by low thermal and electrical conductivity, resistance to oxidation and tolerance of high temperatures [2].
In 2015, Maryam Zuhair studied the mechanical and thermal properties of epoxy resins reinforced by magnesium oxide and nano-silica oxide in different weight ratios. The results showed that the thermal conductivity in all cases (the base material with nano-magnesium oxide alone, with nano-silica oxide alone, and with the mixture) is higher than the thermal conductivity of the base material; likewise, the wear coefficient was higher in all cases than that of the unreinforced epoxy resin, and the elasticity coefficient was highest for the hybrid composite at the ratio of (0.2%) [3].
In 2016, Hind W. Abd Allah studied the effect of granular particle size on the thermal conductivity. Magnesium oxide and silica particles were used in different granular sizes (125, 75, 37 µm) at a 5% volume fraction. Both types were prepared at a ratio of 2.5%. The results showed an increase in the thermal conductivity values with a decrease in the granular size of the additives. The thermal conductivity showed an improvement for the mixtures compared with the base material, and the silica samples gave high values compared to the combined magnesium and silica composite [4]. The aims of this research are to: -Prepare composite materials of epoxy resins reinforced by micro-particles of SiO2 at different weight ratios (3, 6, 9 and 12) wt%.
-Study the thermal conductivity and impact strength of the composite materials, which are tested before and after immersion in sulfuric acid.
Thermal conductivity
Thermal conductivity is a very important physical characteristic, which represents the amount of heat transferred through a unit of area [5]. Heat conduction occurs when thermal energy is transferred from areas of high temperature to areas of low temperature, that is, when there is a difference in temperature called a thermal gradient [6]. There are different mechanisms of thermal conduction depending on the type of material and the nature of the movement of its particles. In conductive solid materials, free electrons are responsible for the transmission of heat energy as well as phonons. In heat-insulating polymers, where there are no free electrons, the heat is transmitted by elastic waves due to the oscillation of the particles; as a result of the proximity of the molecules and their association through bonds, this oscillation passes to the neighbouring molecules and the heat is transferred. These elastic waves are called phonons [7]. Fig. 1 shows the method of measurement of thermal conductivity. The thermal conductivity can be calculated from the relationship given in [8], where K is the thermal conductivity (W/m.K) and (e) is the thermal energy passing through the unit area of the disc per second, with units of (W/(m2.K)), which in turn can be calculated from the relationship given in [9].
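The specific Lee's disc relations from [8] and [9] are not reproduced above; purely as an illustration of the kind of steady-state estimate involved, the sketch below applies the plain Fourier's-law form k = (Q/A)·d/ΔT with made-up numbers.

```python
import math

def thermal_conductivity(power_w, area_m2, thickness_m, t_hot_c, t_cold_c):
    """Plain Fourier's-law estimate: k = (Q/A) * d / (T_hot - T_cold), in W/m.K."""
    heat_flux = power_w / area_m2                  # W/m^2 conducted through the sample
    return heat_flux * thickness_m / (t_hot_c - t_cold_c)

# Illustrative numbers only (not measurements from this study):
# a 3 mm thick disc of 40 mm diameter passing 0.5 W with a 10 degC temperature drop.
area = math.pi * (0.020 ** 2)
print(f"k = {thermal_conductivity(0.5, area, 0.003, 45.0, 35.0):.3f} W/m.K")
```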
Impact strength
Impact strength is a measure of the strength of a polymer material and its resistance to fracture under stress applied at high speed. The impact strength depends on many variables, including the type of material, its stress resistance, the manufacturing conditions, the environment and the geometric shape of the sample. Impact strength can therefore be defined as the amount of energy absorbed during the collision divided by the cross section of the sample. It is measured in kJ/m2 and is calculated from the relationship given in [10,11].
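The relation itself is not reproduced above; the sketch below assumes the usual definition for this kind of test, impact strength = absorbed energy divided by the cross-sectional area of the specimen.

```python
def impact_strength(absorbed_energy_j, width_mm, thickness_mm):
    """Impact strength in kJ/m^2 from the absorbed energy (J) and the specimen cross-section (mm)."""
    area_m2 = (width_mm / 1000.0) * (thickness_mm / 1000.0)
    return absorbed_energy_j / area_m2 / 1000.0    # J/m^2 -> kJ/m^2

# Illustrative only: 0.06 J absorbed over a 10 mm x 4 mm cross-section is 1.5 kJ/m^2,
# the same order of magnitude as the values reported for the 3% SiO2 composite.
print(impact_strength(0.06, 10.0, 4.0))
```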
Experimental work
First: Materials used in research:
The following materials were used in this research: 1- Epoxy resin: a liquid resin of the type (Epoxy Sikadur® 52 LP), manufactured by the US company Sky Spring, which solidifies after the addition of a hardener of the type (Bisphenol A (epichlorohydrin) Oxiraine), also manufactured by the US company Sky Spring, with a hardener-to-resin ratio of (2:1).
Second: Preparation of thermal conductivity test models:
A quantity of epoxy resin was mixed with the hardener material using the hand lay-up method. SiO2 powder was then added to the mixture at different percentages (3, 6, 9 and 12)%.
The mixture was mixed well and poured into the prepared mould with a thickness of (3 mm), then left in the mould to harden well for 48 h. The sample was then extracted and placed in an oven at 50 °C for 2 hours, as shown in Fig. 3, to complete the hardening process.
The samples were then cut according to the international standards of the thermal conductivity and impact measuring devices using a manual saw with very small, sharp teeth to ensure that the samples did not vibrate during the cutting process. The edges of the samples were then smoothed and polished using silicon carbide papers of different grades, and the samples were immersed in sulfuric acid diluted to 0.3 N for 10 days. Table 1 lists the standard dimensions of the thermal conductivity and impact tests. Fig. 4 shows a photographic image of the thermal and impact samples.
Results and discussion
1. Thermal conductivity before and after immersion in H2SO4: Table 2 and Fig. 5 show the variation of the thermal conductivity values with the added weight ratios of SiO2 before and after immersion of the samples in H2SO4 solution at 0.3 N for 10 days. We note that the thermal conductivity begins to decrease with increasing weight ratio of SiO2 powder; the reason for this is that both the base material and the reinforcing material are considered heat-insulating materials [12]. Vibrations in the internal structure of the resin are low, and a further reduction follows the increase of the reinforcement material, which acts to hinder these vibrations, thus causing a decrease in the value of the thermal conductivity [13,14]. We also note in Fig. 5 that the thermal conductivity decreases with increasing immersion time. This is because immersion in the chemical acid may break down the chemical bonds and cause slip, since this acid solution attacks the polymer strongly: the positive ions (H+) form bonds with the ends of the polymer chains and the negative ions (SO4-) attack and weaken the interfacial region, and therefore the acid disrupts the transfer of the phonons, which are responsible for the transfer of heat [15,16]. Table 3 shows the practical results of the impact strength values for the samples reinforced by silicon dioxide before and after immersion in sulfuric acid. Fig. 6 shows that the impact strength decreases with increasing percentage weight of the reinforcement material for all samples; for example, it was (1.48 kJ/m2) at a percentage of 3% and became (0.87 kJ/m2) at a percentage of 12%. This is due to the fact that the silicon dioxide added to the base material is a glassy material with low resistance to impact. The reinforcement particles form points of stress concentration and centres of point defects, which increase the probability of cracks spreading quickly within the main polymer bonds, and thus reduce the durability of the composite material and hence the impact strength [17,18]. From Fig. 6 we also find that the impact strength increases after immersion in sulfuric acid. The reason for the increase in impact strength after acid immersion is the reduced elasticity of the composites caused by increased crosslinking from the interaction of the acid with the polymer backbone bonds; this hinders crack propagation and increases the value of the impact strength.
Conclusion
1. The thermal conductivity of the epoxy resin decreases with the increase of the SiO2 reinforcement material. The thermal conductivity of the prepared composite material after immersion of the samples in the H2SO4 solution is lower than under normal conditions. 2. The impact strength is reduced by increasing the weight ratios for all samples under normal conditions. After immersion in sulfuric acid at 0.3 N, the impact strength value is higher than under normal conditions for all samples. | 2019-08-16T18:38:42.155Z | 2019-05-29T00:00:00.000 | {
"year": 2019,
"sha1": "51b002ea7fa2accdcad1338168cb640c0e19c281",
"oa_license": "CCBY",
"oa_url": "http://ijp.uobaghdad.edu.iq/index.php/physics/article/download/462/408",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "9280cd423e92b00e39745683288e932a1140c37c",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
260153870 | pes2o/s2orc | v3-fos-license | Study of the Effect of Intestinal Microbes on Obesity: A Bibliometric Analysis
Obesity is a serious public health problem. According to statistics, there are millions of obese people worldwide. Research studies have discovered a complex and intricate relationship between the gut microbiota and obesity. Probing and summarizing the relationship between intestinal microbes and obesity has important guiding significance for the accurate control of the research direction and expanding the choice of obesity treatment methods. We used bibliometric analysis to analyze the published literature with the intention to reveal the research hotspots and development trends on the effects of intestinal microbes on obesity from a visualization perspective, both qualitatively and quantitatively. The results showed that current research is focusing on related mechanisms of the effects of intestinal microbes on obesity and therapeutic methods for obesity. Several noteworthy hotspots within this field have garnered considerable attention and are expected to remain the focal points of future research. Of particular interest are the mechanisms by which intestinal microbes potentially regulate obesity through metabolite interactions, as well as the role of microbiomes as metabolic markers of obesity. These findings strongly suggest that gut microbes continue to be a key target in the quest for effective obesity treatments. Co-operation and communication between countries and institutions should be strengthened to promote development in this field to benefit more patients with obesity.
Introduction
Obesity is a serious public health problem. According to statistics, there are approximately 600 million obese people and 1.9 billion overweight individuals worldwide [1]. It is predicted that the number of obese people in the world will reach 1.12 billion by 2030 [2]. As far as we know, obesity is not only a change in appearance, but it is also associated with disorders of lipid and glucose metabolism, chronic inflammation, oxidative stress, and increased risk of a variety of diseases. According to research, obesity is one of the non-negligible reasons for cardiovascular disease, diabetes, and malignant tumors [3,4], while it is estimated that more than 80% of the global burden of diabetes and cardiovascular disease will occur in developing countries such as China and India in 2025 [5]. However, the etiology of obesity is not yet completely clear. The increasing prevalence of obesity cannot be explained by changes in energy balance alone. Therefore, it is vital for the whole world to be involved in determining the formation mechanism of obesity and then to explore effective methods to treat it in order to reduce the incidence and mortality of obesity-related diseases.
The intestinal microbiota is a microbiota that lives in the gastrointestinal tract of the human body and can interact with the external environment through diet [6]. These microbes colonize the intestinal tract through mother-to-child transmission from the prenatal period. The colonization of human intestinal microbes continues after birth and is regulated by factors including gestational age, mode of delivery (natural delivery or cesarean section), diet (breastfeeding or baby food), hygiene, and antibiotics. The environment and diet of the first three years of life are essential for the acquisition of adult-like microbiota and the establishment of bacteria-host symbiosis that affects the immune and nervous system development. Between the ages of two and five, the intestinal microbiota gradually stabilize, and there is no significant difference with the intestinal microbiota in adults [7]. Under normal circumstances, the intestinal microbiota maintains a symbiotic relationship with the human body, and a good symbiotic relationship between them is very important for the maintenance of human health [8]. However, instantaneous changes in the intestinal ecosystem can lead to the destruction of this symbiotic relationship between microorganisms and hosts, such as aging, obesity, sedentary lifestyles, and dietary patterns, and the effects of antibiotics can change the intestinal microbiota [9]. Because of the important role of the intestinal ecosystem in maintaining host physiology, its changes can also cause a variety of physiological disorders in the human body, including low-grade inflammation, metabolic disorders, excessive lipid accumulation, and loss of insulin sensitivity, in turn leading to changes in the composition of the intestinal microbiota and decreasing the diversity of flora and metabolic pathways [10,11], thus increasing the risk of metabolic diseases such as obesity and diabetes [12]. Probing and summarizing the relationship between intestinal microbes and obesity has important guiding significance for the accurate control of the research direction and expanding the choice of obesity treatment methods.
Bibliometrics is a discipline that uses mathematics and statistics to quantitatively analyze literature and information. Through quantitative analysis of the subject literature, we can construct a knowledge structure and explore the development trend. At present, it has been widely used in many research fields [13][14][15]. Overall, this analysis method and the traditional literature review are both based on previous research, summarizing the current situation and shortcomings of the research, and finally guiding further research. A literature review emphasizes the content of an article, that is, summarizing the aspects and shortcomings of existing research, while most studies quote the representative papers in the literature according to a preset research context. On the other hand, bibliometric analysis places more emphasis on the analysis of "quantity". There is no need to analyze the research content of each document in detail but to analyze the number of published documents, the distribution of authors, and the relationship of citations in the target field. Most of the included documents are highly quoted. Compared with the traditional literature review, bibliometric analysis can make a more intuitive systematic analysis of all the literature in this field in a visual way, which is helpful for researchers entering a new field to grasp the overall trend of the field. There have been many review articles in the field of intestinal microbes and obesity, but the quantitative analysis in the field is insufficient. Therefore, in addition to briefly summarizing the hotpots in review articles, we used bibliometrics to analyze the published literature with the intention to reveal the research hotspots and development trends in this field, both qualitatively and quantitatively.
Data Sources and Search Strategies
Data were obtained from the core collection of Web of Science (WoSCC), a database of Clarivate Analytics. The following search strategy was used: ((overweight OR obesity OR obes* OR over$weight) AND (gastrointestin* OR gastro-intestin* OR gut OR intestin*) AND (bacteria OR prebiotic OR probiotic OR microbiot* OR microbiome* OR flora OR microflora)). In the end, a total of 8888 original articles in English were retrieved. All the documents were downloaded as "Full Record and Cited References" and saved in "BibTeX" and "plain text file" formats. The above data were obtained on 2 February 2023.
Data Analysis and Graph Acquisition
The corresponding files retrieved from WoSCC were imported into Microsoft Excel (16.66.1), Biblioshiny (4.1) and CiteSpace (6.1.R6) to perform a bibliometric and visual analysis. Annual publications, country annual publications, and publisher annual publications were analyzed with Microsoft Excel. The Biblioshiny platform provides a web-based graphical interface to the Bibliometrix package, which was used to compute total citations and average article citations, as well as to show journal publication counts and a country collaboration map. CiteSpace is an interactive visualization analysis software developed by Professor Chen Chaomei to conduct different types of network analyses, such as country collaboration networks, institution collaboration networks, author collaboration networks, co-cited author collaboration networks, co-cited journal collaboration networks, and keyword co-occurrence, which aid in visually analyzing the knowledge domain and emerging trends. A dual-map overlay was constructed for journals. The parameters of CiteSpace were as follows: time slicing (2013 to 2022), years per slice (1), links (strength: cosine, scope: within slices), selection criteria (g-index: k = 5 in country, institution, author collaboration network analysis and keyword co-occurrence), pruning (pathfinder, pruning sliced networks). Cluster labels used keywords by log likelihood ratio (LLR). To identify emerging topics, we detected keywords and references with a strong citation burst.
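For orientation, the sketch below shows the kind of counting that underlies such analyses (annual output and keyword co-occurrence) using pandas. The file name is a placeholder and the field tags follow common WoS export conventions ('PY' for publication year, 'DE' for author keywords); this is not a substitute for Biblioshiny or CiteSpace.

```python
from itertools import combinations
from collections import Counter
import pandas as pd

# Placeholder path: a Web of Science export converted to CSV.
records = pd.read_csv("wos_export.csv")

# Annual publication counts (WoS field 'PY' = publication year).
annual = records["PY"].value_counts().sort_index()
print(annual)

# Keyword co-occurrence counts (WoS field 'DE' = author keywords, separated by ';').
pairs = Counter()
for cell in records["DE"].dropna():
    kws = sorted({k.strip().lower() for k in cell.split(";") if k.strip()})
    pairs.update(combinations(kws, 2))
print(pairs.most_common(10))
```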
Ethical approval was waived by the local ethical board because the data came from public databases, and no human or animal subjects were involved
An Overview of the Annual Growth Trend
As shown in Figure 1a, from 2013 to 2022, a total of 8888 English language articles met our inclusion criteria. The number of annual publications began to exceed 500 in 2016, and only after 3 years did the annual publications exceed 1000. The annual number of publications increased from 267 in 2013 to 1768 in 2022, with an amazing increase of over sixfold. We further analyzed the annual national output of the 10 most productive countries and regions (Figure 1b). The different colors represent different countries, and the slope represents the speed of annual publication number increase. China ranked first in the number of annual publications, followed by the United States and Spain. The top 10 publishers according to their contribution to the total number of articles about the effect of intestinal microbes on obesity are shown in Figure 1c. The various colors represent different year publications. Regarding the number of publications, Elsevier (1685) and Springer Nature (1388) far surpassed other publishers, confirming their unsurpassable international position in the publishing industry.
Distribution and Cooperation between Countries/Regions and Institutions
In the ten years from 2013-2022, 124 countries or regions participated in the domain of intestinal microbes and obesity. China ranked first with 3036 publications, accounting for 34.16% of all publications, followed by the United States (2356, 26.51%) and Spain (480, 5.40%) (Table 1, Figure 2a). Nevertheless, the United States tops the ranking of the most cited countries with a frequency (number of documents) of 87,065, followed by China (65,250), France (13,809), and Canada (13,496) ( Table S1). The country collaboration map shows the overall perspective of country and region academic cooperation ( Figure S1). The United States had the broadest academic associations with other countries or regions, while its collaboration with China was the tightest, followed by Canada. The United States and China both had frequent connection with European countries. Moreover, among European countries, there was also tight collaboration, but communication among the other countries or regions still needs to be strengthened.
There were 179 institutions included in the results of institution analysis. The top 10 institutions mostly came from China (n = 5) and the USA (n = 2), followed by Denmark (n = 1), Belgium (n = 1), and Spain (n = 1). University of Copenhagen, Chinese Academy of Sciences, and Shanghai Jiao Tong University had the highest total link strength, revealing close cooperation with other institutions (Table 2, Figure 2b).
Distribution and Co-Authorship of Authors
In this analysis, 16 authors had published at least 21 publications. The 10 most productive authors are listed in Table 3. Most of the authors were from Europe, with Belgium (n = 3), followed by China (n = 3), Canada (n = 2), Sweden (n = 1), and France (n = 1). Patrice D. Cani (97 publications) and Nathalie M. Delzenne (73 publications), both from Université Catholique de Louvain, contributed the top two most publications, followed by Raylene A. Reimer (51 publications) from the University of Calgary. The top two authors maintained the most contact with other research labs, and the cooperation between Patrice D. Cani and Nathalie M. Delzenne was close (Figure 3a). Co-cited authors are shown in Figure 3b and Table 4. Peter J. Turnbaugh and Patrice D. Cani rank as the top two, which manifests their centrality in the research field.
Analysis of Journals and Co-Cited Academic Journals
As shown in Table 5 and Figure 4, Nature had the highest centrality (0.18), followed by PLOS ONE (0.17), which means that the published articles in the two journals were widely recognized and cited by researchers. Figure S2 shows the dual-map overlay of journals. The dual-map overlay shows seven main citation paths. The published articles were mostly focused on journals in the fields of medicine, medical, clinical, molecular, biology, and immunology and partly focused on veterinary, animal, and science. Cited articles were mostly published in journals in the fields of molecular biology and genetics, and many articles were published in journals in the fields of health, nursing, medicine, environmental, toxicology, and nutrition.
Keywords could accurately reflect the main research point of an article, which has high condensation in a research field and can directly point to the center of the text. Therefore, a high frequency of keywords represents hot issues in a research field and research hotspots. The keyword co-occurrence graph for the effects of intestinal microbes on obesity is shown in Figure 5. The density value was 0.0457. The ten most frequent keywords were gut microbiota, obesity, inflammation, insulin resistance, intestinal microbiota, health, metabolism, diet, gut microbiome, and metabolic syndrome (Table 7).
Co-Occurrence Analysis
Keywords could accurately reflect the main research point of an article, which has high condensation in a research field and can directly point to the center of the text. Therefore, a high frequency of keywords represents hot issues in a research field and research hotspots. The keyword co-occurrence graph for the effects of intestinal microbes on obesity is shown in Figure 5. The density value was 0.0457. The ten most frequent keywords were gut microbiota, obesity, inflammation, insulin resistance, intestinal microbiota, health, metabolism, diet, gut microbiome, and metabolic syndrome (Table 7). 5. Keywords co-occurrence network for the effect of intestinal microbes on obesity research (node label: by centrality, label font size: uniformed).
Cluster Analysis
The clustering analysis of the keywords yielded 288 nodes and 1083 links, with a Q value of 0.4123 (>0.3) and an S value of 0.7178 (>0.5), which are two important indicators to evaluate the significance of a clustering effect. There were eight clusters, including diversity, insulin resistance, short-chain fatty acids, activation, food intake, bariatric surgery, body composition, and nonalcoholic fatty liver disease ( Figure 6). "Diversity" #0 was the largest cluster, followed by "insulin resistance" #1 and "short-chain fatty acids" #2. Timeline View analysis was conducted to further analyze the keywords of the effects of intestinal microbes on obesity ( Figure S3).
Cluster Analysis
The clustering analysis of the keywords yielded 288 nodes and 1083 links, with a Q value of 0.4123 (>0.3) and an S value of 0.7178 (>0.5), which are two important indicators to evaluate the significance of a clustering effect. There were eight clusters, including diversity, insulin resistance, short-chain fatty acids, activation, food intake, bariatric surgery, body composition, and nonalcoholic fatty liver disease ( Figure 6). "Diversity" #0 was the largest cluster, followed by "insulin resistance" #1 and "short-chain fatty acids" #2. Timeline View analysis was conducted to further analyze the keywords of the effects of intestinal microbes on obesity ( Figure S3).
Burst Detection
CiteSpace was further used to perform burst detection with high frequency and therefore revealed research frontiers and hotspot trends over a period. As shown in Table S2, the first 25 keywords were sorted by strengths of burst to discover the research hotspots about the effects of intestinal microbes on obesity. Keywords such as "ecology" (strength 28.33), "human gut microbiota" (strength 27.34), "microbiota" (strength 23.44), "diet induced
Burst Detection
CiteSpace was further used to perform burst detection with high frequency and therefore revealed research frontiers and hotspot trends over a period. As shown in Table S2, the first 25 keywords were sorted by strengths of burst to discover the research hotspots about the effects of intestinal microbes on obesity. Keywords such as "ecology" (strength 28.33), "human gut microbiota" (strength 27.34), "microbiota" (strength 23.44), "diet induced obesity" (strength 21.12), and "irritable bowel disease" (strength 16) were the strongest keywords leading the research boom from 2013-2018. Keywords such as "obese" (strength 9.3) and "serum" (9.69) were the most recent trending keywords. Table S3 shows the top 25 references with the strongest citation bursts, which were recognized as a critical milestone and led the development direction of the field for a while.
Discussion
In this study, we performed a bibliometric analysis of the effects of intestinal microbes on obesity-related studies from 2013 to 2022 using the core collection of WoSCC to comprehensively understand global research trends and hotspots and provide references for researchers in this field or those who want to become involved in the field.
Global Trends in Effects of Intestinal Microbes on Obesity
The amount of annual scientific production is an important indicator of development in an academic field. In our study, a total of 8888 original articles from 1392 journals met the inclusion criteria. The article annual growth rate reached 22.93%, and the annual growth rate from 2019 to 2020 was the highest, shedding light on the increasing attention and expanding research exploration dedicated to this field.
The closeness of collaboration between countries/regions, institutions, authors, and journals was assessed, which can help to find the laws of scientific research cooperation, guiding more effective scientific research activities, and promoting potential collaborative opportunities for other groups.
According to Table S1, the total number of publications in the USA (87,065) was greater than that in China (65,250) and far greater than that in France (13,809), Canada (13,496), and the other top 10 most cited countries. The total citations of the United States and China are far more than the other top 10 most cited country citations combined. Without doubt, the United States and China demonstrate their position as leaders in the field. Notably, Israel, with 49 publications, was the most cited country per article (average article citations, 124.18), suggesting that the quality of research on the effects of intestinal microbes on obesity in Israel is very high. A similar story is unfolding in many European countries such as Belgium (average article citations, 107.85) and Sweden (average article citations, 103.95), etc., which sets a model for other countries/regions in this research field (Tables S1 and S4).
To improve the quality of articles, the reliability of the data source and the rigor of the experimental design may be the first factors affecting the quality of publications, and more attention should be given to these aspects in future studies.
According to Figure 2b, the bottom right is most dominated by universities from China, while the upper left is most dominated by schools in North America or Europe. Nevertheless, more cooperation between the two clusters could be conducted to catalyze breakthrough progress in research on the effects of intestinal microbes on obesity. Four of the top 10 institutions are based in China, followed by the United States with three, which are thereby maximizing regional advantages and demonstrating the dominance of the United States and China in the field. This may partly explain why China and the United States consistently maintain a high quantity of publications. The University of Copenhagen in Denmark was the most productive institution worldwide, followed by the Chinese Academy of Sciences in China, indicating that these two institutions participated in the most collaborations with other institutions worldwide. Although Spain and Canada ranked third and fourth in terms of total publications, only one of the Spanish research institutions ranked in the top 10, indicating a lack of institutions with professional and research stature in terms of the effects of intestinal microbes on obesity research (Tables 1 and 2). The most effective organizations and groups are leading the trends on the effects of intestinal microbes on obesity research; thus, further study at these institutions will ensure continuous future development in this field.
Notably, Prof. Patrice D. Cani from Université Catholique de Louvain, Belgium, has published the most articles with the highest centrality, mainly focusing on physiology, molecular metabolism, and nutrition. Researchers have emphasized the significant roles played by the intestinal microbiota in the development of diseases associated with overweight and obesity, including type 2 diabetes, cardiovascular diseases, and certain cancers. Prof. Patrice D. Cani discovered a very particular bacterium called Akkermansia muciniphila, which has beneficial effects on health by strengthening the intestinal barrier, decreasing body weight and fat mass gain while decreasing insulin resistance and diabetes [16]. Recently, his team discovered a new bacterium called Dysosmobacter welbionis, a completely new genus isolated from the human intestine. Interestingly, they found that this bacterium was present in the intestinal microbiota of the general population but was less abundant in the intestines of individuals with obesity and type 2 diabetes. Experimental studies have demonstrated that the administration of this bacterium improves the health of obese and diabetic mice [17]. The most cited co-author, Peter J. Turnbaugh, is from the University of California San Francisco, United States. He employed interdisciplinary approaches utilizing preclinical models and human cohorts to investigate the mechanisms by which the gut microbiome influences nutrition and pharmacology. His team observed that the consumption of diets exclusively composed of animal or plant products led to rapid changes in the structure of the gut microbial community. These changes were significant enough to override any pre-existing inter-individual differences in microbial community gene expression [18]. This research finding was published in the prestigious journal Nature and has become the most highly cited article in the field.
Conducting an analysis of the characteristics of international peer-reviewed journals enables us to gain insights into current research directions and keep pace with cutting-edge research in the field. Among the top 10 influential journals in the field of intestinal microbe effects on obesity research, three (Nutrients, Scientific Reports, PLOS ONE) were also among the top 10 co-cited journals. Undoubtedly, Nutrients, Scientific Reports, Food & Function, and PLOS ONE have emerged as the leading publications, publishing the most relevant articles and maintaining their popularity among researchers who wish to stay updated on the latest research trends. Notably, three of the top 10 co-cited references were published in Nature, one in PNAS, and one in Gut (Table 8).
Basic Knowledge of the Effects of Intestinal Microbes on Obesity Research
The most frequently cited references occupy a pivotal academic position in the field, and their conclusions, whether positive or negative, influence the research directions of others. In our study, the top co-cited references were used to investigate the knowledge base for research on the effects of intestinal microbes on obesity. The top 10 co-cited references consist of six articles, two letters, one correspondence, and one brief communication [16,18-26]. Among them, two articles and one letter experimentally demonstrated that A. muciniphila could decrease body weight and fat mass gain while reducing insulin resistance and diabetes; three articles and one letter focused on omics studies to uncover the mechanisms of the interactions between microorganisms and their hosts; one correspondence and one brief communication developed software and data analysis platforms to make more accurate and full use of sequencing data; and one article experimentally demonstrated the transmissibility of diet-by-microbiota interactions. Therefore, combining microbial sequencing with metabolomics analysis to examine changes in microbial metabolites is a widely employed research approach in this field. A. muciniphila has attracted significant attention for its impact on human obesity and its potential for obesity treatment. The impact of diet on gut microbiota remains a focal point of research (Table 8).
Current Hotspots and Field Development Predictions of the Effects of Intestinal Microbes on Obesity Research
High-frequency keywords are usually used to reveal the main research interests and hotspots in a field within a certain period. Burst detection, which helps researchers identify research trends across numerous studies, can effectively capture dramatic increases in references or keywords within a specified period; it therefore serves as an important indicator of research hotspots or research frontiers over time (Table S2). The keywords "obese patient" and "serum" began bursting in 2019 and 2020, respectively, and continue to burst, indicating that these two topics have received continuous attention in recent years and might represent the main trends of research on the effect of intestinal microbes on obesity. In the co-citation reference burst analysis, five references stood out prominently, focusing primarily on key bacterial metabolites such as short-chain fatty acids, succinate, and secondary bile acids. These references also discuss the benefits of A. muciniphila supplementation in improving metabolic parameters in obese patients, and they present the latest tools for microbiome data analysis, which warrant thorough exploration (Table S3). In conjunction with the keywords that continue to exhibit high burst values, we analyzed the current research hotspots and identified future research trends.
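As a rough illustration only of what burst detection captures, the sketch below flags years in which a keyword's yearly frequency rises well above its own historical baseline. The counts are hypothetical, and this simple z-score rule is a stand-in for, not a reimplementation of, the Kleinberg-style burst detection used in CiteSpace.

```python
from statistics import mean, stdev

# Hypothetical yearly counts for one keyword (e.g., "serum") during 2013-2022.
counts = {2013: 3, 2014: 4, 2015: 5, 2016: 4, 2017: 5,
          2018: 5, 2019: 9, 2020: 15, 2021: 18, 2022: 21}

def simple_burst_years(yearly_counts, threshold=2.0):
    """Flag years whose count exceeds the mean of all earlier years
    by `threshold` standard deviations (a crude stand-in for burst detection)."""
    years = sorted(yearly_counts)
    bursts = []
    for i, year in enumerate(years):
        history = [yearly_counts[y] for y in years[:i]]
        if len(history) < 3:
            continue  # need some baseline years before judging a burst
        mu, sd = mean(history), stdev(history)
        if sd > 0 and (yearly_counts[year] - mu) / sd >= threshold:
            bursts.append(year)
    return bursts

print(simple_burst_years(counts))  # -> [2019, 2020, 2021, 2022]
```

In CiteSpace itself, burst strength and duration are estimated from a two-state model, so flagged years from a simple rule like this are only indicative.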
Intestinal Microbes May Regulate Obesity through Bacterial Metabolites
There are complex interactions between the host and intestinal microbes in carbohydrate, amino acid, lipid, and nucleic acid metabolism. Intestinal microbes can use their respective metabolites to maintain intestinal viability while affecting the development, homeostasis and function of the host immune system through nutrition- and metabolite-dependent mechanisms [27]. Small molecules, such as vitamins, fatty acids, amino acids, and bile acids, regulate host-intestinal metabolic homeostasis by binding to specific host membrane or nuclear receptors [28,29]. Short-chain fatty acids (SCFAs), as one of the major products of microbial fermentative activity in the gut [30], can directly activate G protein-coupled receptors, inhibit histone deacetylases, and serve as energy substrates [31,32]. These G protein-coupled receptors are also called free fatty acid receptors (FFARs) because they sense free fatty acids. SCFAs facilitate gut-brain axis signaling by activating FFAR2 and FFAR3. This leads to increased satiety and reduced food intake, ultimately impacting the host's body weight [33,34] and other physiological responses, such as immunity, intestinal transit time, and inflammation [35-38].
Bile acids, such as cholic acid and chenodeoxycholic acid, facilitate dietary fat and fat-soluble vitamin absorption. Modified bile acids are referred to as secondary bile acids and include deoxycholic acid, lithocholic acid, and ursodeoxycholic acid [39]. Bile acids can directly and rapidly affect the metabolism of bacteria, including causing membrane damage and disrupting amino acid, nucleotide and carbohydrate metabolism, while short-term exposure to bile acids significantly affects host metabolism by altering the bacterial community structure [40]. Research has found that supplementation with Parabacteroides distasonis in mice can modulate the composition of bile acids in the gut, resulting in decreased weight gain, reduced hyperglycemia, and alleviated hepatic steatosis in ob/ob and high-fat diet (HFD)-fed mice [41].
According to the above analysis, although notable advances have been made in understanding the effects of intestinal microbes on obesity, our understanding of the interrelationships between them remains largely descriptive, and numerous gaps remain to be filled.
Microbiomes May Act as Metabolic Markers of Obesity
Human obesity is a heterogeneous condition in terms of pathogenesis, pathophysiology, and therapeutic responsiveness. Studies of alterations in the gut microbial metagenome may define subsets of adult individuals with different metabolic risk profiles, which could help resolve some of the heterogeneity associated with adiposity-related phenotypes [21]. With the help of sequencing, recent research found that the richness of the human gut microbiome correlated with host metabolic markers. Individuals with low bacterial richness were characterized by more marked overall adiposity, insulin resistance, and dyslipidemia, as well as a more pronounced inflammatory phenotype, when compared with individuals with high bacterial richness [21]. Correspondingly, higher gut microbiome gene richness and A. muciniphila abundance were associated with the healthiest metabolic status, particularly for fasting plasma glucose, plasma triglycerides, and body fat distribution [19]. Research has also shown that transmissible and modifiable interactions between diet and microbiota influence host biology: in co-housed mice, invasion of members of the Bacteroidales from the lean (Ln) into the obese (Ob) microbiota prevented the development of increased adiposity and body mass in Ob cage mates and shifted their microbiota metabolic profile to a lean-like state. Therefore, it may be possible to intervene in obesity by targeting the gut microbiota [22]. The gut microbiome can also rapidly respond to an altered diet, and a dynamic balance between intestinal microbes and diet can be achieved almost immediately, indicating that not only long-term but also short-term diet can influence the gut microbiome [18].
All these findings suggest that gut microbiome richness is a key factor in maintaining the homeostasis of human health and may act as a metabolic marker of obesity.
Gut Microbes Could Be a Target for Obesity Treatment
Bariatric surgery is considered the only effective and sustainable weight loss method for obese patients. There is a 50-70% decrease in body weight and fat mass after surgical procedures such as Roux-en-Y gastric bypass (RYGB) and sleeve gastrectomy (SG) [42]. Nevertheless, obese patients undergoing bariatric surgery may experience overgrowth of small intestinal bacteria [43], a condition that interferes with weight loss and increases the risk of micronutrient deficiency, and that appears to be harmful to the configuration and composition of the intestinal microbiota [44,45]. To maintain the effect of surgery and avoid weight regain, it is important for obese patients to correct the microbial balance and improve microbiota-host interactions with specific interventions [46]. Research has also shown that the elevated pH resulting from RYGB could ensure the survival of probiotic bacteria, making it possible for surgical patients to receive probiotic therapy (80). Probiotics are active microorganisms beneficial to the host; they colonize the human intestinal tract and reproductive system, can improve the host's microecological balance, and play a beneficial role. Bariatric surgery is often followed by an increase in Streptococcaceae and a decline in Bifidobacteriaceae [42]. Proper supplementation with probiotics can compensate for the intestinal microbial imbalance caused by surgery. Moreover, recent research showed that probiotic intervention could increase the levels of peptide YY and GLP-1 in mice [47], reduce the level of intestinal inflammation [48], and induce the production of anti-inflammatory cytokines [49]. Meanwhile, intestinal peptide signals activate the gut-brain axis, which, in turn, exerts endocrine effects on other organ systems, especially the brain, regulating appetite, metabolism, and other dietary behaviors [33,34,50,51]. Consequently, weight loss and reduced insulin resistance occur.
Weight-loss intervention by bariatric surgery partially reversed obesity-associated microbial and metabolic alterations in obese individuals [26]. This means that external interventions can affect intestinal microbes. It may be possible to intervene in obesity by targeting the gut microbiota [22], and there is already evidence supporting this speculation. A. muciniphila is a mucin-degrading bacterium that resides in the mucus layer. One study found that A. muciniphila decreased in obese and type 2 diabetic mice, and that A. muciniphila treatment reversed high-fat diet-induced metabolic disorders, including fat mass gain, metabolic endotoxemia, adipose tissue inflammation, and insulin resistance [16]. Pasteurization of A. muciniphila enhanced its capacity to reduce fat mass development, insulin resistance, and dyslipidemia. Moreover, Amuc_1100, a purified membrane protein from A. muciniphila, could improve the gut barrier and partly recapitulate the beneficial effects of the bacterium [23]. All these findings indicate that it may be feasible to use microbes to treat obesity, but more research is needed to support this hypothesis. However, due to the heterogeneity of experimental techniques, methods, and subjects, the effectiveness and safety of microbial product interventions targeting the intestinal microbiota to improve obesity need further verification, and the mechanisms remain unclear. Few studies have focused on the development of new functional microbiological products, and most have been carried out in mice. Before microbial products can be reasonably and effectively used as a treatment for obesity, a large number of studies are needed, especially randomized controlled clinical trials.
New Technology Promotes In-Depth Research on the Role of Intestinal Microbes in Obesity
Among the top 10 co-cited references, we found that second-generation sequencing was a vital technology; its high-throughput nature makes it easy to sequence the transcriptome or genome of a species [52]. With the development of sequencing technology, a large amount of data is generated. To make more accurate and full use of sequencing data, software and data analysis platforms have also been developed. An open-source software package named DADA2 was developed for modeling and correcting Illumina-sequenced amplicon errors. With the help of this software, researchers can accurately reconstruct amplicon-sequenced communities at the highest resolution, which ensures the accuracy of the research to the greatest extent [24]. Another tool, QIIME 2, serves not only as a marker-gene analysis tool but also as a multidimensional and powerful data science platform that can be rapidly adapted to analyze diverse microbiome features [25]. These tools help drive rapid development in microbiome research (Table 8). Serum metabolomics has emerged as a technique that focuses on defining the functional status of host-microbial relationships in biological specimens, reflecting dynamic changes in metabolites and enabling exploration of disease-related metabolites or dysregulated metabolic pathways (metabolome analysis for investigating host-gut microbiota interactions). One of the top 10 co-cited references characterized gut microbiome and serum metabolome alterations in obesity, including a decreased abundance of Bacteroides thetaiotaomicron and an elevated serum glutamate concentration, and showed that these obesity-associated alterations were partially reversed in patients after bariatric surgery, which again suggests that it may be possible to intervene in obesity by targeting the gut microbiota [26].
Strengths and Limitations
This bibliometric study conducted a systematic analysis of the basic situation, research hotspots, and trends in the effects of intestinal microbes on obesity from a visualization perspective. The results of the bibliometric analysis are objective and accurate and could provide a comprehensive guide for academics who are already working or wish to work in this field. Nevertheless, there are still some limitations to our study. First, owing to the nature of the CiteSpace software, our data were filtered only from the WoSCC database, which is not fully comprehensive and may lead to data omission. Second, our results were processed by CiteSpace software with certain algorithms, which could introduce bias into some of the results. Finally, only English-language articles were included in the database search and analysis, potentially leading to source bias.
Conclusions
The effect of intestinal microbes on obesity was investigated by bibliometric analysis in the current study over the period of 2013 to 2022. Compared with other review articles, the contribution of the study is evident in its visualized ways of revealing the countries/regions, institutions, co-authorship of authors, journals and co-cited journals, and popular keywords and references that exert great influence in research on the effect of intestinal microbes on obesity. Our study highlights the potential role of intestinal microbes in regulating obesity through metabolite interactions and identifies microbiomes as potential metabolic markers for obesity. While bariatric surgery is widely recognized as the most effective and sustainable weight loss method for obese patients, our findings suggest that targeting gut microbes could also hold promise as a future approach to obesity treatment. Nevertheless, cooperation and communication between countries and institutions should be strengthened to promote development in this field and to benefit more patients with obesity.
Supplementary Materials: The following supporting information can be downloaded at: https:// www.mdpi.com/article/10.3390/nu15143255/s1, Table S1: Top 10 most local cited countries/regions related to the effect of intestinal microbes on obesity, Table S2: Top 25 burst keywords in articles related to the effect of intestinal microbes on obesity, Table S3: Top 25 burst references in articles related to the effect of intestinal microbes on obesity, Table S4: Top 10 most cited countries/regions related to the effect of intestinal microbes on obesity, Figure S1: Country collaboration map. The red line indicates that there was collaboration between two countries, Figure S2: Dual-map overlay related to the effect of intestinal microbes on obesity, Figure S3: Cluster timeline view map of keyword analysis for the effect of intestinal microbes on obesity research.
Author Contributions: Conception and design, Z.S. and J.G.; data collection, Z.S. and C.T.; data analysis and interpretation, Z.S. and C.T.; the methodological process was guided by X.Y.; writing-original draft preparation, Z.S.; writing-review and editing: G.W., J.G. and X.Y. All authors have read and agreed to the published version of the manuscript.
Funding: This study was funded by West China Hospital, Sichuan University (Project No. 339180262).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data used to support the findings are available from the corresponding author upon request. | 2023-07-26T15:09:53.359Z | 2023-07-01T00:00:00.000 | {
"year": 2023,
"sha1": "bd6b70d9f0ea9095db44137ba1566b7b3c011da5",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/15/14/3255/pdf?version=1690085772",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "63a6a0d015693ea268d4e2b128720398ddf5b3ba",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
239729563 | pes2o/s2orc | v3-fos-license | TREATMENT OF MANDIBULAR CONDYLE FRACTURE-A COMPARISON OF TWO PROTOCOLS
Objectives: To compare the outcomes of two protocols (open reduction and closed reduction) in the treatment of mandibular condyle fracture. Study Design: Comparative study. Setting: Department of Oral & Maxillofacial Surgery, Institute of Dentistry, Liaquat University of Medical and Health Sciences Jamshoro. Period: January 2016 to September 2016. Material & Methods: After selection of patients into either group, for the closed reduction technique, maxillomandibular fixation was applied for 4-6 weeks. For open reduction, a preauricular incision was given and the fracture was reduced and fixed with miniplates after achieving normal occlusion. Postoperatively, patients of both groups were recalled after one, two and three months for the assessment of functional effects such as occlusion and range of motion. Results: The mean age of the patients was 29.80±4.42 years in group 1 (open reduction group), while the mean age in group 2 (closed reduction group) was 23.95±6.63 years. Males were in the majority in both groups: 17 in group 1 and 16 in group 2. Of 20 patients in group 1, 11 had condylar neck fractures and 9 had subcondylar fractures, while in group 2, 13 had subcondylar fractures and 7 had condylar neck fractures; no significant difference was found between the groups according to the location of the fracture (p-value 0.20). Group 1 (open reduction) had better functional results in terms of occlusion and range of motion after 3 months compared with group 2 (closed reduction) (p-value 0.19). Conclusion: In dealing with mandibular condyle fractures, open reduction and internal fixation is a better technique than closed reduction with maxillomandibular fixation.
INTRODUCTION
The mandible plays a compound role in facial esthetics and efficient occlusion. Owing to its prominent position in the lower jaw, mandibular fractures are the most common fractures of the facial skeleton, accounting for 36% to 59% of all maxillofacial fractures. 1,2 The distribution of mandibular fracture sites varies across centers and countries, with the condyle, angle and symphysis being the most frequently encountered sites. 3 Condylar fracture occurs most frequently, accounting for about 30% of mandibular fractures in dentate patients and 37% in edentulous patients. 4 Pain, limited mandibular movement, muscle spasm, deviation of the mandible, malocclusion, pathological changes in the temporomandibular joint, facial asymmetry and ankylosis are the most commonly encountered problems with fractures of the mandibular condyle. 5 Mandibular condyle fractures can be classified as unilateral or bilateral, according to fracture position (condylar head, condylar neck and subcondylar) and according to the degree of displacement (non-displaced, deviated, displaced and dislocated). 4 The common etiological factors of mandibular condyle fractures are road traffic accidents, falls, firearm injuries, interpersonal violence, sports injuries and industrial accidents. 6 The treatment options for unilateral condylar fractures are open reduction and internal fixation or closed reduction with maxillomandibular fixation; for either technique, the success of treatment relies on the restoration of normal dental occlusion and bony union. 7 Both methods of management have their own advantages and disadvantages. Closed reduction does not disturb the vascular envelope, is less costly for the patient, poses no risk to the facial nerve and can be managed under local anesthesia, but it has the drawbacks of keeping the mouth closed and an extended period of immobilization. It also requires intact dentition, is uncomfortable for patients and demands considerable patient compliance in maintaining oral hygiene. The advantages of open reduction are direct visualization and reduction of the fractured bone segments and restoration of the patient's pre-injury occlusion without maxillomandibular fixation; this permits bony healing in a shorter period of time (primary bone healing), along with better nutrition and oral hygiene. However, foreign-body infections are more likely with this method, and it carries a significant risk of facial nerve traction injury. 8 The recommended treatment of bilateral mandibular condyle fractures is open reduction and internal fixation of at least one condyle, followed by maxillomandibular fixation for a certain period of time. The management of unilateral condylar fractures remains controversial, as some favor closed reduction with MMF while others prefer open reduction with internal fixation. 9,10 The aim of this study was to compare the outcomes of unilateral mandibular condyle fractures treated by open reduction and internal fixation versus closed reduction with maxillomandibular fixation.
MATERIAL & METHODS
This comparative analytical study, using a non-probability purposive sampling technique, was conducted at the Department of Oral and Maxillofacial Surgery, Hyderabad/Jamshoro. The total sample was divided into two groups:
Group A (Open Reduction) = 20 patients
Group B (Closed Reduction) = 20 patients
INCLUSION CRITERIA
• Patient having unilateral non-comminuted low condylar fracture.
• Patient in age group 15-50.
• Patient with either gender.
• Patient with sufficient bilateral dentition to allow Maxillomandibular Fixation.
EXCLUSION CRITERIA
• Patient with bilateral condyle fracture.
• Patient with undisplaced condyle with normal occlusion.
• Patient not medically fit to undergo surgical intervention.
• Patient not willing to participate in this study.
DATA COLLECTION TECHNIQUE
Patients meeting the inclusion criteria, presenting through the Out Patient Department (OPD) or the emergency department, were included in this study. Personal details of each patient, including name, age, gender and hospital registration number, were recorded on a proforma. Diagnosis of mandibular condyle fracture was made on the basis of clinical examination, orthopantomography (OPG) and a P.A view of the face. Random allocation was done by the lottery method: odd numbers were treated with open reduction and internal fixation, and even numbers were treated with closed reduction with maxillomandibular fixation.
For open reduction and internal fixation, the patient was prepared for general anesthesia and the surgical area was cleaned with antiseptic before starting surgery. The fracture was reduced manually with instruments and fixed with miniplates (5-hole titanium miniplates with four 6 mm screws; Moin International, made in Pakistan). The patient was discharged from the hospital and asked to return for follow-up after one, two and three months for assessment of functional effects such as occlusion and range of motion.
For closed reduction with maxillomandibular fixation, once the arch bar was secured, cross intermaxillary wires were used to obtain pre-injury occlusion and fixation. The patient's mouth was closed, and the patient was advised to take only a soft and liquid diet for the next 4-6 weeks. Standard antibiotics with an analgesic in syrup form were prescribed for 5 days. Oral hygiene instructions were given and a mouthwash was prescribed to maintain good oral hygiene. The patient was discharged from the hospital and asked to return for follow-up after one, two and three months for assessment of functional effects such as occlusion and range of motion. 18

DISCUSSION
Various studies have reported that displacement of the condyle into the middle cranial fossa, inappropriate occlusal restoration by closed reduction and lateral extracapsular displacement are absolute indications for open reduction and internal fixation. 19-22 In our study, on functional assessment in terms of occlusion and range of motion after 2 and 3 months, group 1 (patients who underwent open reduction and internal fixation) had better occlusion and range of motion than group 2 (closed reduction with maxillomandibular fixation), but no significant difference was found (p-value 0.06). In the study by Santler et al, 23 two hundred and thirty-four patients with fractures of the mandibular condylar process were treated by open or closed methods. On the basis of radiological, objective and clinical examination, 150 patients with a mean follow-up time of 2.5 years were analyzed. When comparing surgically and non-surgically treated cases, no noteworthy differences in mobility, joint problems, occlusion, muscle pain or nerve disorders were observed. Marker et al 24 reported that treating condylar fractures by closed reduction is non-distressing, safe and reliable and rarely causes functional instability or malocclusion.
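As an illustrative aside on how such a between-group comparison of a categorical outcome (e.g., good versus deranged occlusion at follow-up) can be computed, the sketch below uses entirely hypothetical counts that are not taken from this study:

```python
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: rows = treatment group (ORIF, closed reduction),
# columns = occlusion at 3 months (good, deranged)
table = [[18, 2],
         [14, 6]]

chi2, p, dof, expected = chi2_contingency(table)
print(p)                    # chi-square p-value (Yates' correction applied for 2x2)
print(fisher_exact(table))  # Fisher's exact test is preferable with small counts
```

With only 20 patients per group, an exact test is generally the safer choice, which is consistent with the cautious interpretation of the borderline p-values reported above.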
CONCLUSION
It was concluded that open reduction and internal fixation is a better technique than closed reduction with maxillomandibular fixation in the treatment of mandibular condyle fractures. Open reduction and internal fixation showed good occlusion and range of motion up to the 2-month follow-up, while at the three-month follow-up the outcome was non-significant between the two groups.
More randomized research is needed for further confirmation and more accurate results, as the sample size was too small for definite conclusions to be drawn. Copyright© 26 Dec, 2019. | 2020-10-28T19:22:29.718Z | 2020-10-10T00:00:00.000 | {
"year": 2020,
"sha1": "54709a1ed881e54386185d4d6563a24b028b4cd6",
"oa_license": null,
"oa_url": "http://theprofesional.com/index.php/tpmj/article/download/4310/4287",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d278e2188d1559912e5b472094bef7bf3cd389e4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267381396 | pes2o/s2orc | v3-fos-license | Sociodemographic factors, health behavior, parental or workplace smoking, and adult asthma risk in the United States
BACKGROUND: Although several studies have found a link between parental or workplace smoking and asthma risk, particularly in children and adolescents, only a few studies have found this link in adults. OBJECTIVE: This study aimed to investigate the associations of sociodemographic factors, health behavior, and parental or workplace smoking with adult asthma risk in the United States (US). METHODS: A secondary data analysis on 874 participants aged 25–45 was performed using data from the 2011–2014 National Survey of Midlife Development in the United States Refresher. Participants were divided into smokers and nonsmokers. Participants were further divided into groups A (a father or mother with a smoking history) and B (others in the house or colleagues in the workplace who had a smoking history). RESULTS: Findings from the FREQ procedure revealed that sociodemographic (female, black, school or college education, unmarried/divorced, and employed) and lifestyle (no alcohol intake, physically inactive, and obese) and clinical (diabetes and joint disease) factors were significantly associated with one- or more-fold odds of asthma among adult smokers than nonsmokers. Adult smokers in group A, particularly females, those with a high school or college education, physically inactive, and overweight or obese, had a higher risk of asthma than those in group B. CONCLUSION: Adult smokers’ risk of developing asthma is increased in the US by having smoked with their parents, being a woman, being black, having a school or college education, being single or divorced, working, not drinking alcohol, being physically inactive, being obese, having diabetes, and having a joint disease.
Introduction
Asthma has been recognized as a significant public health problem since the 1970s [1]. The prevalence of asthma worldwide has increased by 12.6% since the 1990s [2]. The global prevalence of self-reported, doctor-diagnosed, and treated clinical asthma in adults was estimated to be 4.3% and 4.5%, respectively, with a 21-fold variation among 70 countries (0.2% in China and 21.5% in Australia) [3]. The five countries with the highest prevalence of clinical asthma were Australia (21.5%), Sweden (20.2%), the United Kingdom (UK) (18.2%), the Netherlands (15.3%), and Brazil (13.0%) [3]. Approximately 9% of the population of the United States (US) [4] and 6.5% of those in the UK have asthma [5]. From 2000-2010, the average cost per asthma-related hospital stay in the US for adults increased from $5,200 to $6,600.
Smoking leads to acute bronchoconstriction and lung damage [6], which reduces lung function [7]. Additionally, smoking reduces responsiveness to glucocorticoids [8], which leads to an increase of T-lymphocytes (mainly CD8 cells [9] and macrophages) within the airway walls. Thus, in smokers, the peripheral airways are infiltrated by mononuclear cells and macrophages [10]. Previous studies have shown that the rate of decline in lung function is more accelerated in smokers with asthma than in nonsmokers without asthma [11]. Moreover, an accelerated decline in lung function over time has been found in asthmatic individuals [12] due to their lower forced expiratory volume in one second.
Numerous studies have explored the risk factors for asthma among children [13,14], especially those living in the US [15,16]. These studies identified older age, obesity, low socioeconomic status, smoking, and physical inactivity as asthma risk factors. One study also found that a group of US children, tweens, and adults had poor asthma outcomes because of ongoing exposure to modifiable factors, such as tobacco smoke, being overweight, modest physical activity, and limited family incomes [16]. One risk factor, as well as a common asthma trigger, is cigarette smoking, mainly passive (secondhand) smoking [17]. A widespread and avoidable risk factor, passive smoking exposes individuals to more than 7,000 chemicals, hundreds of which are toxic and may cause cancer [18]. Parental smoking is a common source of passive smoking.
Several studies have shown that parental smoking history is significantly associated with asthma risk, especially among children and adolescents [19,20]. Limited studies have shown this association in adults [21]. However, these studies differ methodologically (e.g., in setting, population, and self-reported smoking history). Thus, the current research investigates the associations of sociodemographic (sex, race, education, marital status, and employment status), lifestyle (alcohol intake, physical inactivity, obesity, and central obesity), and clinical (diabetes, joint, and cardiovascular diseases) factors, and parental or workplace smoking, with adult asthma risk in the US. This research hypothesized that those who grew up living with smokers, especially a father or mother, could be at increased risk of developing asthma.
Methods
For this secondary analysis, data was used from the National Survey of Midlife Development in the United States (MIDUS) refresher conducted between 2011 and 2014, as previously explained [22]. The 2011-2014 MIDUS refresher study data was selected because the new national probability sample was intended to expand and refresh the MIDUS study overall by recruiting a new sample with demographics similar to those of MIDUS 1 participants. The MIDUS refresher was an interdisciplinary study investigating psychosocial factors and health in adults by collecting demographic and biomarker data through an initial structured telephone interview and clinic visit. The combined response rate for the telephone interview and clinic visit was weighted to all eligible respondents across sample types to obtain unbiased estimates by including them in the analyses. The recruitment and assessment procedures utilized for the MIDUS refresher are available elsewhere [23].
The collected data in the MIDUS refresher complied with the data protection and privacy regulations of the National Archive of Computerized Data on Aging (NACDA). The institutional review boards (IRBs) of Harvard University, Georgetown University, the University of California at Los Angeles, and the University of Wisconsin approved the MIDUS refresher. All participants in the MIDUS refresher provided written informed consent. The current study does not require IRB approval because it is a secondary analysis of the original MIDUS refresher data; the data contain no personal information, and appropriate coding prevents retrieval of identifying information. Since there were no human participants in this study, no consent was needed from the participants.
In this study, 874 adults aged between 25 and 45 years with smoking status data were included, after excluding those who answered "don't know" or "refused" (n = 588). Individuals older than 45 years (n = 2,115) were also excluded. The inclusion was limited to 25-45 years of age due to an overlap in the diagnosis of adult-onset asthma in the older age group, which was particularly apparent in the survey-collected data. All participants were categorized as smokers or nonsmokers based on self-reported answers to the following question: "Have you ever smoked cigarettes regularly-that is, at least a few cigarettes every day?" If a person answered "no," they were considered a nonsmoker, whereas they were classified as a smoker if they answered "yes." This question was broadly similar to a question asked in a previous study [24].
Parental smoking was determined based on the following self-reported question: "When you were growing up-that is, during your first 16 years-did you live with anyone in your household who smoked cigarettes or other tobacco products (other than yourself)-FATHER or MOTHER?" Workplace smoking was defined by asking, "While at your job in the past, did anyone regularly smoke cigarettes or other tobacco products in your immediate work area (other than yourself)?" Group A comprised participants who answered "yes" to the first question, which was defined as parental smoking, while Group B comprised those who answered "yes" to the second question.
The outcome of asthma was assessed based on self-reporting: "In the past twelve months, have you experienced or been treated for asthma?" If a person answered "no," they were considered to have no asthma, whereas those who answered "yes" were considered to have asthma. This question was similar to a question in a previous study [3].
This research included a series of sociodemographic, lifestyle, and clinical factors such as sex, race, education, marital status, employment status, alcohol intake, physical inactivity, obesity, diabetes, and joint and cardiovascular diseases. The following variables were dichotomized: male (used as a reference) and female; white (used as a reference) and Black/Asian; school/college and graduate (used as a reference); married (used as a reference) and unmarried/divorced/widowed; employed and unemployed (used as a reference); and alcohol consumption (yes and no), with alcohol intake used as the reference.
Physical inactivity was assessed based on the self-reported question: "How often do you engage in vigorous physical activity?" [25] Physical inactivity was determined if the participants answered "once a month," "less than once a month," or "never" [26]. Obesity was assessed using body mass index (BMI), which is calculated by dividing an individual's weight in kilograms (kg) by height in meters (m) squared. The participants' height (in inches) was multiplied by 0.0254 to obtain the height in meters. The weight (in pounds) was multiplied by 0.4536 to obtain the mass in kilograms. If height was greater than 84 inches, it was set at 84 inches to restrict extremes.
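As a worked illustration of the unit conversions described above (the function name and example values are illustrative and are not taken from the MIDUS codebook), the BMI derivation can be written as:

```python
def bmi_from_imperial(height_in: float, weight_lb: float) -> float:
    """Compute BMI (kg/m^2) from height in inches and weight in pounds,
    applying the conversions and the 84-inch height cap described above."""
    height_in = min(height_in, 84)   # cap extreme heights at 84 inches
    height_m = height_in * 0.0254    # inches -> meters
    weight_kg = weight_lb * 0.4536   # pounds -> kilograms
    return weight_kg / height_m ** 2

# Example: a 70-inch, 180-lb participant
print(round(bmi_from_imperial(70, 180), 1))  # ~25.8
```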
Central obesity was determined using a self-reported question: "What is your waist size-that is, how many inches around is your waist?" Participants were instructed to measure their waist circumference (WC) at the level of the navel at its widest point using a non-stretchable tape. Participants were asked not to measure over clothing (including thin clothing) and to record answers to the nearest quarter (¼) inch. The status of other diseases (e.g., cardiovascular diseases, diabetes, and joint/bone diseases) was measured using the dichotomous indicators "yes" and "no" (reference).
The Farrington-Manning test was used to estimate the required sample size for each group, using a significance level of alpha = 0.05, a power of 0.8, and proportions between groups of 0.32 and 0.21 [27]. Based on this test, the minimum sample for each group was 213. Further, the Kolmogorov-Smirnov test was used to assess the normality of the continuous variables [28]. The data were presented as the mean (standard deviation) for continuous variables and the number (percentage) for categorical variables. The chi-square test was used for categorical variables and independent Student's t-tests for continuous variables to identify statistically significant differences between smokers and nonsmokers.
The FREQ procedure was used to assess the prevalence of smokers, nonsmokers, and groups (A and B). The associations of demographic, behavioral, and clinical risk factors with the risk of asthma among adult smokers compared with nonsmokers were also examined using the FREQ procedure [29]. A multivariable regression analysis was performed to test for possible confounding interactions affecting asthma occurrence and to check for a high degree of collinearity. For this analysis, the following conditions were applied: tolerance values should not be below 0.1, and eigenvalues should not be close to 0. The FREQ procedure was also used to assess the prevalence and risk of asthma according to demographic, behavioral, and clinical risk factors by group (A and B) among smoking adults. The significance of the association between asthma and exposure to smoke during childhood was determined using the chi-square test. Furthermore, odds ratios (ORs) with associated 95% confidence intervals (CIs) were calculated.
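For illustration, an odds ratio with a Wald-type 95% confidence interval for a 2×2 exposure-by-asthma table can be computed as follows; the counts below are hypothetical and are not taken from this study:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed with asthma, b = exposed without,
    c = unexposed with asthma, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 40/450 smokers with asthma vs 15/370 nonsmokers
print(odds_ratio_ci(40, 450, 15, 370))  # OR ≈ 2.19 (95% CI ≈ 1.19-4.03)
```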
The age of asthma onset in groups A and B was assessed using unadjusted and adjusted multiple linear regression models. The adjusted model included age, sex, race, education, marital status, employment status, alcohol intake status, obesity, central obesity, joint/bone disease, diabetes, and cardiovascular disease. The Kaplan-Meier (log-rank) test was used to assess the relationship between childhood smoke exposure and the onset of asthma in groups A and B. All statistical analyses were performed using SAS version 9.4 (SAS Corporation Inc., Cary, NC, USA), and statistical significance was defined as a p-value <0.05.
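A minimal sketch of this survival comparison, using the Python lifelines package in place of SAS and entirely simulated data, might look like:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Hypothetical age at asthma onset (years); event = 1 if asthma occurred,
# 0 if censored at the participant's current age.
age_a, event_a = rng.uniform(10, 40, 50), rng.integers(0, 2, 50)
age_b, event_b = rng.uniform(10, 40, 50), rng.integers(0, 2, 50)

km = KaplanMeierFitter()
km.fit(age_a, event_observed=event_a, label="Group A (parental smoking)")

result = logrank_test(age_a, age_b,
                      event_observed_A=event_a, event_observed_B=event_b)
print(result.p_value)  # analogue of the log-rank p-value reported in the Results
```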
Results
Of 3,577 participants, 874 were included in the analysis after excluding those with missing data or who refused to fill in the questionnaires (n = 588, 16.4%) and those older than 45 years (n = 2,115, 59.1%). The study included 490 (56.1%) smokers and 384 (43.9%) nonsmokers (Fig. 1). A significantly greater proportion of adult smokers had attained a high school/college education (70.1%) (Table 1). The prevalence of asthma was higher in smokers (11%) and group B (6.2%) than in nonsmokers and group A (Fig. 2).
Female sex, Black race, school/college education, unmarried/divorced status, employed status, no alcohol intake, physical inactivity, obesity, diabetes, and joint diseases were significantly associated with one- or more-fold odds of asthma. Diabetes and joint/bone diseases had two- or more-fold odds of asthma among adults who smoked (Fig. 3). The prevalence of asthma was greater (10% or more) in adult smokers who grew up living with a father or mother who had a smoking history, especially among males with a school/college education, physical inactivity, and overweight/obesity. Adult smokers, especially those with physical inactivity and overweight/obesity, were more likely to have asthma (10% or more) if they grew up living with others in the house or workplace who had a smoking history (Fig. 4).
Female sex, black race, school or college education, being unmarried or divorced, being employed, having no alcohol intake, being physically inactive, being obese, having diabetes, and having joint diseases were significantly associated with one- or more-fold odds of asthma among adults who smoked and grew up living with a father or mother who had a smoking history (Fig. 5). The mean age of asthma onset varied significantly by smoking history (18.3 years in group A and 18.5 years in group B, p = 0.021) after adjusting for covariables (Fig. 6).
The results comparing the age at which individuals began to smoke regularly and the probability of asthma onset for groups A and B are shown in Fig. 7. The log-rank test for homogeneity indicated no significant difference between groups regarding the age at which smoking began and asthma onset (p = 0.150). The Kaplan-Meier curve estimated that the probability of asthma onset among smokers aged 30 years was about 0.8 for group B and 0.4 for group A.
Discussion
The results showed that female sex, black race, school or college education, unmarried or divorced status, employed status, no alcohol intake, physical inactivity, and obesity were associated with an increased risk of asthma. Moreover, adult smokers who grew up living with smokers, especially a father or mother, were at increased risk of asthma compared with workplace smoking. The mean age of asthma onset varied slightly among those living with smokers, whether a father, mother, others in the house, or colleagues at the workplace. About 60% of adult smokers who grew up with a smoking father or mother developed asthma.
The findings of this study were broadly consistent with a previous study that showed that both parents' smoking more than half a pack of cigarettes a day was significantly associated with a 2.9-fold risk of physician-reported asthma and a 3.3-fold risk of current asthma among participants in grade 7 [30]. Another study found that cigarette smoking was consistently associated with prevalent asthma in a sample of 57,779 adults aged 20 years or older from six geographical regions in China [21]. They also showed that a 10-year increase in age was associated with an increased risk of asthma in primary school graduates or those without formal education. A possible trigger of asthma among smokers might be the increase in T-lymphocytes, mostly CD8 cells and macrophages, within the airway wall, and the infiltration of mononuclear cells and macrophages into the peripheral airways [8,31]. A recent study revealed that children for whom both parents smoked had the highest urine 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanol and cotinine concentrations [32].

Similar to the present study's findings, previous research showed that female sex, black race, an annual family income <$50,000, physical inactivity, and obesity were individually associated with increased risk of asthma development, especially in US children, adolescents, and adults [34]. One possible reason for this finding may be the failure to reach the recommended level of 15 or more minutes of moderate activity five or more days per week, as per the Centers for Disease Control [34]. However, our educational finding was somewhat contradictory to an earlier US study, which showed that higher academic degrees had been achieved by individuals with asthma than by those without asthma [35].
Alcohol contains histamines and sulfites, which are known to exacerbate asthma symptoms in certain people [36]. It is important to keep in mind that not everyone who consumes alcohol experiences the development or exacerbation of asthma [37]. According to a previous study, consuming wine is related to a lower risk of developing asthma, which suggests that it may even help prevent asthma [38]. The finding of this study that there was no association between alcohol consumption and the risk of developing asthma could be explained by potential residual confounding, such as socioeconomic or lifestyle factors. Therefore, this finding from the present study raises doubt about the idea that drinking alcohol increases asthma risk.
Although the relationship between diabetes and asthma in adults has not been thoroughly investigated, one study found that type 1 diabetes was associated with a significantly higher incidence of asthma in patients aged < 8 years than in the general population [39]. That study provided a possible explanation for the association between diabetes and asthma onset at the pathophysiological level. Another study found that asthma onset was higher among adults with either type 1 or 2 diabetes than those without diabetes [40]. This risk of asthma onset in those with type 2 diabetes might be due to obesity, insulin resistance, or metabolic syndrome [41,42]. Some studies found that arthritis was associated with a higher risk of asthma [43].
Interestingly, a recent study [44] contradicts the current study's finding that adult smokers who are single or divorced are much more likely to develop asthma. Another previous study [45] found that people with asthma had higher rates of alcohol use as they aged, which also contradicts the present results. Geographical variation and differences in study populations may partly explain these discrepancies.
The primary strength of this study was the large sample size, with more than 80% power, which allowed statistically significant relationships to be detected. However, this study also had some limitations. The first limitation was the use of self-reported questionnaires; self-reported asthma may have introduced recall bias and misclassification. Second, the cross-sectional design of this study somewhat limited inference about the associations between sociodemographic, behavioral, and clinical factors and asthma risk in adult smokers [46]. Third, the smoking history measure applied only to parents, others in the house, and colleagues at the workplace. Fourth, the method used to categorize participants' smoking status could be misleading, as it neglects any effect of the degree of smoke exposure in ex-smokers and current smokers. Finally, data on the quantity of smoking, medication for asthma, type and quantity of alcohol intake, asthma severity, and family history of asthma were not available, which would have been useful in our study.
While asthma is not yet curable, symptoms can typically be improved by identifying triggers. Modifiable triggers, such as cigarette smoking and exposure to parental smoking, are significant targets for asthma prevention [47,48]. Additionally, the effectiveness of medications may be reduced by cigarette smoking or passive smoking [49]. The harmful effects of passive smoking therefore underline the need to maintain a smoke-free environment [50]. Comprehensive national anti-smoking policies are needed to achieve this environment; laws that limit smoking decrease the number of people hospitalized for asthma [51]. In addition, a prior study showed that a long-term, multi-component workplace health promotion program could alter smoking behavior, attitudes about smoking, and readiness to stop smoking [52].
Our findings could help inform patient education, one of the four essential components of asthma management, along with monitoring symptoms and lung function, controlling triggering factors and comorbid conditions, and providing pharmacologic therapy [53]. Since asthma severity varies among people depending on socioeconomic, behavioral, and clinical characteristics, patient education and physical exercise are essential for decreasing asthma exacerbations and improving control [54,55]. Accordingly, the Expert Panel Report 3 and the Global Initiative for Asthma have described a stepwise treatment approach and strategies to prevent future risks of asthma onset [56]. Although most workplaces have anti-smoking regulations, secondhand smoke exposure may still occur within areas where smoking is prohibited. Despite the implementation of smoking prohibition policies at work, evidence from a recent evaluation suggests that a number of employees still suffer from the effects of secondhand smoke [57]. This shows that there is still a knowledge, attitude, and practice gap among nonsmokers when it comes to enhancing smoke-free workplace policies.
Conclusions
Being female, black, having a school/college education, being unmarried or divorced, being employed, having no alcohol intake, being physically inactive, being obese, having diabetes, and having a joint disease were associated with asthma risk among adults who smoked. Moreover, smoking adults who grew up living with smokers, especially a father or mother, were at increased risk of asthma compared with workplace smoking. These results suggest that parental smoking history is one of several critical modifiable triggers for populations at risk for asthma. However, more empirical evidence is needed before familial smoking history can be used as a tool for the early diagnosis and management of asthma. Further evidence is also required to determine whether the use of parental smoking history for identifying patients at high risk for asthma will lead to an overall reduction in the rates of asthma-related morbidity and mortality.
Ethical approval
Since the current study is a secondary analysis of the original data from the MIDUS refresher, it does not require institutional review board approval. No personal information is included in the data, and appropriate coding makes it impossible to obtain the codes, both of which preclude retrieval of the data.
Informed consent
This study did not require informed consent because it did not involve any human subjects.
Fig. 1. The flow diagram of the study sample through the MIDUS refresher.
Fig. 3. Associations of demographic, behavioral, and clinical risk factors with the risk of asthma among adults who smoked. Dark circles represent odds ratios, and bars show 95% confidence intervals.
Fig. 4. Prevalence of asthma among adults who smoked, according to whether they grew up living with A) a father or mother who smoked; or B) someone who smoked in the workplace. Dark circles represent prevalence, and bars show 95% confidence intervals.
Fig. 5. Associations of demographic, behavioral, and clinical risk factors with the risk of asthma among adults who smoked, according to whether they grew up living with A) a father or mother who smoked; or B) other people who smoked in the workplace. Dark circles represent odds ratios, and bars show 95% confidence intervals.
Fig. 6. The mean age of asthma onset among adults who smoked, according to whether they grew up living with A) a father or mother who smoked; or B) other people who smoked in the workplace. The multiple linear regression model is adjusted for age, sex, race, educational level, employment status, marital status, alcohol intake status, physical activity status, obesity status, central obesity status, joint disease, diabetes, and cardiovascular diseases. Empty bars represent unadjusted estimates of the mean age of asthma onset; light gray bars show adjusted estimates.
Fig. 7. Kaplan-Meier curve for the age at which participants began to smoke regularly and the probability of asthma onset among groups.
Table 1
Descriptive characteristics among adults with smoking status | 2024-02-03T06:17:46.493Z | 2024-02-01T00:00:00.000 | {
"year": 2024,
"sha1": "3c07fe7cb8ef64b5d4c1d54538ad94782ca85118",
"oa_license": "CCBYNC",
"oa_url": "https://content.iospress.com/download/work/wor230026?id=work/wor230026",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "393ccc1e1ba7af433d29b975fc0be1493aae896d",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
19015649 | pes2o/s2orc | v3-fos-license | An Experimental Test of Insect-Mediated Colonisation of Damaged Pinus radiata Trees by Sapstain Fungi
Vector-pathogen dynamics play a central role in understanding tree health and forest dynamics. There is substantial evidence that bark beetles act as spore vectors for many species of fungi that cause ‘sapstain’ discolouration of damaged trees and timber. However, the direct quantitative link between vector-mediated spore dispersal and subsequent sapstain colonisation of wood is not fully understood. Here, we used caged versus uncaged experimental logs to test whether the exclusion of bark beetles quantitatively alters the distribution and intensity of sapstain fungal spread within damaged trees. Using generalised linear mixed models, we tested the effect of bark beetle exclusion on sapstain intensity within and among cut logs at two plantation forest sites. Overall, sapstain was found on all logs regardless of caging treatment, indicating that sapstain colonisation can occur (to some degree) without arthropod vectors, probably via wind, rain-splash and, potentially, latent endophytic development. This was supported by the dominance of Diplodia pinea in fungal isolations taken from trees felled at the site, as this fungal species is known to disperse independently of bark beetles. However, the intensity of sapstain within and among experimental logs was significantly greater in uncaged than in caged logs, where beetle colonisation was significantly greater. This appeared to be driven by a significant within-log association between the intensity of staining and the intensity of beetle, and other arthropod, tunnelling and feeding activities. Taken together, these results strongly suggest that the dominant mechanism underlying the role of bark beetles in sapstain development in this study system is not vector-mediated spore dispersal, per se, but rather the facilitation of spore entry and hyphal development through tunnelling and feeding activities. We discuss the implications of these findings for forest management and the effective salvage-harvest of trees damaged by stochastic climate events such as storm and fire damage.
Introduction
Bark beetles that colonise conifers are known to be associated with specific fungal species that cause sapstain. Sapstain is the discolouration of wood caused by darkly-pigmented fungal hyphae [1,2] growing through the sapwood and water-conducting cells of susceptible live trees and fallen timber [3]. Sapstain fungi can increase tree mortality rates and reduce the cosmetic quality of the wood at harvest, and are of concern to forest managers worldwide. Sapstain is generally caused by three groups of fungi: the ophiostomatoid fungi (such as species of Ceratocystis, Ceratocystiopsis and Grosmannia [4,5]), black yeasts such as Hormonema dematioides Lagerb. & Melin and Aureobasidium pullulans (de Bary) G. Arnaud, and dark moulds such as Cladosporium spp. and Diplodia pinea (Desm.) J. Kickx f. (syn. Sphaeropsis sapinea (Fr.) Dyko & B. Sutton) [6]. Sapstain discolouration is primarily an issue in dead or cut/storm-damaged trees (hereafter referred to as "damaged"), because sapstain fungi require aerobic conditions for development, a state that rarely occurs under the high sapwood moisture content of healthy trees [6,7]. Consequently, damaged trees are at high risk of sapstain fungal colonisation, which can occur within as little as five days after damage given optimal conditions for fungal growth [8]. The rapid rate of sapstain onset is often thought to be due, in large part, to the assisted dispersal of sapstain fungi by highly vagile bark beetle vectors [9,10].
A large body of literature has identified and described the associations between bark beetles and sapstain (primarily ophiostomatoid) fungi in many regions of the world [11,12,13]. In some studies, correlative evidence has shown that the intensity of sapstain in dead and damaged trees is higher in the presence of bark beetles, whereas stain is rare in their absence [14]. However, in other regions of the world the most important stain-causing fungi can be non-vectored species such as D. pinea [15,16], in which the main method of dispersal is abiotic (through wind, rain splash, and horizontal transmission via spores and mycelium from mature to young trees [17,18,19,20]) and there is little evidence of active transport by bark beetles [21,22,23]. Nevertheless, even abiotically-dispersed species such as D. pinea almost certainly benefit from bark beetle attack on trees because direct fungal penetration of host tissue requires fresh wounds of the tree to allow spore entry [24]. This raises the question of whether the apparent association between beetle activity and sapstain development in damaged trees should necessarily be viewed as facilitation via vector-mediated spore colonisation, or as facilitation via vector-mediated hyphal spread within the timber.
In an ecological context, increases in the frequency of extreme weather events, causing severe windthrow [15] and promoting bark beetle outbreaks [25], are predicted to alter the 'disease triangle' between three crucial factors affecting host-plant damage: plant pathogen, plant host, and environmental change [26,27]. In the case of sapstain, this complexity is potentially exacerbated by significant variation in a fourth factor, pathogen-vector dynamics, driven by variation in bark beetle abundance and propensity to attack susceptible trees under differing environmental conditions. Although sapstain colonisation of wind-thrown trees is a secondary cause of tree damage (following damage from the storm itself), the complexities of vector-pathogen and pathogen-host relationships are nevertheless central to improved understanding of the onset and development of tree damage overall.
In an applied context, there is growing concern about the uncertainty regarding timeframes available to salvage harvest trees following storm- or fire-damage in production forests [15,28], before their commercial value is reduced by sapstain. This is becoming increasingly important as climate patterns are changing and extreme weather events are becoming more frequent [29]. The interactions between environmental conditions, the composition of the complex of sapstain-causing species, and the potential for rapid colonisation of damaged timber by fungi are all thought to be important in harvest management decisions. However, few studies have attempted to directly test the degree to which bark beetle presence and attack rates causally determine the spatial extent and distribution of sapstain [15,28,30]. Knowledge of the role of bark beetles and their relative importance to the spread and extent of sapstain in damaged logs would provide decision-makers with tools to act appropriately in the face of insect attack on their damaged resource.
This study aims to experimentally test whether colonisation by bark beetles quantitatively increases the extent of sapstain fungi in logs through direct facilitation of spore dispersal, or primarily through their tunnelling and feeding activities that facilitate hyphal development of fungi that spread through abiotic dispersal. Our goal was to experimentally exclude the abundant non-native bark beetles Hylurgus ligniperda (F.) and Hylastes ater (Paykull) from experimental cut logs in Pinus radiata D. Don plantations in New Zealand, and compare the intensity and spatial distribution of sapstain colonisation within and among caged versus uncaged logs. Both of these abundant bark beetle species were accidentally introduced to New Zealand, and are now found throughout New Zealand wherever P. radiata forests occur [31,32]. We find clear evidence that bark beetles facilitate sapstain distribution and intensity primarily through their tunnelling and feeding activities in this system, and not through facilitation of spore dispersal.
Site selection and experimental setup
Ten uncaged and ten caged logs (ca. 0.5 m in length) were placed at each of two forest stands in the Nelson region of the South Island, New Zealand. Nelson Forests Ltd provided private land access, and all necessary permits were obtained for the described field studies, which did not involve or affect any endangered or protected species. The 40 logs of 9-16 cm diameter (mean ± standard error: 12.1 ± 0.3 cm), plus an additional six logs of the same size that were used to monitor sapstain development at regular intervals, were cut from four seven-year-old P. radiata trees felled on the day the experiment was set up, on the 25th of January 2011. Apparently healthy trees suitable for bark beetle colonisation were selected, felled with a chainsaw, de-limbed and cut into ca. 0.5 m lengths in the early morning when bark beetle activity is low [33]. The 46 logs were checked to ensure there was no damage to the bark, and immediately enclosed inside a vehicle to prevent arthropod colonisation prior to caging. The period of time between cutting the logs and setting up the experiment did not exceed one day, and care was taken not to damage the logs in transit.
The two sites were selected in second-rotation P. radiata forests that had been harvested within the six months leading up to January 2011. Recently harvested sites were selected to ensure sufficient bark beetle activity was present to provide high colonisation rates of uncaged logs. Both sites were flat, un-shaded, and not flood-prone so that pooling of water did not affect the progression of beetle colonisation and stain within the logs.
Half of the logs were protected from bark beetle colonisation (caged) with aluminium mesh of 1.8 × 1.4 mm mesh size that was small enough to exclude the common pine-infesting bark beetles present in New Zealand. Although rarely considered in sapstain studies, many smaller arthropods other than bark beetles might also colonise logs, and it is possible that their feeding and tunnelling activities might also influence fungal growth and distribution. Therefore, evidence of any colonisation by arthropods other than bark beetles was recorded and considered in the analyses.
As H. ater and H. ligniperda are known to preferentially colonise logs that are in contact with the ground [34], the full length of every log was placed in ground contact. At each site, the 10 uncaged and 10 caged logs were placed at random points in a 4.5 × 6 m grid pattern, all with the long axis of the log facing north, separated by an equal distance of 1.5 m from any other log. In addition, three sapstain monitoring logs were placed 1.5 m away from the experimental logs at each of the two sites. One of these three logs was cut open every two weeks, at each site, as a rough guide to visually monitor beetle colonisation and sapstain growth, in order to ensure we sampled at a suitable time when sufficient amounts of staining had occurred.
In order to identify the fungal species causing sapstain in these logs, three entire P. radiata trees, independent of the experimental and monitoring logs, were felled in nearby forest stands (several hundred metres from the experimental logs) at each of our study sites, and subsequently analysed to identify the species causing stain. Trees were of a similar diameter to the experimental logs, and were located a sufficient distance away so that beetle or sapstain colonisation did not directly influence the experiment, but close enough to be subject to similar biotic and abiotic conditions as the experimental logs. These separate trees were used for fungal isolation so that the fungi could be isolated immediately at the time of collection, rather than after the extended period required to process the experimental logs, during which the chance of successfully isolating the stain fungi would be reduced due to contamination and rapid drying.
Log processing
The logs were left out for a total of 34 days during the warmest months of the southern hemisphere summer, when the prevalence of fungal staining is known to be at its peak [35]. The logs were collected when approximately half of each cut face in the monitoring logs was covered in stain. Each experimental log was removed from its cage, where necessary, and placed in sealed plastic bags, with the upper surface of the log precisely marked so that it was known later which surface was in contact with the ground. To limit post-collection stain growth, logs were kept as cool as possible by ensuring they were left in a shaded and cool area until processing, which was completed within four days of collection.
Measurements were taken of log length, diameter, and bark thickness at the ends and middle of each log. Each log was delineated into eight sections along the length of the log, and within each section, six radial segments were delineated, as illustrated in Figure 1. The two end sections (1 and 8) were thinner than the other sections in order to measure the amount of stain and beetle colonisation in the immediate vicinity of the exposed cut ends. The remaining six sections were divided equally into the remaining length of the log. As each log was around 0.5 m long, sections 2-7 were typically about 75 mm thick (±5 mm). Within each section, the radial segments closest to the ground had finer divisions (45°, as opposed to 90°) to increase the resolution of sapstain and arthropod counts where colonisation was expected to be greatest.
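As a rough check of this sectioning scheme, the arithmetic can be reproduced directly; the minimal sketch below (in R) assumes a nominal 500 mm log length and the 20 mm end-section thickness given in the Figure 1 caption, both of which varied slightly between real logs.

```r
# Nominal sectioning arithmetic for a ca. 0.5 m experimental log:
# two thin end sections (1 and 8) plus six equal inner sections (2-7).
log_length_mm  <- 500  # assumed nominal length; actual logs varied slightly
end_section_mm <- 20   # end-section thickness, from the Figure 1 caption
inner_sections <- 6

inner_thickness_mm <- (log_length_mm - 2 * end_section_mm) / inner_sections
inner_thickness_mm  # ~76.7 mm, consistent with the 75-79 mm range quoted in the text
```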
Logs were processed by first making a visual inspection of external arthropod colonisation in the external sectors indicated in Figure 1. To do this, the bark was carefully stripped away and bark beetles, their larvae, and their galleries were identified within each sector. Arthropod larvae and galleries that were not those of bark beetles were also counted, but not identified further. Following this, the logs were cut into the eight delineated sections, and a high-resolution photograph was taken of each cut face. Care was taken to ensure that the segments were aligned in their correct vertical orientation in the photo, allowing accurate placement of a digital grid over each image when making stain measurements.
Note that, as there were eight sections, this resulted in seven cut surfaces where stain was measured; these were between sections 1 and 2, 2 and 3, and so on, through to the cut face between sections 7 and 8. Only one image was required from one of the cut faces at each cut, as the amount of stain on each face was identical to the other. The measurement points were then referred to as 'section cuts'.
In order to measure the proportion of stain in each segment, the entire image was imported into Adobe Photoshop® CS4 Extended, Version 11.0 (Adobe Systems Inc., San Jose, California, U.S.A.), and a radial grid was overlaid on the image, as illustrated in Figure 2. Each segment was extracted and saved as a single-coloured, black, bitmap image (one image file per segment) to determine total segment area. This process was then repeated for each segment, but this time manually drawing around stain-covered areas only. ImageJ software version 1.43u [36] was then used to calculate the number of pixels in the stain-covered area relative to the total segment area (Figure 2).
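The pixel-counting step itself is straightforward; the following sketch (in R, using the png package rather than the Photoshop/ImageJ workflow described above, and with hypothetical file names) illustrates how a proportion-stained value can be obtained from the two binary images saved for each segment.

```r
# Proportion of stain per segment from two black-on-white bitmap images:
# one marking the whole segment area, one marking only the traced stain.
library(png)

count_filled <- function(path) {
  img <- readPNG(path)
  if (length(dim(img)) == 3) img <- img[, , 1]  # use one channel if RGB/RGBA
  sum(img < 0.5)                                # count filled (black) pixels
}

total_px <- count_filled("segment_3B_total.png")  # hypothetical file names
stain_px <- count_filled("segment_3B_stain.png")

prop_stain <- stain_px / total_px
prop_stain
```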
Fungal isolation and identification
At the time of log collection, two 3 cm discs were cut from each of the three trees felled at both sites near the experimental logs. The first disc was taken from the cut end of the tree where the sapwood was exposed, and the second from 4.5 m up the length of the tree (to encourage sapstain by airborne fungal spores at the cut end, and beetle-dispersed and endophytic fungal spores at the bark-covered end). The discs were immediately labelled and sealed in plastic bags, and within 12 hours were moved to a cold room at 4 °C in order to minimise moisture loss and slow fungal growth before isolations were taken.
Figure 1. Schematic illustration of sampling areas on the experimental logs. All logs were assessed for bark beetle, other arthropod and sapstain colonisation. Numbers 1 to 8 indicate log sections (cuts are the dashed lines dividing the sections), and letters A to F indicate log segments within each section of the experimental Pinus radiata logs. The two outer sections, where sapstain penetration was expected to be greatest, were cut 20 mm thick, and the remaining length of log was divided into six equal-sized sections (ideally 77 mm, but varying from 75-79 mm, depending on variability between individual logs and individual section-cuts). The entire outer surface of the log was divided into discrete areas (1A to 8F) where arthropod colonisation was recorded. doi:10.1371/journal.pone.0055692.g001
Fungal isolation to identify and quantify representation of sapstain fungal species was undertaken within two days of sampling. All isolations were attempted on two agar media, one general malt medium for the isolation of most stain fungi present, and one selective medium with the addition of the eukaryote antibiotic cycloheximide, which ophiostomatoid fungi can tolerate [37]. The standard medium consisted of malt agar (3% malt extract, and 2% agarose) with 100 mg/ml streptomycin to inhibit bacterial growth, while the selective medium was the same as the standard but with 400 mg/ml cycloheximide added.
A wedge was taken from an arbitrary point within each disc using a hatchet. The wedge was bisected along the radial longitudinal plane using the hatchet and a mallet to initiate the split, and then separated manually, avoiding any external contact of the newly exposed surfaces. Five small chips were cut along the radial line from the newly exposed face using a sterile scalpel (as in McCarthy et al. 2012 [35]), the 1st and 5th of which were directly below the cambium, and above the pith, respectively. The remaining three were taken at equidistant lengths between the 1st and 5th chips. This was repeated twice per wedge, once inoculating the standard medium, and once inoculating the ophiostomatoid selective medium. After incubation periods of up to 10 weeks, emerging isolates were sub-cultured to tubes of malt agar (2% malt extract, and 2% agar). Bacterial colonies were recorded, but not isolated or identified further. Emerging fungal isolates were sorted into groups, and identified from their vegetative morphology and fruiting structures.
Data analysis
The effect of bark beetle exclusion on variation in the proportion of sapstain cover was tested using a generalised linear mixed effects model (GLMM) with fixed categorical factors for the caging treatment, and log segment nested within log section, as well as random factors specified for site replicate and log replicate. The sapstain model was tested as a GLMM with binomial errors using the lme4 package in the R programming environment [38]. If overdispersion was evident in the fitting of the GLMM, then this was controlled for using a model with Poisson lognormal error structure [39]. Model simplification was performed using an information theoretic approach with AICc (Akaike Information Criterion corrected for small sample size) and Akaike weight (Wm) to rank and subsequently select the best model describing the data, as recommended by Burnham & Anderson [40].
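A minimal sketch of this type of model (not the authors' exact code) is given below in R; it assumes a data frame d with one row per segment per cut face, containing the pixel counts from the image analysis (stain_px, total_px) and factors for caging, section, segment, site and log identity (log_id).

```r
library(lme4)
library(MuMIn)  # AICc and Akaike weights for model ranking

# Binomial GLMM: proportion of stained pixels per segment, with random
# intercepts for site and log replicates.
m_binom <- glmer(cbind(stain_px, total_px - stain_px) ~
                   caging * segment + section +
                   (1 | site) + (1 | log_id),
                 family = binomial, data = d)

# If overdispersion is evident, one common remedy is an observation-level
# random effect (similar in spirit to the Poisson-lognormal correction).
d$obs  <- factor(seq_len(nrow(d)))
m_olre <- update(m_binom, . ~ . + (1 | obs))

model.sel(m_binom, m_olre)  # ranks candidate models by AICc with weights
```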
To assess the effect of the caging treatment, evidence of arthropod colonisation was separated into a binary 'beetle evidence' dataset (evidence = 1, no evidence = 0), where bark beetle evidence constituted the presence of adults, larvae, or galleries, and a binary 'other arthropod evidence' dataset, where the evidence of arthropods other than bark beetles constituted the presence of the arthropod larvae themselves or their galleries. For each sapstain measurement, corresponding measures of beetle evidence and other arthropod evidence were recorded from the two adjacent segments (combined) on either side of the cut-face (e.g., 2B and 3B, Figure 1).
Finally, to test the relative contribution of bark beetle colonisation and colonisation by other arthropods in explaining the distribution of sapstain within caged and uncaged logs, the sapstain GLMM was repeated with 'beetle evidence' and 'other arthropod evidence' entered as covariate predictors ahead of the fixed caging, section and segment variables in the model. Models were compared and selected using AICc and Wm values, as above. If the spatial distribution of beetle and/or other arthropod colonisation was sufficient in its own right to explain the spatial distribution of sapstain then the covariate would subsume all the variance attributable to the fixed factors in the model. All statistical analyses were performed using R 2.13.1 software [41].
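Continuing the sketch above (same assumed data frame d, now with the binary beetle_evidence and other_arthropod_evidence columns described in the previous paragraph), the covariate-based comparison can be expressed as an additional candidate model ranked against the design-only model:

```r
# Arthropod-evidence covariates entered alongside the design factors; if the
# covariates capture the spatial pattern of stain, the caging and segment
# terms should contribute little additional explanatory power.
m_cov <- glmer(cbind(stain_px, total_px - stain_px) ~
                 beetle_evidence + other_arthropod_evidence +
                 caging * segment + section +
                 (1 | site) + (1 | log_id),
               family = binomial, data = d)

model.sel(m_binom, m_cov)  # AICc and Akaike weights, as before
```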
Spatial distribution of sapstain
The amount of sapstain cover varied from as low as 0% in the central section cut and upper segments of the log, to as high as 100% in the terminal cuts and lower segments of the log nearest to the ground ( Figure 3A, B). Evidence of sapstain fungal colonisation was evident to some degree in all logs in the study, whether they were uncaged or caged. However, on average the uncaged logs had a noticeably greater occurrence of stain along the lower segments of the log which were in contact with the ground ( Figure 3A, 4E, F), whereas caged logs typically had intense staining only near the terminal (cut) ends of these logs ( Figure 3B). Surprisingly, in the Poisson lognormal GLMM analyses this latter effect of varying stain between section cuts of caged versus uncaged logs did not contribute significantly to the best-fit model based on comparison of Akaike weights (Table 1a). Instead, the best-supported model showed only significant effects of caging and segment location (i.e., the radial angle of segments), as well as their associated interaction, on the proportion cover of stain per segment (Table 1a); that is, the distribution of stain among segments varied depending on whether the log was caged or not. The caging by segment interaction effect was driven by higher levels of stain at the bottom of the un-caged logs than at the bottom of the caged logs (Table 2a, Figure 4E, F).
Spatial distribution of arthropod colonisation
The caged logs were successful at excluding, on average, 93.8% of the beetle colonisation of an uncaged log, with a total of only 26 bark beetles found among all the caged logs, in comparison with 420 in the uncaged logs. Overall, bark beetles penetrated seven (35%) of the caged logs. As expected, the best-fit GLMM model describing bark beetle colonisation included significant effects of the caging treatment and the distribution across segments, as well as their associated interaction (Table 1b). Bark beetle attack was predominantly distributed on the lower surfaces of uncaged logs, and the caging treatment significantly reduced the difference in beetle attack between the upper and lower surfaces of the log (Figure 3C, D). For the few beetles that were able to penetrate the caged logs, colonisation was sporadic but also concentrated in the lower segments of the logs (Figure 3D).
Unavoidably, colonisation by other smaller arthropods was evident in both caged and uncaged logs, at both sites (only 2 out of 20 caged logs had no evidence of arthropod colonisation at all). However, among other arthropods there was no evidence of colonisation by any other adult beetle groups, such as ambrosia beetles. As with bark beetle colonisation, the best-fit GLMM model for other arthropod colonisation also included significant effects of both the caging treatment and the distribution across segments, as well as their associated interaction (Table 1c). Once again, there was a significantly greater level of other arthropod colonisation in the uncaged logs than in the caged logs (Table 2c).
Arthropod colonisation driving sapstain distribution
Spatial patterns of log colonisation by bark beetles and other arthropods corresponded strongly with sapstain distribution and intensity (Figure 3). A GLMM analysis of stain distribution showed that when variables representing colonisation by beetles and other arthropods were entered first into the model, ahead of caging and segment effects, there was still a highly significant effect of segment position on stain intensity, but only a weak and equivocal remaining influence of the caging effect ( Figure 3, Table 1d, Table 2d). This suggests that the caging effect on stain intensity is predominantly driven by reduction in beetle and other arthropod colonisation (despite the fact that some arthropods did penetrate cages), but that the beetle and other arthropod evidence is not sufficient in its own right to explain variation in stain intensity among segments within logs (Table 1d).
Fungal species causing sapstain
In this study, 10 isolations were attempted from two discs from each of three additional cut trees at the two sites, using ophiostomatoid- and non-ophiostomatoid-selective agar media, giving a total of 120 isolation attempts. There were 26 positive isolations of stain fungi, 100% of which were Diplodia pinea, while the majority of other attempted isolations did not produce fungal growth, or consisted of non-staining fungi or bacterial contaminants. No ophiostomatoid fungi were present. The D. pinea isolates were identified from both the exposed end of the log (8 isolates) and deep within the log at 4.5 m up the length of the tree (18 isolates).
Discussion
Sapstain developed and spread in all logs in this study regardless of the presence or absence of potential bark beetle vectors, but beetle colonisation was clearly an important factor in the intensity of fungal colonisation and degree of spread of fungi in cut logs. Our experimental design used a caging treatment to exclude beetle colonisation while keeping other site-level and seasonal environmental factors constant. Although the cages were only 93.8% effective in preventing beetle colonisation, there was nevertheless a strong correspondence between the spatial distributions of bark beetle colonisation and areas of high sapstain intensity within and between logs. Equally, however, it is clear that the feeding and tunnelling activity of other saproxylic arthropods and their larvae also contributed significantly to the onset and extent of stain within timber [23,42]. Consequently, although bark beetles do appear to be instrumental in the colonisation and spread of sapstain to damaged timber [10,43], it seems that endophytic or wind blown fungal spores (such as those of Diplodia pinea) and the distribution of arthropod activity within fallen timber also play an important role in sapstain spread. Below, we discuss the implications of these findings for our understanding of vector-pathogen dynamics, and for the forest industry to highlight the importance of protecting harvested logs and trees damaged by stochastic climate events such as storm and fire damage.
Stain and beetle colonisation
There was a low level of stain in the upper segments of all logs, especially toward the middle of the log where the sapwood was protected by the bark, and beetles had not colonised. If left in situ for a longer period of time these sections would presumably also have become stained as the moisture content of the logs declined further, and the 'clean' timber succumbed to stain fungi growing from the sites of beetle colonisation and from the exposed ends of the logs. In the uncaged logs, the greater intensity of staining was along the lower surface of the log, where the log was in contact with the ground. This was mirrored almost exactly by the pattern of bark beetle colonisation, in this case by Hylastes ater and Hylurgus ligniperda. As these species are known to feed preferentially on roots, the lower stem and other parts of trees that are in ground contact, they are behaviourally adapted to preferentially colonise logs where they are in contact with the ground [34].
Although the cages were not entirely effective, the spatial distribution of stain within caged logs was significantly different to that of uncaged logs. Sapstain distribution within caged logs was concentrated at the exposed ends where wind and rain-splash might act as sources of spore inoculum into the unprotected sapwood, and where the sapwood would also dry faster allowing ideal conditions for aerobic growth. In fact, fungal isolations taken from whole cut logs that had previously been felled at each site revealed that the majority of sapstain was caused by the abiotically-dispersed D. pinea. Although it is likely that other sapstain fungal species also occur at these sites (e.g. Ophiostoma spp.), these must have occurred at such low abundances that they were not detected in this study.
In previous large-scale studies using similar isolation methods, 12 species of ophiostomatoid fungi were identified from Pinus radiata timber and from bark beetle vectors [15,44]. Furthermore, from a recent broad-scale sapstain fungal isolation effort in the same region, with a total of 680 sapstain fungal isolations from P. radiata sapwood, 666 were D. pinea (97.9%), and the remaining 14 were distributed among Sporothrix inflata de Hoog and nine different Ophiostoma species (McCarthy et al. unpublished data). This reinforces earlier research identifying D. pinea as New Zealand's most widespread and dominant sapstain fungus on pine [15,16]. Because damaged and diseased plant tissue (including needles and cones on the forest floor) can be a prolific source of D. pinea inoculum [20,45,46], abiotically-dispersed spores are likely to be ubiquitous within pine forests in New Zealand.
In addition to abiotic spore dispersal, the possibility of endophytic sapstain development must be considered, as some staining species such as D. pinea can (unlike ophiostomatoid fungi) be latent endophytes in healthy Pinus trees. Among these species, visible stain development often becomes apparent only once a tree has been damaged or stressed [17,47], such as at the cut ends of the experimental logs. It is unknown whether the stain development in these logs was dominated by endophytic growth or by spore inoculation following the cutting of the logs. Certainly, stain development in the middle sections of the experimental logs was dramatically lower where protection by the bark prevented the entry of fungal spores, suggesting that spore dispersal and subsequent colonisation of the sapwood, rather than endophytic growth, is the major factor to consider.
Given these findings, it appears more likely that the strong correspondence between sapstain intensity and beetle colonisation intensity was due to facilitation of D. pinea spore entry through the bark into beetle tunnels, and hyphal development through the network of larval feeding galleries under the bark, potentially from fungal inoculum that predominantly arrived by wind or rain-splash, rather than by beetle vectors. Likewise, the same is true for the colonisation activities of other arthropods.
Stain and colonisation by other arthropods
In addition to beetle-mediated stain development, there was also a significant influence of the distribution of other (unidentified) arthropod larvae on stain development, in both the caged and uncaged logs. These larvae were likely to have entered the cages through the mesh directly, or been introduced by adult arthropods which burrowed below the log and oviposited through the mesh. The ability of flies, mites, beetles, and their larvae to vector sapstain fungi has been documented [23,42,48], but the possibility of arthropods other than bark beetles acting as facilitators of sapstain development is not commonly considered. Further research is needed to examine and identify the species involved, and quantify their respective importance as vectors of sapstain fungi, or their role in facilitating fungal infection of host tissues through feeding-induced tree wounds. There is evidence of D. pinea exploiting the entry points of wood created by arthropods including a variety of bark beetles and wood borers [22,49], as well as other taxa such as the aphid Cinara cronartii T&P [22], the scale insect Matsucoccus josephi Bodenheimer and Harpaz [50], the moth Dioryctria sp. [51], and the true bug Leptoglossus occidentalis Heidemann [52].
Applied management implications
The risk of timber discolouration due to sapstain colonisation is greatest during harvesting, storage and transport of logs, and following storm events where salvage-harvesting of damaged trees may be an economically viable option. Bark beetles and other arthropods play an important role in the facilitation of fungal entry into susceptible logs, and protection of logs from these organisms should be attempted. This study shows that if logs are not protected from arthropod colonisation then sapstain will progress rapidly, acting either additively or, most likely, synergistically with wind-blown spores to increase the distribution and intensity of staining in timber. In quantitative terms, it should be noted that this will happen at a slower rate than observed in our experimental logs, because entire fallen trees would retain their moisture for longer, depending on the area of exposed sapwood and environmental conditions at the site.
In the case of storm-damaged stands where salvage harvest may be warranted, New Zealand is in the situation where the two major bark beetle species, H. ater and H. ligniperda, are behaviourally adapted to feed preferentially on roots and other parts of the tree that are in direct contact with the ground [34]. Following storm events, the fallen stems of most trees are often elevated above ground-level by their branches, and this limits the amount of the log likely to be colonised. In a recent study investigating the onset of sapstain following windthrow, only 28 of 480 discs (5.8%) sampled systematically from fallen trees had any form of contact with the ground or another tree (McCarthy et al., unpublished data). Although these trees are likely to sustain some beetle colonisation, and are susceptible to staining by wind-dispersed and endophytic fungi [15], the extent of damage will probably be lessened in the absence of extensive beetle colonisation. Mausel et al. [34] found that logs stacked in a manner that reduced contact with the ground, or with other logs, greatly limited colonisation by both H. ater and H. ligniperda. This is likely to be the case for other bark beetle species with similar colonisation strategies, where colonisation of stacked logs could be reduced with the use of 'spacers', or similar, to reduce the points of contact between stacked logs, as these contact points are where bark beetle colonisation would be most likely. In other regions of the world, effective protection of susceptible logs, whether they be harvested or storm-damaged, will require examination of the bark beetle species most likely to colonise, and their colonisation strategies. In areas with more aggressive bark beetles that colonise logs under all conditions, more intensive methods of arthropod and fungal management, or very rapid log processing, will be required to avoid sapstain of commercial timber.
Acknowledgments
Field assistance was provided by Rex Mitchell, who was supplied by Nelson Forests Ltd. Lincoln University and Kelvin Nicolle offered the use of their facilities following damage to facilities at the University of Canterbury during the 2011 Christchurch earthquake. Dave Henley and Alan Leckie (Scion) assisted with processing samples. We are grateful for the advice and other support from Ian Hood, Lindsay Bulman, Mark Kimberley, Steve Pawson (Scion), Mark Forward (Nelson Forests Ltd.) and Craig Galilee (University of Canterbury). We also thank Andrew Barnes, Kristy Udy and Colleen Carlson for their assistance with statistical analysis, and three anonymous reviewers for many helpful suggestions.
The Relationship between Physical Activity, Physical Exercise, and Human Gut Microbiota in Healthy and Unhealthy Subjects: A Systematic Review
Simple Summary To date, the influence that physical activity (PA)/physical exercise (PE) can exert on the human gut microbiota (GM) is still poorly understood. Several issues arise in structuring research in this area, starting from the association between PA/PE and diet. Indeed, the diet of an individual is a key factor for the composition of the GM, and those who regularly practice PA/PE generally have dietary patterns favorable to the creation of an ideal environment for the proliferation of a GM capable of contributing to the host's health. It is therefore difficult to establish with certainty whether the effects generated on the GM are due to a PA protocol, the type of diet followed, or both. In addition, most of the available studies use animal models to investigate a possible correlation between PA/PE and changes in the GM, and these findings may not necessarily apply to humans. Evidence suggests that aerobic PA/PE seems capable of producing significant changes in GM; training parameters, likewise, can differentially influence the GM in young or elderly people, and these changes appear to be transient and reversible. Abstract Several studies have been conducted to find at least an association between physical activity (PA)/physical exercise (PE) and the possibility to modulate the gut microbiome (GM). However, the specific effects produced on the human GM by different types of PA/PE, different training modalities, and their age-related effects are not yet fully understood. Therefore, this systematic review aims to evaluate and summarize the current scientific evidence investigating the bi-directional relationship between PA/PE and the human GM, with a specific focus on the different types/variables of PA/PE and age-related effects, in healthy and unhealthy people. A systematic search was conducted across four databases (Web of Science, Medline (PubMed), Google Scholar, and Cochrane Library). Information was extracted using the populations, exposure, intervention, comparison, outcomes (PICOS) format. The Oxford Quality Scoring System Scale, the Risk of Bias in Non-Randomized Studies of Interventions (ROBINS-I) tool, and the JBI Critical Appraisal Checklist for Analytical Cross-Sectional Studies were used as a qualitative measure of the review. The protocol was registered in PROSPERO (code: CRD42022302725). The following data items were extracted: author, year of publication, study design, number and age of participants, type of PA/PE carried out, protocol/workload and diet assessment, duration of intervention, measurement tools used, and main outcomes. Two of the authors reviewed 694 abstracts for inclusion and, at the end of the screening process, only 76 full texts were analyzed. Lastly, only 25 research articles met the eligibility criteria. The synthesis of these findings suggests that GM diversity is associated with aerobic exercise, contrary to resistance training; the abundance of the Prevotella genus seems to be correlated with training duration; no significant changes in GM richness and diversity are detected when exercising according to the minimum dose recommended by the World Health Organization; intense and prolonged PE can induce a higher abundance of pro-inflammatory bacteria; and PA does not lead to significant changes in GM α/β-diversity in elderly people (60+ years). The heterogeneity of the training parameters used in the studies, diet control, and different sequencing methods are the main confounders.
Thus, this systematic review can provide an in-depth overview of the relationship between PA/PE and the human intestinal microbiota and, at the same time, provide indications from the athletic and health perspective.
Introduction
It is now well established that several microbial communities coexist with our organism; most of these (including bacteria, archaea, fungi, and even viruses) are hosted in the human gastrointestinal tract and, for this reason, they are generally referred to as the gut microbiota [1]. These microbes have developed a symbiotic relationship with our organism, as they are essential for our survival, and this is due to their ability to express a number of genes approximately 150 times greater than the entire human genome [2], which collectively are referred to as the microbiome.
Several studies [3,4] demonstrate that the gut microbiome is a mediator between many factors such as genetics, diet, exercise, and the external environment. While this interaction has been studied extensively in diet-induced changes in the microbiota [5] and host disease processes [6,7], there has been little focus on other factors such as PA/PE. Chronic diseases, such as obesity, metabolic syndrome, diabetes, atherosclerosis, liver dysfunction, and inflammatory bowel disease (IBD), have been associated with alterations in the GM [8]. Moreover, increasing evidence shows that the complex bidirectional system of communication between the central and enteric nervous systems, named the gut-brain axis (GBA), can be influenced by the human microbiota, which in this way acts as an emotional and behavioral modulator [9].
The first studies on the human GM date back to about 40 years ago [10], but this remains a relatively young field of research where there are few certainties and much remains to be investigated. Only recently, however, a possible interaction between PA/PE and the intestinal microbiota has been proposed. Only in the last decade have the first studies appeared reporting that PE is able to enrich the diversity of the human intestinal microflora [11] and consequently, improve the health status of the host. It has also been shown that there is a correlation between cardiorespiratory fitness (CRF) and increased microbial diversity in healthy subjects; thus, an improved ability of the heart, lungs, and muscles to take up and consume oxygen for energy is positively correlated with a more diverse microbial profile that is able to promote butyrate production, both factors associated with overall host health. This has allowed the suggesting of exercise prescriptions as a parallel therapy to control diseases associated with a state of dysbiosis [12].
Furthermore, studies carried out first in animal models [13], and later in humans [14], have highlighted the ability of PE to modify the composition and functionality of the microbiota independently of diet and body composition, that is, in lean, obese, and previously sedentary subjects. Considering, moreover, the mutual influence established between the microbiota and the central nervous system, it is possible that PA/PE also exerts some effects on the GBA; this is the conclusion reached by recent work [15], according to which the current evidence suggests that PE is able to mediate this bidirectional relationship between the gut and the brain through the modifications it induces in the intestinal microbiota. Thus, PA/PE could become a strategy and therapeutic factor for gastrointestinal and psychological problems. Only recently has a new definition, the muscle-gut-brain axis, been introduced, suggesting that exercise-mediated changes at the level of the human microbiota could influence the development and progression of neurodegenerative diseases such as Alzheimer's that currently have no cure [16].
The mechanisms by which PA/PE may lead to alterations in the human GM are poorly understood. Several hypotheses are currently proposed: changes in the profile of bile acids, which may have an antimicrobial function and exert selective pressure on certain bacterial strains; suppression of TLR4 (Toll-like receptor 4) signaling pathways, which can reduce serum LPS levels; increased production of intestinal immunoglobulin A (IgA), which leads to increased resistance to colonization by specific commensal microbes; reduced intestinal transit time; activation of the hypothalamic-pituitary-adrenal (HPA) axis and subsequent hormone production following physical stress [17]; hormetic effect of stress induced by PA/PE, at the intestinal level, that can stimulate beneficial adaptations of the intestinal barrier [18].
However, this field seems to permeate many aspects of human health, branching out to the current COVID-19 emergency. In fact, a recent work [19], following the equally recent evidence available [20][21][22], theorizes that intestinal microbiota dysbiosis associated with poor integrity of the intestinal barrier could allow SARS-CoV-2 to easily access the enterocytes, circulate, and infect organs expressing the ACE2 receptor, thus increasing the chances of contracting the infection and/or exacerbating the symptoms. If future studies were to empirically demonstrate the connection between the intestinal microbiota and COVID-19, the relevance of an adequate composition of the intestinal flora would be further emphasized, and PA/PE could contribute to maintaining it. Therefore, the present systematic review aims to summarize the evidence on the bidirectional relationship between PA/PE and the human GM, providing more knowledge and details; specifically, the aims were threefold: to investigate the effects induced on the GM by different types of PA/PE; to investigate the effects induced on the GM by different training frequencies, intensities, and volumes; and to investigate the age-related effects of PA on the GM and the persistence of induced changes.
Data Sources and Search Strategy
This systematic review was carried out following the PRISMA statement [23]. The protocol was registered in PROSPERO (International Prospective Register of Systematic Reviews; code: CRD42022302725). Two of the authors formulated the initial research questions, which were then structured according to the participants, intervention, comparators, outcomes, and study design (PICOS) system. The databases used for the identification of scientific articles were Web of Science, Medline (PubMed), Google Scholar, and Cochrane Library. Searches were performed from September to November 2021. Candidate studies were identified by using the following Boolean search syntax: "((microbiota or microbiome) and (exercise or "physical exercise")"/"(microbiota or microbiome) and (activity or "physical activity")"/"(microbiota or microbiome) and (sport or athletes))". Afterward, the following filters were applied: text availability: full text; species: humans; language: English. The search strategy used for the PubMed database was a combination of the MeSH database and Boolean search syntax. After candidate articles were collected, further identification was conducted based on inclusion and exclusion criteria.
Eligibility Criteria
The inclusion criteria for this systematic review were only English-language original peer-reviewed articles, published from January 2010 to December 2021. Excluded records were review articles, meta-analyses, practical guidelines, unpublished studies, books and book chapters, editorials, letters to the editor, and essays, although they were used as references to identify original research to examine for inclusion. In addition, the framework of population, intervention, comparator, outcomes, and study design (PICOS) was followed to incorporate the studies into a systematic review (Table 1).
Table 1. Selection criteria used in the systematic review.
Population. Inclusion: healthy and unhealthy subjects, no age restrictions, both sexes (from sedentary to athlete subjects). Exclusion: subjects who take or have taken (in the month before the intervention) pre/pro-biotics and/or antibiotics.
Intervention/Exposure. Inclusion: intervention with any kind of PE protocol or PA/PE exposure. Exclusion: intervention with a specific dietary protocol.
Comparator. Inclusion: intervention that has a control group running a different PA/PE protocol or none, a comparison subgroup, or at least a pre/post-intervention comparison. Exclusion: absence of any kind of control/comparison.
Outcome(s). Inclusion: measures of differences for α and β diversity, relative abundance of specific bacteria, and metabolomic and metagenomic data analyzed with any kind of sequencing tool. Exclusion: lack of baseline and/or follow-up data, or absence of at least one of the measurements indicated in the inclusion criteria for outcome(s).
Study Design. Inclusion: studies with experimental design (randomized and non-randomized trials) and observational studies (sample size >30 subjects).
Data Extraction and Synthesis
Two of the authors independently extracted relevant information from the included studies using a Microsoft Excel spreadsheet: author, year of publication, study design, number and age of participants, type of PA/PE carried out, protocol/workload (training volume, frequency, intensity) and diet assessment, duration of intervention, measurement tools used, and outcomes. Any disagreements were resolved by consensus. The characteristics of the studies were summarized, and data on the effects of different types of PA/PE on the metabolic variables of GM were qualitatively synthesized.
Data Items
The following information was extracted from the included articles: changes in microbial α-diversity (differences within one sample) and β-diversity (assessment of differences or similarities in diversity between two samples); relative abundance of specific bacteria (phylum, class, order, family or species); fecal metabolites; Firmicutes:Bacteroidetes ratio.
Additionally, the following data items were extracted: number of participants, age-group (young, adult, elderly), sex (man, woman, or both), training or physical activity level (untrained, trained, physically active or inactive), type of PA/PE performed (aerobic, anaerobic, or both), modalities of diet assessment, and type of gut microbiota analysis (16S rRNA amplicon sequencing, metagenomic shotgun sequencing, fecal metabolomics, or a combination of these); the characteristics of experimental settings and procedures, dietary patterns, if available, and their possible influences on the outcomes will be described. We only included GM data extracted from fecal samples; thus, data extracted through intestinal biopsy or other biological samples were excluded to avoid heterogeneity in results.
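For illustration, two of the simpler outcome measures listed above can be computed directly from a per-sample taxon table; the sketch below (in R, with made-up phylum-level counts) shows the Firmicutes:Bacteroidetes ratio and the Shannon index commonly used to summarize α-diversity.

```r
# Hypothetical phylum-level read counts for a single fecal sample.
counts <- c(Firmicutes = 5200, Bacteroidetes = 3100, Proteobacteria = 640,
            Actinobacteria = 310, Verrucomicrobia = 150)

fb_ratio <- counts["Firmicutes"] / counts["Bacteroidetes"]  # F:B ratio

p <- counts / sum(counts)       # relative abundances
shannon <- -sum(p * log(p))     # Shannon index (α-diversity, natural log)

c(FB_ratio = unname(fb_ratio), Shannon = shannon)
```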
Risk of Bias within Studies
The modified version of the Jadad Scale (Table 2) was used to assess the risk of bias within randomized controlled studies. The scale includes five items; the total score for each article ranged from 0 to 5 and was computed by summing the score of each item. The Risk of Bias in Non-Randomized Studies of Interventions (ROBINS-I) tool (Table 3) was used to assess the quality of non-randomized studies of interventions (NRSIs). The tool includes seven items and an overall outcome for each study resulting from a detailed evaluation. Lastly, the JBI Critical Appraisal Checklist for Analytical Cross-Sectional Studies (Table 4) was used to assess the risk of bias within cross-sectional articles. The checklist includes eight items and an overall appraisal. All analyses were independently assessed for methodological quality by two reviewers.
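As a purely illustrative example (the item scores below are hypothetical and not taken from the included studies), the modified Jadad score is simply the sum of the five item scores:

```r
# Hypothetical Jadad-style item scores for two studies; totals range 0-5.
jadad <- data.frame(
  study                = c("Study A", "Study B"),
  randomised           = c(1, 1),
  randomisation_method = c(1, 0),
  blinding             = c(1, 0),
  blinding_method      = c(0, 0),
  withdrawals_reported = c(1, 1)
)
jadad$total <- rowSums(jadad[, -1])
jadad
```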
Identification of Studies
At the end of the selection process, 1189 articles were extracted, of which n = 663 from Web of Science, n = 452 from PubMed, n = 65 from Google Scholar, and n = 9 from Cochrane Library. Each title and abstract were screened for relevance, removing review articles, unpublished studies, meta-analyses, practical guidelines, books and book chapters, editorials, letters to the editor and essays (n = 618). Thereafter, the search strategy was based on the assessment of the full text of the remaining 76 articles to verify their eligibility. Lastly, 25 research articles specifically focusing on GM responses to PA/PE, in healthy and unhealthy participants, were included ( Figure 1).
Novelty of This Review
A small number of systematic reviews have attempted to investigate and summarize the effects of PA or PE on the human GM; in particular, a recent work by Aya et al. [46] focuses on this topic, considering healthy subjects aged 18-45 years. Our review also looks at studies that consider subjects with certain diseases, so that the effects of PA/PE on the human GM can be assessed from the perspective of the unhealthy population as well. In addition, we analyzed the most recent studies [29-31,35,45] not evaluated in the previous reviews, which can consolidate some of the aspects that have previously emerged.
We have focused our analysis not only on the effects of PA/PE on the GM, but we also explored how certain training parameters (intensity, frequency, and volume) can differently affect the human GM.
Furthermore, when analyzing the results, we separated those obtained from groups of athletes and those from the general population from those relative to elderly subjects (60+ years) to investigate the age-related effects of PA/PE on GM and the persistence of induced changes.
Different Types of PA/PE and Their Influence on GM
Most of the studies carried out on humans consider aerobic physical activity to assess the changes induced in the GM. Three studies [27,28,30] administered a mixed intervention protocol based on resistance training (RTE) plus cardiorespiratory exercise (CRE) or strength training plus endurance training, three studies [31,33,35] proposed separate interventions (RTE or CRE) with the same research design.
Another study reported that GM diversity in young and healthy subjects is associated with aerobic fitness; furthermore, GM adaptation to the demands of increased physical fitness would appear to be linked to several functional cores rather than specific bacterial groups. Moreover, the microbial profile of subjects with high levels of cardiorespiratory fitness (CRF) appears to stimulate the production of butyrate, probably through an increase in taxa such as Clostridiales, Roseburia, Lachnospiraceae, and Erysipelotrichaceae [12]. Subjects with low aerobic fitness show a higher presence of Eubacterium rectale-Clostridium coccoides (EreC), associated with obesity and related metabolic disorders [47]; Enterobacteria; and a lower presence of Bacteroides. However, results seem to be confounded by adiposity, since the relationship between maximal aerobic power (VO2max) and EreC disappeared after adjusting for body fat percentage [38]. There also appears to be a relationship between VO2max and the Firmicutes:Bacteroidetes ratio [41], supporting what has been observed in previous studies [12,14,38] and suggesting that PE can induce favorable changes in the composition of the GM in healthy subjects. Endurance activity, in line with what has already been observed [14], is able to modestly modify the GM compositionally and functionally without significantly affecting body composition in overweight, previously sedentary subjects [32]. Exercise increased the abundance of the phylum Verrucomicrobia, and within it the genus Akkermansia, associated with beneficial host effects due to its ability to enhance lipid oxidation [48]. Similar changes have been noted in other works [11,36], but here the increase in the genus Akkermansia occurred independently of weight, body composition, and diet, suggesting that these taxa are indeed responsive to exercise. Similarly, a reduction in the phylum Proteobacteria and an unidentified genus of Enterobacteriaceae associated with negative effects on host health was observed [49,50].
One study compared RTE and CRE activity [35] (unlike a previous study where RTE activity was marginal and not isolated [28]) and observed that cardiovascular, aerobic activity determines a change within the GM. In the RTE group, on the other hand, no changes in the microbial composition were observed, suggesting that changes in the GM are not a foregone conclusion of exercise but could depend on the way in which the activity is practiced. It is possible, however, that no difference was observed due to the lower adherence to microbiota sampling in the RTE group. Only one short-term (six-week) randomized controlled trial [31] divided participants into three groups: endurance, strength, and control. The 16S rRNA gene amplicon sequencing did not detect specific bacterial changes in any of the experimental groups, but only a wide variability in changes to the GM of individuals within the same group, suggesting that these modifications may be individual-specific. Stool samples of elite athletes were also analyzed and compared to control group subjects who did not exercise. No significant difference was found in α- and β-diversity, contrary to previous studies [11,39]; whereas, in line with these observations, a higher abundance of different Bacteroidetes species was found in the elite athletes, although the lack of dietary data could be a confounder. The main limitation of this study is the control group, which showed similar daily step values to the endurance group and even higher values than the strength group, which means that they cannot be considered sedentary subjects, as they performed low-intensity physical activity.
Few studies investigate the influence of exercise on the GM composition of athletes performing endurance activities. A specific correlation between certain taxa, detected only in professional cyclists, and a high endurance workload was identified, suggesting a possible connection between exercise intensity and specific GM changes. There is a significant correlation between the increased abundance of the genus Prevotella and the total duration in hours of weekly training performed (>11 h/week); consistent with previous observations [11], a reduced presence of Bacteroides and a relative abundance of the genus Akkermansia are observed. A high percentage of M. smithii has also been detected in some professional cyclists [39], with an overproduction of genes involved in methane production, which would allow the microbial community to be more energy-efficient [51,52]. Assessing the variation in the GM and metabolome across athletes of different sports, based on static and dynamic components, O'Donovan et al. [43] found the existence of individual variability among athletes, with the clustering of the majority of samples driven by the relative abundances of five species, namely F. prausnitzii, as previously found by Bressa et al. [36]; E. rectale; P. necessaries; B. vulgatus; and G. massiliensis. Variability, possibly due to variance in duration and exercise modalities, was observed in the absence of significant differences in dietary intakes across athletes from different sports. After a single bout of moderate-intensity exercise, an increase in the tryptophan, tyrosine, and phenylalanine metabolites, essential amino acids (EAAs) synthesized by the GM, was observed in 40 male endurance cross-country athletes; in particular, a strong cross-talk between the GM and systemic tryptophan metabolism was highlighted by changes in the abundance of E. coli and several bacterial taxa, such as Romboutsia, Ruminococcaceae UCG-005, Blautia, Ruminiclostridium 9, and Clostridium phoceensis [45], which possess tryptophan synthesis machinery [53].
Gut Microbiota and Different Physical Activity Levels
Several cross-sectional studies tried to evaluate the specific relationship between PA/PE and GM; one of the earliest studies done in humans [11] was conducted on professional rugby players during preseason training. Results were correlated to diet, since it was not possible to isolate PE from the diet of professional athletes. A higher richness and α-diversity were observed in the athletes compared to the control groups, which was also detected later by another study [37]. Abundance in the group of athletes was observed in some specific groups, while at the phylum level the highest diversity was found in Firmicutes, particularly in Faecalibacterium prausnitzii, a producer of butyrate [54,55] and a marker of intestinal health [56]. Furthermore, abundance in the Akkermansiaceae family and genus Akkermansia has been found in athletes and the low BMI control group, which appears to be inversely correlated with obesity and associated metabolic disorders [48]. The lack of dietary and other external controls is the main confounder. Sequencing (shotgun) of the metagenome in its entirety to investigate the taxonomic composition and functional potential [40] has shown that the differences in the GM of athletes compared to the sedentary group are even more pronounced at a functional level than merely compositional. Contrary to the observations of Clarke et al. [11], comparing the composition of the intestinal microbiota of subjects who do not practice any type of PA and subjects who exercise at the minimum dose recommended by the World Health Organization (WHO) for adults aged 18-64 years [57], it seems that PA does not lead to significant changes in microbial richness and diversity (α and β). As a possible cause of this discrepancy, different exercise modalities and intensities are indicated [36]. A similar protocol was implemented through a randomized controlled trial with a 10-week moderate-intensity aerobic intervention [29]. Similarly, no significant difference was found in bacterial diversity (α and β), as already observed in other studies [14,32], nor at phylum, class, order, family, or species level. The same conclusion was reached by the only study linking short-term high-intensity interval training (HIIT) and changes in the GM [34]. This suggests that changes in diversity (α and β) require a greater and/or longer stimulus.
Since hormonal set-up can affect GM composition [58], 40 premenopausal women were divided into two groups, active (ACT) and sedentary (SED). No changes are present in the Bacteroidetes:Firmicutes ratio, but consistent with what has been previously reported [11], a downward trend is observed regarding the presence of Bacteroidetes in the ACT group.
In agreement with preclinical studies [59][60][61], in the ACT group a lower proportion of the family Turicibacteraceae was observed, as well as a significant abundance of some species such as R. hominis, A. muciniphila, and F. prausnitzii, species associated with positive effects on host health [62][63][64]. A greater presence of the genus Coprococcus has been found in ACT subjects, the scarcity of which has been associated with IBD [65]. The amount of fiber intake was significantly higher in the ACT group than in the SED group which, in turn, consumed more processed meats; all of these factors may play a key role in shaping the gut microbiota [66].
Topological analysis of the microbial network [44] suggests that the transition from an active to a sedentary lifestyle leads to changes in those bacterial taxa considered 'key' to host health and, in the opposite case, to a modification of those 'key' bacteria associated with diseases. Among the most relevant in the reorganization that occurs during the transition from ACT to SED are the species Roseburia faecis and an unclassified species of Roseburia, which are considered markers of good host health [64,67]. Within the bacterial network of ACT subjects, the Rikenellaceae and Erysipelotrichaceae families have been detected, whose role in humans, however, is still not entirely clear [11,68]. Finally, a key role in the transition from SED to ACT would appear to be played by some unclassified species of the genus Sutterella, recently associated with neurological disorders [69], with conditions such as ulcerative colitis with impaired immune system function [70], and identified as a typical driver in diabetes [71]. A strong limitation remains the different dietary habits of the ACT and SED groups: the former consumed more dietary fiber, while the latter consumed greater amounts of processed meats and sugars.
Athletes who engage in intense and prolonged PE show a particular composition of the microbiota, characterized by a higher abundance of bacteria involved in inflammatory processes, such as the genera Haemophilus and Rothia [72], Mucispirillum [73,74], and Ruminococcus gnavus [75]. Interestingly, a relative abundance of Faecalibacterium, a butyrate-producing bacterium usually known to be beneficial for human health [76][77][78], has been observed concomitantly with a high abundance of pro-inflammatory bacteria in female endurance runners, whose abnormal gut environment can lead it to act like an opportunistic bacterium [79]. Excessive PA/PE leads to stress that increases intestinal permeability [80], which results in the release of bacteria and bacterial products, such as lipopolysaccharides (LPS), recognized as toxic by the body and triggering an inflammatory response [81,82].
In both high-intensity and moderate-intensity PA, there is a reduction in intestinal inflammation with concomitant changes in the microbial profile [26]. A reduction in the Firmicutes:Bacteroidetes ratio is noted, with an increase in Bacteroidetes (which could be beneficial for athletes by playing a key role in the metabolic conversion of complex sugar polymers and in protein degradation [83][84][85][86]) and a reduction in the genera Clostridium and Blautia, which would appear to play a key role in the immune response [87][88][89]. In a recent randomized controlled trial [25], using a specific training protocol called ACTIWE [90], changes in β-diversity were observed in subjects performing either intensive (VIG) or moderate-intensity (MOD) physical activity, and a reduction in heterogeneity only in the VIG group. Changes in α-diversity were also observed in the MOD group, in contrast to Munukka et al. [32], whereas the VIG group experienced the greatest increase at 3 months post-intervention, compared to the control group, suggesting that more intense activity may be required to induce changes in the GM of previously sedentary overweight and obese subjects.
High-intensity physical activity seems capable of inducing alterations in the GM that correlate closely with improvements in glucose homeostasis and insulin sensitivity, regardless of reductions in body weight and fat mass [27]. Increased gene expression of functional pathways inducing the production of SCFAs and the degradation of BCAAs is observed, which may be related to improvements in glucose metabolism, as increased BCAA concentrations have been associated with insulin resistance [91][92][93][94].
Age-Related Changes on GM Induced by PA and How Long They Persist
It has been observed that the changes produced by PA on the microbiome are not unidirectional, as multiple factors can influence the outcome, including the host's age [95][96][97]. Furthermore, when compared to young adult populations, elderly subjects show a reduction in the diversity of the microbiota, which is usually characterized by a large interindividual variability [98]. An increase in the relative abundance of the genus Bacteroides in parallel with an improvement in CRF has been detected [33] as a result of 12 weeks of brisk walking in healthy elderly women (65+ years), supporting the observation made by Yang et al. [38] of an association between CRF and the presence of Bacteroides. Generally, bacterial species belonging to the genus Bacteroides can contribute to a reduction in metabolic dysfunction [99]. The narrowness of the sample used, the absence of previous habits in the performance of PA, and the higher energy expenditure induced by aerobic exercise may represent some limitations. Similarly [24], after 5 weeks of aerobic exercise, and in agreement with Bressa et al. [36], no changes in microbial diversity (α and β) were observed in elderly men (60+ years), only minor changes such as an increased relative abundance of the genus Oscillospira, positively associated with leanness [100,101], and a reduced abundance of C. difficile, suggesting a beneficial effect of PA operating through a reduction in the toxins produced by this bacterium. The results of this study are, however, limited by its short duration, lack of dietary control, and the exclusive use of healthy elderly male subjects.
No significant changes in α-diversity were detected in previously sedentary elderly women after 8 weeks of aerobic and resistance exercise. A partial change in the relative abundance and structure of the GM was observed, such as an increase in Betaproteobacteria, Burkholderiales, and Prevotella [30], as previously noted by Petersen et al. [39]. A modest association was found between PA levels and specific gut microbes among community-dwelling elderly men. PA was not associated with α-diversity, and just a slight association with β-diversity was observed [42]. Comparing lifetime elderly endurance athletes (LA) with healthy controls (CTRL) who met the American College of Sports Medicine (ACSM) recommendations on physical activity for older adults [102], no significant difference in GM diversity was observed [103]; a lower Bacteroides:Prevotella ratio was detected in the LA group, which could be a predictive marker of weight and fat loss [104].
To date, it is not entirely clear how long it takes for physical activity-induced changes in the GM to become stable. Aerobic activity appears to induce changes in the GM, but these changes did not seem to persist during, or until the end of, a short-term exercise protocol (8 weeks); in fact, these changes were more pronounced at the beginning of the program, subsequently decreasing until they became irrelevant [35]. This suggests that the persistent microbiome differences between athletes and non-athletes [11,37,40] may be the result of long-term lifestyle differences between these groups that cannot be quickly achieved.
Limitations and Future Recommendations
Given the many factors that can interact with and modify the human GM (e.g., type of birth, genetics, nutrition, age, antibiotics, stress), extrapolating the effects induced directly by PA/PE becomes a complex process. In particular, the type of diet, when not controlled, is a major confounding element; but even when attempts are made to track and standardize it, margins of uncertainty persist. In addition, the different exercise modalities (frequency, intensity, and volume) used by the researchers can contribute to the discrepancies in the results. Finally, different gene sequencing methods (16S rRNA, ITS rRNA, shotgun metagenomics) can lead to different results under similar experimental conditions, because of their different sensitivities in microbial identification. In the currently available studies, therefore, these critical aspects call for a very cautious consideration of the results obtained.
Thus, further randomized controlled trials are needed, with even more scientific rigor and a greater focus on aspects that are often overlooked, such as the presence of viruses and bacteriophages in the gut and metabolomic analysis, while controlling as much as possible for the various confounding factors that can substantially influence the results.
Conclusions
PA/PE appears able to counteract changes in the GM related to obesity and T2D, as well as to reduce inflammatory signals by creating a controlled environment for the GM. In addition, the ability to stimulate bacterial community richness has been observed. Furthermore, the GM of athletes may have enhanced functional capabilities, such as tissue repair and increased energy gain from carbohydrate metabolism.
From a practical point of view, some indications may arise as follows: higher levels of CRF result in greater bacterial diversity, regardless of diet; aerobic activities appear to be able to induce significant changes in the composition of the GM, in contrast to resistance activities, even if GM modifications could be individual-specific; the training modality and intensity can affect the GM differently; the volume and frequency recommended by the WHO (the minimum dose of physical activity) for adults aged 18-64 years appear to prompt some changes in the composition of the GM, but not significant changes in terms of richness and diversity; a medium-low intensity training protocol produces limited changes in the GM, and a higher intensity seems to be necessary to induce changes in previously sedentary overweight and obese subjects; strenuous and/or excessively prolonged exercise may negatively impact the GM. However, the ideal dose-response relationship of exercise remains uncertain, given the multiplicity of exercises and their dynamics of intensity and volume, which have not yet been fully studied. Factors that can produce adaptive changes in the structure of the GM, including PA/PE, may have a greater and more lasting impact if exploited early in life; exercise-induced changes are transient and most likely dependent on repeated stimuli over time.
Funding: This work is funded by Fundação para a Ciência e Tecnologia/Ministério da Ciência, Tecnologia e Ensino Superior through national funds and, when applicable, co-funded with EU funds under project UIDB/50008/2020.
Institutional Review Board Statement:
The study was conducted from September 2021 to February 2022 in accordance with the guidelines of the Declaration of Helsinki. For this study, the Department of Basic Medical Sciences, Neurosciences and Sensory Organs (Sports Science Section) at the University of Bari "Aldo Moro" did not consider the approval of the Ethics Committee necessary, because the research did not involve any clinical, health, or biological treatments.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data presented in this study were obtained from the included studies and are openly available.
Conflicts of Interest:
The authors declare no conflict of interest. | 2022-03-23T15:21:34.539Z | 2022-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "0f05c6ec9bbd16636f388a7bcf4d3131ddb8d1ad",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-7737/11/3/479/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5cf003714bc11b69088784145700b55568bd6fee",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
209862421 | pes2o/s2orc | v3-fos-license | The Influence of Mechanical Deformations on Surface Force Measurements
Surface Force Balance (SFB) experiments have been performed in a dry atmosphere and across an ionic liquid, combining the analysis of surface interactions and deformations, and illustrate that the mechanical deformations of the surfaces have important consequences for the force measurements. First, we find that the variation of the contact radius with the force across the ionic liquid is well described only by the Derjaguin–Muller–Toporov (DMT) model, in contrast with the usual consideration that SFB experiments are always in the Johnson–Kendall–Roberts (JKR) regime. Secondly, we observe that mica does not only bend but can also experience a compression, of order 1 nm with 7 µm-thick mica. We present a modified procedure to calibrate the mica thickness in a dry atmosphere, and we show that the structural forces measured across the ionic liquid cannot be described by the usual exponentially decaying harmonic oscillation, but should be considered as a convolution of the surface forces across the liquid and the mechanical response of the confining solids. The measured structural force profile is fitted with a heuristic formulation supposing that mica compression is dominant over liquid compression, and a scaling criterion is proposed to distinguish situations where the solid deformation is negligible or dominant.
Introduction
Understanding the behavior of liquids in nanoconfinement is crucial for a range of applications including energy storage (electrolyte in contact with porous electrodes), lubrication (lubricant in between rough sliding surfaces) and filtration (such as sea water through a membrane), as well as in biological systems (ion crossing the cell membrane in a nanochannel, etc.). Very often, model experiments are performed with elementary systems of simple geometry: two surfaces facing each other in force measurements, a single channel in flow measurements. This allows for a simpler mathematical description of the system, while putting aside the complex collective effects happening in the presence of multiple asperities or pores. The ultimate goal is to understand the underlying physics at a fundamental level, independent of a particular geometry. For example, the Surface Force Balance/Apparatus (SFB/SFA), and the Atomic Force Microscope (AFM) are force measurement techniques using surfaces of different shapes and radii of curvature R: crossed cylinders of radii ∼1 cm in SFB, sphere of radius ∼10 nm-10 µm and a plane in AFM. The normal interaction force profiles F(D) obtained can, in certain conditions, be directly compared by rescaling the measured force F by the radius of curvature R. Indeed, Derjaguin showed that F/R is proportional to the energy density between equivalent planar and parallel surfaces, provided that (i) D ≪ R, (ii) the interactions involved are additive and (iii) the surfaces are not deformed [1,2]. However, real solids are not perfectly rigid, and significant deformations of the surfaces can occur depending on the strength and range of the interaction. For example, when measuring friction in the boundary lubrication regime and under applied load, a strong repulsion across the molecularly thick boundary film typically leads to a substantial flattening of the surfaces [3]. Such deformations are not limited to situations of nanometric proximity of the confining solids, as they can be mediated by hydrodynamic interactions across the liquid [4,5]. These mechanical deformations have to be taken into account in order to correctly interpret the data, to compare with theories or numerical simulations, and to extrapolate the results to other experimental set-ups or practical applications. The SFB/SFA is a tool of choice in this respect, because the analysis of the so-called Fringes of Equal Chromatic Order (FECO) allows for an in situ characterization of the geometry, with a submolecular resolution (∼0.1 nm) in the normal direction and an optical resolution (∼1 µm) in the lateral direction. Although some pioneering works have been specifically dedicated to the study of contact mechanics with the SFA (i.e., the variation of the contact radius with the force, see, for example, [6,7]), deformation measurements are relatively rare in the extensive SFA literature. This is maybe because the amplitudes of the reported forces are generally small, and so these deformations are too small to be measured accurately and are calculated using theoretical models instead, or the deformations are considered to have a negligible effect on the force profile (i.e., the variation of the force with the distance) [8][9][10][11][12][13][14].
In this paper, we report on SFB experiments performed in a dry atmosphere and across an ionic liquid, for which we exploited the full capabilities of the instrument to measure directly and simultaneously the surface interactions and deformations. In the first case of the dry atmosphere (adhesion force of strong amplitude and short range), we find that the variation of the contact radius with the force is well described by a Johnson-Kendall-Roberts (JKR) model [15] and an effective elastic modulus describing the layered solid surfaces, in perfect agreement with the seminal work of Horn, Israelachvili and Fribac [6]. For relatively thick mica (∼7 µm), we simultaneously observe that the mica does not only bend but also experiences a compression of ∼1 nm. Consequently, the common calibration procedure that uses the jump-in point in a dry atmosphere leads in this case to an underestimation of the mica thickness and an equivalent outward shift of the force profile measured after injecting the liquid; we propose a modified calibration procedure to take this effect into account. In the second case of the ionic liquid (adhesion force of moderate amplitude and long range), we find that the variation of the contact radius with the force is well described by a Derjaguin-Muller-Toporov (DMT) model [16] and an effective elastic modulus describing the layered solid surfaces, in contrast with the widespread assumption that contact mechanics between adhering surfaces in the SFA/SFB is always described by JKR model [17]. For the system studied here, we also show that the mechanical deformations of the solid surfaces strongly affect the shape of the structural force profile, with the commonly used exponentially decaying harmonic oscillation being convoluted with the compression of the mica. Finally, we propose a heuristic formulation to describe such convoluted structural force profile when the solid compression dominates the liquid compression, as well as a general scaling criterion to determine, for any surface force experiment, its regime of convolution (i.e., solid compression negligible or dominant compared to liquid compression).
Materials and Methods
The Surface Force Balance (SFB) has been explained in detail in previous publications [2,11,18]. Here, we briefly recall the principle of the instrument, illustrated in Figure 1b, and the details particular to the present experiments.
Muscovite mica is cleaved to produce atomically smooth facets of micrometric thickness and millimetric extension, which are backsilvered and glued onto glass (fused silica) cylindrical (radius R ∼1 cm) lenses with an epoxy resin (EPON 1004, Shell Chemicals, Houston, TX, USA). Two surfaces are made with mica of the same thickness, form a stack of different layers (represented in Figure 1c, together with their thicknesses, Young's moduli, Poisson's ratios and elastic moduli), and are arranged in a cross-cylinder geometry. First, calibrations are carried out in a dry atmosphere, which is achieved by inserting P2O5, phosphorus pentoxide (Sigma-Aldrich, St. Louis, MO, USA, 99%), in the chamber and purging the chamber with N2, nitrogen, about one hour prior to the calibrations. Secondly, measurements are performed with an ionic liquid, because these liquids have been proven to resist squeeze-out even under very large loads [19], a regime in which significant mechanical deformations of the surfaces are expected. The liquid used is [C4C1Pyrr][NTf2], 1-butyl-1-methylpyrrolidinium bis[(trifluoromethane)sulfonyl]imide (Iolitec, 99%), whose chemical structure and size are indicated in Figure 1a (molar mass M = 422.41 g/mol, density ρ = 1.405 g/mL, refractive index n = 1.422 and dynamic viscosity η = 74 mPa·s at 25 °C [20]). It is dried in a Schlenk line at 60 °C and 5 × 10⁻³ mbar for ∼10 h and inserted in the chamber just after, which contains P2O5 and is purged again with N2 for about one hour prior to the measurements.
Figure 1. (a) Ion sizes are estimated from geometry, bond lengths and covalent radii, associated with the most stable configuration found by energy minimization (Chem3D 16.0, PerkinElmer Informatics). (b) Schematic of the SFB experiment to measure the surface interactions and to characterize, in situ, the geometry, when a liquid is confined between two mica surfaces. (c) Composition of the layers forming each solid surface, with associated thickness e, Young's modulus E, Poisson's ratio ν and elastic modulus K = (2/3) E/(1 − ν²) (values from [7,21,22]). For the anisotropic mica, the given mechanical properties correspond to the c-axis, the direction of interest for this study. A wide range of Young's moduli have been reported for mica (50-500 GPa in [6]); the quoted value has been consistently obtained by Brillouin scattering [21] and nanoindentation [7].
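For reference, the per-layer elastic modulus defined in the caption can be evaluated as in the minimal sketch below; the Young's moduli are those quoted later in the text (62 GPa for mica along the c-axis, 3.4 GPa for the glue), while the Poisson's ratios are illustrative placeholders rather than the values tabulated in Figure 1c.

```python
def layer_modulus(E_GPa, nu):
    """Elastic modulus K = (2/3) E / (1 - nu^2) of a layer, in GPa."""
    return (2.0 / 3.0) * E_GPa / (1.0 - nu**2)

# Young's moduli quoted in the text; Poisson's ratios are assumed for illustration.
layers = {"mica (c-axis)": (62.0, 0.25),
          "glue (epoxy)": (3.4, 0.35)}

for name, (E, nu) in layers.items():
    print(f"{name}: K ~ {layer_modulus(E, nu):.1f} GPa")
```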
White light is passed through the confined medium, it interferes in this optical resonator, and is then directed towards a spectrometer and collected by a CCD camera (QImaging Retiga R6, resolution 2688 × 2200 px²). The analysis of the Fringes of Equal Chromatic Order (FECO, shown in Figure 2a,c) then allows one to calculate the profile of the distance between the surfaces (Figure 2b,d) [23]. We developed an automatic procedure in order to extract the apical distance D and the contact radius a from this profile. The procedure is described in detail in the supplementary materials, together with a brief presentation of the models of contact mechanics that will be used in the next section to analyze the data. Finally, the apical distance D is measured with a precision of 0.02 nm given by the standard deviation of the signal and an accuracy of 1 nm due to light dealignment when changing the contact spot [24]. The contact radius a is measured with a precision of 0.03 µm given by the standard deviation of the signal, and an accuracy of 1 µm due to the uncertainty regarding the value of the radius of curvature R.
The top surface can be moved normally with a stepper motor (large displacement range ∼10 µm, poor linearity, mechanical vibrations induced) or with a piezoelectric tube (small displacement range ∼1 µm, good linearity, no measurable mechanical vibrations induced). For a given run, the velocity v can be determined with a precision of ∼1%. From run to run, this velocity can typically vary by ∼10% for the same control parameters, due to thermal drifts. In the following, some graphs result from the superposition of several runs, which is why the indicated velocities have an error bar of ∼10%. The bottom lens is mounted on a spring of constant k = 2670 ± 84 N/m, which is calibrated before the experiment by measuring its deflection when adding different masses. The normal force F is then deduced from the temporal evolution of the distance D(t) when applying a constant velocity v to the top surface, using a procedure that takes into account the presence of the viscous force at large separations (detailed in [18]). In comparison to previous studies we performed, the normal spring used in the present measurements is about 20 times stiffer; this was chosen in order to apply a larger load (at fixed displacement range). Note that this choice of a stiff spring increases the explored ranges of load and deformation as well as the slope of the spring instability, which is convenient for the investigation of contact mechanics, but does not limit the generality of this study. This comes with a price in terms of sensitivity limit, ∼10⁻² mN, which does not allow one to detect the anomalously long-range electrostatic force that has been observed with concentrated electrolytes [25,26].
Figure 2. A parabolic fit at small scale close to the apex (in green) allows one to measure the apical distance D, while a parabolic fit at large scale (in blue) is used to extract the contact radius a. In both cases, the FECO images were recorded with a black and white camera, then recolored using the calibration of the wavelength axis with a mercury lamp. The two particular cases shown here correspond to the points of maximum load reached in Figure 3.
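To make the force-extraction step above concrete, the following is a minimal sketch in the simplest quasi-static limit, where the viscous contribution is neglected (the full procedure of [18] also corrects for the viscous force at large separations): the spring deflection is the difference between the displacement imposed on the top surface and the measured change in gap, and the force is this deflection multiplied by the calibrated spring constant. The function and the example values are illustrative assumptions, not the actual analysis code.

```python
import numpy as np

def normal_force(t, D, v, k=2670.0):
    """Quasi-static estimate of the normal force from the distance trace D(t).

    t: time (s); D: measured gap (m); v: imposed drive velocity (m/s, positive
    on approach); k: spring constant (N/m). The spring deflection is the imposed
    displacement minus the measured closing of the gap.
    """
    imposed = v * (t - t[0])           # displacement applied to the top surface
    gap_change = D[0] - D              # how much the gap has actually closed
    deflection = imposed - gap_change  # what the spring has absorbed
    return k * deflection              # force in N

# Hypothetical example: the gap is pinned at 5 nm by a repulsive wall,
# so the whole imposed displacement loads the spring and the force ramps up.
t = np.linspace(0.0, 10.0, 11)
D = np.full_like(t, 5e-9)
print(normal_force(t, D, v=1e-9) * 1e3)   # forces in mN
```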
Calibrations in a Dry Atmosphere
In Figure 3a,b are shown the force F, the distance D and the contact radius a measured when approaching (in red) then retracting (in blue) the top surface with the stepper motor at 13.2 nm/s across N2. Initially separated by N2, the surfaces are brought closer until they experience a strong van der Waals attraction, which together with the spring instability causes a jump-in to mica-mica contact and a slightly negative force (F = −0.20 mN); thereafter, the applied load is increased continuously and very large positive forces are reached. At some point (F = 27.91 mN), the direction of the motion is reversed, the applied load is decreased continuously and large negative forces are reached until the spring instability at F_s = −5.71 mN leads to a jump-out to large distances.
The variation of the contact radius a with the force F, plotted in Figure 3b, is totally consistent with the work of Horn, Israelachvili and Pribac [6]. Clearly, the jump-out happens at a non-zero value for the contact radius (a_s = 10.23 µm), which is a typical feature of the JKR model (see the corresponding FECO image and separation profile in Figure S1a,b). The measured a(F) relationship is compared to the JKR prediction (Equation (4) of the Supplementary Materials) in the following manner. First, the force F_s = −5.71 mN reached just before the jump-out is used to compute the adhesion energy W = −2F_s/(3πR) = 131.79 mN/m. Then, a fitting procedure provides the elastic modulus K = 47.0 ± 0.5 GPa. As the range of a van der Waals attractive force across a gas is d ∼0.3 nm [6], a Maugis parameter Ma ∼19 ≫ 1 is obtained (using Equation (6) of the Supplementary Materials), confirming the suitability of the JKR model for this system [27]. The data are reasonably fitted by the model (see the curve in green), and the extracted W and K are consistent with the values reported in previous studies for similar systems [6,7]. As the solids used in this experiment are complex stacks of layers associated with different thicknesses, Young's moduli and Poisson's ratios (values from the literature given in Figure 1c), the value of K extracted from the fit should be interpreted as an effective quantity. Therefore, it is expected that this value is between the Young's modulus of mica or glass and the Young's modulus of the glue, and depends on the mica and glue thicknesses [6]. Note that the curves measured on approach and retraction are not superimposed, a phenomenon known as adhesion hysteresis and due to non-elastic processes at play. For this system, it is generally assumed that adhesion hysteresis is mainly caused by viscoelasticity and plasticity in the glue layer [6,7].
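For illustration, the sketch below shows how such a fit can be set up: W is fixed from the jump-out force via W = −2F_s/(3πR), and K is the single free parameter of the JKR a(F) relation. Since Equation (4) of the Supplementary Materials is not reproduced in this excerpt, the standard JKR expression for two identical solids is assumed here, and the radius of curvature and data arrays are placeholders rather than the measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 0.92e-2      # radius of curvature (m); assumed so that the quoted W is recovered
F_s = -5.71e-3   # jump-out force (N), from the measurement
W = -2 * F_s / (3 * np.pi * R)   # adhesion energy, ~0.13 N/m

def a_JKR(F, K, R=R, W=W):
    """Standard JKR contact radius a(F) (m) for identical solids; K in Pa."""
    term = 3 * np.pi * W * R
    return (R / K * (F + term + np.sqrt(2 * term * F + term**2))) ** (1 / 3)

# Placeholder data standing in for the measured a(F) curve (F in N, a in m).
F_data = np.linspace(-4e-3, 25e-3, 20)
a_data = a_JKR(F_data, 47e9) * (1 + 0.01 * np.random.randn(F_data.size))

K_fit, _ = curve_fit(a_JKR, F_data, a_data, p0=[50e9])
print(f"K = {K_fit[0] / 1e9:.1f} GPa")
```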
The variation of the force F with the distance D is plotted in Figure 3a. We observe a clear change in D when the surfaces are in contact, of ∼5 nm between the points of maximum load and of jump-out, or ∼2 nm between jump-in and jump-out. Potentially, imperfections of the double cantilever spring may induce a tiny rotation of the solids when varying the load, leading to a progressive dealignment of the light and so to a shift of the fringe positions and of the extracted distance. However, we ruled out this possible artefact by checking the tuning of the optics at regular intervals during the loading-unloading cycle. Note that the curves measured on approach and retraction are not perfectly superimposed, mainly due to mechanical imperfections of the set-up, i.e., non-linearity of the stepper motor and long-term thermal drifts, leading to errors in the determination of the force. In addition, we have made sure that this phenomenon is not due to a potential contamination of a particular experiment, by systematically observing that mica undergoes a significant indentation in many separate experiments with different mica sheets. As the mica is the single material separating the two silver mirrors, such a variation can only be explained by a compression of the two mica layers in contact. In early SFA/SFB studies, FECO were observed directly by eye or recorded with analog photography, which is why very thin mica layers were used (typically of 2 µm thickness) to maximize the resolution of the measurement, and it was estimated that the mica experiences only negligible compression, of order ∼0.1 nm [6,28,29]. However, in recent years, as high resolution CCD cameras have been used in combination with fast spectral correlations and multilayer matrix methods to analyze the FECO [30], there has been a tendency towards working with thicker mica. One expects, in this case, a significant compression of mica, which indeed has been repeatedly mentioned in studies using a so-called extended surface force apparatus [31][32][33][34][35][36][37], but has not been studied specifically.
Such compression of the mica spacer layers raises a technical difficulty for analyzing the FECO. As is usually done, we have supposed that the interferometer is composed of two mica layers of the same constant thickness e_mica,0 separating a vacuum layer of thickness D (with known and constant refractive indices), which is clearly wrong here when the surfaces are in contact. To be more rigorous, we have also re-analyzed the FECO when the surfaces are in contact, now supposing that the interferometer is composed of a single mica layer of variable thickness 2e_mica = 2e_mica,0 − 2δe_mica (δe_mica is defined as positive for compression and negative for dilatation). It turns out that the two methods provide the same amplitude of compression; qualitatively, this is because in both cases we are looking at a small wavelength variation around a given reference point, which is not sensitive to the refractive index considered. Consequently, the variations of the distance D observed here have to be interpreted as changes in N2 thickness when the surfaces are out of contact, and changes in mica thickness when the surfaces are in contact.
In these conditions, an important question is how to define the mechanical origin, i.e., the point at which D = 0. A sensible choice is to select the point where the indentation of the surfaces is zero, i.e., δ = 0. Following the JKR model, δ is positive at the maximum load (compression) and negative at the jump-out (dilatation). The point of zero indentation is located in between and does not correspond to the point of zero force, i.e., F = 0. Considering that the measured a(F) relationship is reasonably fitted by the JKR model, we have taken the point δ = 0 from the fit as the mechanical zero for the measured D (as indicated in Figure 3a,b), corresponding to a mica thickness e_mica,0 = 7.431 µm. Such a definition of the mechanical origin then leads to a distance D that can be negative when the mica layers are compressed. This calibration procedure is significantly different from what is usually done. In general, the force profile in a dry atmosphere is not measured, but the surfaces are slowly approached until they jump-in to contact, and it is at this point that the "mica thickness" is calibrated. In our experiment, the surfaces are already compressed by 1.4 nm just after the jump-in. The usual procedure therefore leads to an underestimation of the mica thickness by ∼1 nm for this ∼7 µm-thick mica, and an equivalent outward shift of the force profile measured after injecting the liquid (adhesion is generally much smaller across a liquid than across a dry atmosphere). If the mica is thinner or if the spring is stiffer, the shift is expected to decrease and there is less ambiguity regarding the definition of the mechanical origin [28]. However, in the general case, this effect is present, and taking it into account is particularly important when investigating aspects at the molecular scale. In the following, we present three examples of situations where this may matter.
• Since the conception of the SFA, negative distances from −0.2 nm to −1.3 nm have been reported when two mica surfaces separated by water jump-in to contact [11,28,[38][39][40][41][42]. This is due to the washing of gas molecules and organic contaminants (carbon compounds) that are spontaneously deposited on the mica surfaces in air [43] and to the dissolution in water of the potassium ions initially present on the mica surfaces. As adhesion is typically 10 times smaller in water than in a dry atmosphere, the mica is expected to be less compressed after the jump-in across water than during the calibration after the jump-in across a dry atmosphere. As the compression of mica was not considered in these studies, the thickness of the contaminant layer may be underestimated, albeit not by more than a few angstroms given the much thinner mica used. Nevertheless, the dependence of this effect on the mica thickness and spring constant may explain, at least in part, the strong variability in the reported values.
• In the case of a molecular liquid giving rise to a structural force profile under confinement, a good accuracy on the distance D is needed in order to identify the absolute number of ordered layers composing the film (as illustrated in the subsection titled "Influence of surface deformations on structural force profile").
• Dynamic measurements can be performed with the SFA to determine the slip length associated with a flow of liquid in the vicinity of a solid surface. By definition, the slip length is the distance between the hydrodynamic origin and the mechanical origin.
Being able to measure nanometric slip lengths therefore requires having a subnanometric resolution on the mechanical zero [18,[44][45][46][47][48].
Finally, one can ask whether the JKR model can also describe the F(D) relationship. If we use Equation (4) of the supplementary materials and a reference D_ref = δ + D = 0 (i.e., the origin D = 0 corresponds to δ = 0), with the values W = 131.79 mN/m and K = 47.0 GPa coming from the fit of the a(F) relationship, the model does not fit at all and predicts an indentation that varies much more than in the experiment (by ∼40 nm instead of ∼5 nm in the explored range of force, see the green curve in Figure 3a). Qualitatively, this is because the effective parameters coming from the fit of the a(F) relationship correspond to the indentation of the whole mica/glue/glass system, while here we measure the indentation of the mica only. If we approximate the different layers composing the solids as springs in series, we expect that the ratio of the indentation of the mica relative to the indentation of the glue is of the order of the ratio of the Young's modulus of the glue relative to the Young's modulus of the mica, i.e., 3.4 GPa/62 GPa = 5.5%, which is indeed of the same order of magnitude as the measured ratio of 5 nm/40 nm = 12%. If we now relax the parameter K to fit the F(D) relationship (black curve in Figure 3a), it does not fit the a(F) relationship (for the same reason given just before, see the black curve in Figure 3b) and it provides a value K = 900 ± 200 GPa, one order of magnitude larger than the elastic modulus of mica. This is because we probe a contact zone of size a ∼ e_mica, while the validity of the JKR model would require a ≪ e_mica to consider the mica as a semi-infinite solid. Qualitatively, the finite size of the mica layers cuts off the range of the elastic deformations, leading to an apparent stiffening of the solids compared to their bulk counterparts. Some analytical formulations exist for the opposite case of an infinitely thin elastic layer between two rigid solids [49,50] (applicable for a ≫ e_mica), but not for the intermediate case present here (a ∼ e_mica).
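The springs-in-series estimate used above can be made explicit with a short back-of-the-envelope check; it only encodes the fact that, for layers in series under the same stress and of comparable thickness, each layer's indentation scales roughly as the inverse of its Young's modulus. It is a sketch, not a substitute for the layered contact models cited below.

```python
# Layers in series carry the same stress, so the indentation of each layer
# scales roughly as 1/E (for comparable thicknesses): the stiff mica takes
# only a small share of the total indentation.
E_glue = 3.4    # GPa, epoxy glue (value quoted in the text)
E_mica = 62.0   # GPa, mica along the c-axis (value quoted in the text)
print(f"mica indentation / glue indentation ~ {E_glue / E_mica:.1%}")   # ~5.5%
```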
To summarize, the JKR model reasonably fits the variation of the contact radius with the force (considering the whole mica/glue/glass system as an effective homogeneous material) but completely fails to describe the variation of the force with the distance, because only the indentation of the mica is measured, which forms a layer that cannot be considered as semi-infinite. In order to simultaneously fit the a(F) and F(D) relationships with a coherent set of parameters, complex models that explicitly include the description of all the layers are needed. So far, only a few remarkable approaches have attempted to quantitatively fit the a(F) relationship with such a description, using approximated analytical formulations [29] or finite element methods [7,51]. In particular, they described how the effective elastic modulus K depends on the properties of the different layers and showed that these multi-layer aspects have to be taken into account to obtain a correct value of the adhesion energy W from the jump-out force F_s [52]. Similar approaches could be followed in the future to quantitatively fit the F(D) relationship while taking into account the compression of mica, which has been neglected so far.
Contact Mechanics Across an Ionic Liquid
In Figure 3c,d are shown the force F, the distance D and the contact radius a measured when approaching (in red) then retracting (in blue) the top surface with the stepper motor at 10.5 nm/s across [C4C1Pyrr][NTf2]. When the ionic liquid is confined at the nanoscale, a structural force profile is observed due to the organization of the ions in ordered layers. Initially far away, the surfaces are brought closer until they experience a repulsive wall at D ∼1.5 nm (hereafter reported as layer i = 2) for a load of up to F = 0.39 mN, then a layer is squeezed out and the surfaces jumped in to another repulsive wall at ∼0.5 nm (hereafter reported as layer i = 1); thereafter, the applied load is increased continuously and very large positive forces are reached. At some point (F = 17.80 mN), the direction of the motion is reversed, the applied load is decreased continuously and small negative forces are reached until the spring instability at F_s = −0.47 mN leads to a jump-out to large distances. In this subsection, we focus on the contact mechanics of the system when the liquid is composed of a single layer of ions (layer i = 1). The influence of the mechanical deformations on the structural force profile will be detailed in the next subsection.
The variation of the contact radius a with the force F is plotted in Figure 3d. In contrast with the measurements across N2, the jump-out clearly happened at a zero value for the contact radius (a_s = 0 within the systematic experimental error), which is a typical feature of the DMT model (see the corresponding FECO image and separation profile in Figure S1c,d).
The measured a(F) relationship is compared to the DMT prediction (Equation (3) of the Supplementary Materials) in the following manner. First, the force F_s = −0.47 mN reached just before the jump-out is used to compute the adhesion energy W = −F_s/(2πR) = 8.18 mN/m. Then, a fitting procedure provided the elastic modulus K = 16.7 ± 0.5 GPa. The data are reasonably fitted by the model (see the curve in green) and the extracted K is of the same order as the value previously obtained from the analysis of the variation of the contact radius with the force in N2. The effective elastic modulus changed by a factor of 3 after injecting the liquid, which can be attributed to a different glue thickness, as it cannot be ensured that the spots used on the surfaces are strictly the same. Our findings contrast with the widespread consideration that SFA measurements are always performed in the JKR regime, which is based on the calculation of the Maugis parameter Ma from Equation (6) of the Supplementary Materials [17]. The difficulty of such an approach is that Ma strongly depends on the range d of the attractive forces, which can be delicate to estimate. For this measurement, we have to assume d ≳ 5 nm for the range of the attractive force across the ionic liquid (a plausible value, given the range of the structural force profile detailed in the next subsection), to obtain a Maugis parameter Ma ≲ 0.4 that corresponds to the DMT regime. On the contrary, looking at the value of the contact radius just before the jump-out is an extremely sensitive method, which does not rely on any estimation of microscopic parameters. For our experiment with the ionic liquid, the JKR model predicts that the surfaces would separate at a_s = (3πR²W/(2K))^(1/3) = 6.39 µm (for W = −2F_s/(3πR) = 10.91 mN/m and K = 16.7 GPa), while we clearly observe that the jump-out happened at a contact radius that is below the systematic experimental error of 1 µm. This unambiguously shows that we are in the present case not in the JKR regime but in the DMT regime.
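The numbers used in this comparison can be reproduced with a few lines, as in the check below; it uses the relations quoted above, with the radius of curvature set to ∼0.92 cm, an assumed value chosen so that the quoted adhesion energies are recovered (only R ∼1 cm is stated in the text).

```python
import numpy as np

F_s = -0.47e-3   # jump-out force across the ionic liquid (N)
R = 0.92e-2      # radius of curvature (m); assumed value, see lead-in
K = 16.7e9       # effective elastic modulus from the a(F) fit (Pa)

W_DMT = -F_s / (2 * np.pi * R)         # ~8.2 mN/m, DMT interpretation
W_JKR = -2 * F_s / (3 * np.pi * R)     # ~10.9 mN/m, if JKR were assumed instead
a_s_JKR = (3 * np.pi * R**2 * W_JKR / (2 * K)) ** (1 / 3)   # JKR pull-off radius

print(f"W (DMT) = {W_DMT * 1e3:.2f} mN/m, W (JKR) = {W_JKR * 1e3:.2f} mN/m")
print(f"JKR would predict separation at a_s = {a_s_JKR * 1e6:.1f} um")   # ~6.4 um, not observed
```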
Two situations have been addressed in the seminal paper of Horn, Israelachvili and Pribac [6]: the case of strong adhesion (W ≳ 100 mN/m) over a range of a fraction of a nanometer, obtained with a dry atmosphere, which was well fitted by the JKR model, and the case of negligible adhesion (W ≲ 1 mN/m), obtained with an aqueous electrolyte, which was well fitted by the Hertz model [3]. For our intermediate situation of moderate adhesion (W ∼10 mN/m) over a range of a few nanometers, obtained with an ionic liquid, we have shown that the variation of the contact radius with the force is well fitted by the DMT model. This finding is of general interest, as such a situation of moderate adhesion over a range of a few nanometers is not specific to ionic liquids only, but is frequently encountered in SFA experiments (for example, with apolar liquids, salt solutions, polymer melts or liquid crystals), and the use of a correct model of contact mechanics is necessary to interpret force measurements, in particular for the two situations listed below.
• The jump-out force F_s obtained with force measurement techniques is routinely used to deduce the surface energy W. As the relationship between these two quantities depends on the model (W = −2F_s/(3πR) in the JKR model, W = −F_s/(2πR) in the DMT model), it is crucial to know the regime of contact in order to extract reliable values [53].
• When investigating friction in the boundary lubrication regime with smooth adhering surfaces, the applied load is generally large enough to flatten the sliding surfaces. These mechanical deformations have to be known in order to interpret the data, in particular to determine whether the friction is controlled by the area of contact or by the load, and to unravel the role of adhesion [10,39,[53][54][55][56][57][58][59][60][61].
The variation of the force F with the distance D is plotted in Figure 3c. When the surfaces are separated by a single layer of ions (layer i = 1), we clearly observe that D can be negative (a possibility that was mentioned in recent studies [35][36][37]) and changes by ∼4 nm between the points of maximum load and of jump-out. This is due to the compressibilities of the mica layers and of the liquid film, i.e., the materials separating the two silver mirrors. As explained in the previous subsection, for such small changes in D it is not possible to disentangle the contributions coming from the two mica spacers (indentation 2e_mica,0 − 2e_mica = 2δe_mica, defined as positive for compression and negative for dilatation) and the liquid film (thickness D_liquid) with the FECO. In fact, we effectively measure the distance between the silver mirrors D_mirrors = D_liquid + 2e_mica, from which we subtract the undeformed mica thickness 2e_mica,0 calibrated in a dry atmosphere, to finally obtain D = D_mirrors − 2e_mica,0 = D_liquid − 2δe_mica. Similarly to the calibrations in a dry atmosphere, one can ask whether the DMT model can also fit the F(D) relationship, supposing that most of the changes in D come from the indentation of the mica (taking a reference D_ref = δ + D = 1.4 ± 0.4 nm). If we use Equation (3) of the supplementary materials with the values W = 8.18 mN/m and K = 16.7 GPa coming from the fit of the a(F) relationship, the model does not fit at all and predicts an indentation that varies much more than in the experiment (by ∼50 nm instead of ∼4 nm in the explored range of force; see the green curve in Figure 3c). If we now relax the parameter K to fit the F(D) relationship (black curve in Figure 3c), it does not fit the a(F) relationship (black curve in Figure 3d) and it provides a value of K = 600 ± 100 GPa, one order of magnitude larger than the elastic modulus of mica. Again, this is because we measure only the contribution of the mica to the total indentation of the solids, and the mica layers cannot be considered as semi-infinite at such large loads where we probe a contact zone of size a ∼ e_mica.
Influence of Surface Deformations on Structural Force Profile
In this subsection, we now focus on the detailed shape of the structural force profile. Due to the spring instability, the surfaces experience a jump-in on approach every time a layer is squeezed out, and a jump-out on retraction from a given layer. Therefore, several runs are necessary for the most comprehensive exploration of the force profile. Figure S2 shows the force profiles measured when approaching the top surface up to a given layer and retracting from this layer with the piezoelectric tube at v = 0.5 nm/s. Five layers can be distinguished and are labeled by i. From run to run, the whole force profile randomly shifts by a fraction of a nanometer, while the jump-in distances and the forces are reproducible. We consider that these shifts are non-physical, but result from imperfections of the set-up, such as fluctuations of the contact spots on the surfaces that lead to slight dealignment of the light from run to run. Therefore, in the present work, we have manually shifted the force profiles such that all the approaches are fitted to the approach up to layer i = 1 (in red). In Figure 4a is shown the resulting force profile with only the approach up to layer i = 1 (in red) and the retractions from the different layers for clarity (i = 1 in blue, i = 2 in green, i = 3 in orange, i = 4 in purple, i = 5 in yellow). Such a structural force profile has been observed many times with ionic liquids and is attributed to the ordering of ions in the film, with a structure consisting of alternating layers of anions and cations (as sketched in the inset) [62][63][64][65][66][67][68][69][70]. For a given layer i, the distance D is not constant. In general, this is interpreted as a result of the change in the local liquid density for infinitely stiff surfaces, and the structural force profile is fitted with a semi-empirical exponentially decaying harmonic function of the form F(D) = F_0 exp(−(D − D_0)/ζ) cos(2π(D − D_0)/λ) (Equation (2)), where the four fitting parameters are the period of the oscillations λ, the decay length ζ, and the position D_0 and amplitude F_0 of the first layer. To characterize the structural force profile, we have measured, for each layer i, the distances D_max,i and D_min,i at the points of maximum and minimum force; their variations with the layer index i are fitted linearly with D_max,i = D_max,0 + i λ_max and D_min,i = D_min,0 + i λ_min (Equations (3)), where the slopes λ_max, λ_min represent the mean layer thickness and the intercepts D_max,0, D_min,0 correspond to the position of the extrapolated layer i = 0 (fitted values indicated in the figure). The mean layer thickness obtained from the maxima, λ_max = 1.0 ± 0.3 nm, is significantly larger than the mean layer thickness obtained from the minima, λ_min = 0.64 ± 0.05 nm. This observation can be interpreted as a consequence of the mica compression. According to the DMT model for a given layer, the indentation of the solids is zero at the minimum force (jump-out point) and continuously increases up to the maximum force (jump-in point). When going from layer i = 1 to i = 5, the range of force explored decreases (due to repulsive maxima and adhesive minima that come closer to F = 0), as well as the amplitude of mica compression. The variation of D_max,i with i thus includes a systematic decrease in the mica compression, leading to an overestimation of the mean layer thickness. On the contrary, the variation of D_min,i with i does not include any influence of the mica compression, and provides the true mean layer thickness 0.64 ± 0.05 nm. Another method to determine the mean layer thickness consists in measuring the average jump-in distance, supposing an unchanged mica compression and a fast viscous relaxation during the squeeze-out events.
This provides a consistent value of 0.64 ± 0.01 nm, confirming our interpretation of the effect of the mica compression on the mean layer thickness. Interestingly, this value is smaller than the mean ion pair diameter of 0.79 nm (given by (M/(ρ N_A))^(1/3), with M being the molar mass of the ionic liquid, ρ its bulk density and N_A the Avogadro number [62]), perhaps suggesting a denser packing of ions in confinement than in the bulk. However, our value is also smaller than the previous measurements performed with the same ionic liquid, reporting a mean layer thickness of 0.80 ± 0.04 nm between two mica surfaces with an SFB [71] and 0.79 nm between a mica surface and a Si3N4 tip with an AFM [72]. A possible explanation for this difference is the inherent contribution from viscosity to the force profile, in particular in the vicinity of the jump-in and jump-out instability. For the method using the positions of the minima, viscosity tends to move the point of minimum force towards larger distances, even more so if adhesion is larger; for the method using the jump-in distances, viscosity tends to reduce the jump-in distances. In both cases, viscous effects possibly lead to an underestimation of the mean layer thickness. Previous studies may be less affected by viscosity, as retractions were performed by slow steps in the SFB study [71] (with similar radii of curvature), and the radius of curvature was six orders of magnitude smaller in the AFM study [72] (with a velocity less than an order of magnitude larger). Figure 4c shows the variations of the forces F_max,i, F_min,i with the layer index i. In this log-lin representation, the two curves exhibit a good linearity and are exponentially fitted with the relations |F_max,i| = F_max,0 exp(−i λ_max/ζ_max) and |F_min,i| = F_min,0 exp(−i λ_min/ζ_min) (Equation (4)), where the slopes give access to the ratios λ_max/ζ_max, λ_min/ζ_min of the period of the oscillation to the decay length, and the intercepts correspond to the amplitudes F_max,0, F_min,0 of the extrapolated layer i = 0 (fitted values indicated in the figure). The curves obtained with the maxima and the minima are clearly distinct, as a consequence of the asymmetry of the envelope of the force profile with respect to the horizontal axis. Such asymmetry cannot be due to the van der Waals contribution, which is always attractive for symmetric systems. It cannot be explained by the anomalously long-range electrostatic force that has been observed with concentrated electrolytes [25,26], because exponentially fitting the mean amplitude (F_max,i + F_min,i)/2 gives a decay of ∼1 nm and an amplitude of ∼4 mN, respectively one order of magnitude smaller and two orders of magnitude larger than the long-range electrostatic force reported for this system [18,26]. In fact, the exponentially decaying harmonic oscillation given by Equation (2) can be predicted theoretically in the asymptotic limit of large distances (far-field term), and an additional (non-oscillating) exponential decay has been proposed as a correction at small distances (short-field term, with two additional fitting parameters) [68,73,74]. This second term, which is intrinsic to the liquid for infinitely stiff surfaces, could contribute to the asymmetry of the measured force profile. The significant deformations of the surfaces could also contribute to the asymmetry, in the following manner. According to the DMT model for a given layer, the surfaces are not deformed at the minimum force (jump-out point), and flattened at the maximum force (jump-in point).
Compared to a situation without deformation of the solid surfaces, the squeezing out of the liquid layer intuitively requires a larger force, leading to a maximum that is larger than the minimum in absolute value. When going from layer i = 1 to i = 5, the range of forces explored decreases, as well as the amplitude of flattening, and the points of maximum and minimum forces are more and more symmetric around the horizontal axis.
Figure 4 caption (fragment): ... (Equations (3)). (c) Forces |F_i| measured at the points of maximum and minimum force (in red and blue, respectively), as a function of the layer index i, in log-lin representation. Straight lines are the corresponding exponential fits (Equation (4)).
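As a concrete illustration of the fitting quantities introduced above, the sketch below implements an exponentially decaying harmonic profile with the four parameters named in the text (the exact expression of Equation (2) is not reproduced in this excerpt, so the form used here is an assumption) and evaluates the mean ion pair diameter (M/(ρ N_A))^(1/3) from the bulk properties given in the Materials and Methods; unspecified parameter values are placeholders.

```python
import numpy as np

def F_osc(D, F0, D0, zeta, lam):
    """Assumed exponentially decaying harmonic oscillation with amplitude F0 and
    position D0 of the first layer, decay length zeta and period lam."""
    return F0 * np.exp(-(D - D0) / zeta) * np.cos(2 * np.pi * (D - D0) / lam)

# Mean ion pair diameter from the bulk properties quoted in the text (~0.79 nm).
M = 422.41e-3    # molar mass (kg/mol)
rho = 1.405e3    # density (kg/m^3)
N_A = 6.022e23   # Avogadro number (1/mol)
d_pair = (M / (rho * N_A)) ** (1 / 3)
print(f"mean ion pair diameter ~ {d_pair * 1e9:.2f} nm")

# Illustrative evaluation of the oscillatory profile; F0, D0 and zeta are placeholders,
# lam is the mean layer thickness obtained from the minima (0.64 nm).
D = np.linspace(0.5e-9, 5e-9, 200)
F = F_osc(D, F0=1e-3, D0=0.5e-9, zeta=1e-9, lam=0.64e-9)
```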
An important aspect in interpreting structural force profiles is to identify the composition of the layers. As the period is similar to the mean ion pair diameter, it is usually considered that one squeeze-out event corresponds to the squeezing out of an electroneutral "slab" of one cation layer and one anion layer. In the case of negatively charged surfaces, the first layer (i = 1) is then assumed to be composed of a monolayer of cations (as sketched in the inset of Figure 4a). Direct solid-solid contact is never reached, due to the strong electrostatic attraction between the cations and the negatively charged surfaces. For our experiment, one can ask whether we really reached this single layer of cations within the explored range of loads. The position of the first layer at the point of minimum force (including no indentation of the surfaces) is D_min,1 = 2.2 ± 1.0 nm, a bit larger than the cation sizes (given in Figure 1a). However, the measurement of the absolute distance D depends on many delicate steps (alignment of the optics, calibration of the mica thickness in a dry atmosphere, choice of a particular run to shift the force profiles laterally), and deducing the thickness of the monolayer from the cation sizes requires knowing their conformations, making accurate comparisons difficult. If we extrapolate the exponential fit of F_max,i to the layer i = 1 (see Figure 4c), it predicts that the next squeeze-out event would take place at a force F = 1.3 ± 0.6 mN, while we did not observe any additional jump-in for a force up to F = 17.80 mN (see Figure 3c). Thus, we think that the layer seen at the maximum load is indeed composed of a single layer of cations, and we identify it as i = 1.
In the SFA literature, the structural force profile is usually fitted with an exponentially decaying harmonic oscillation (Equation (2), with four fitting parameters), supposing that there are no surface deformations. In general, such a description seems reasonable because measurements by eye produce more scattered points and/or the retraction branches are not explored. However, in a few studies, it has been pointed out that the oscillation is not rigorously sinusoidal, notably in the first paper reporting a structural force for a simple liquid [75], and even more clearly in a recent study with an extended surface force apparatus [35]. In Figure S3, we have plotted two exponentially decaying harmonic oscillations (Equation (2)), the one in gray corresponding to the parameters F_0 = F_max,0, D_0 = D_max,0, ζ = ζ_max and λ = λ_max, and the one in black corresponding to the parameters F_0 = F_min,0, D_0 = D_min,0, ζ = ζ_min and λ = λ_min. None of these curves fit the measured force profile, and adding an exponential, short-field, term would not improve the situation. Indeed, for an exponentially decaying harmonic oscillation, the stable branches are half convex and half concave, each minimum is located almost at the middle position between the surrounding maxima, and one value of D corresponds to a single value of F. In the experiment, the stable branches are always convex, the minima and maxima are not regularly spaced, and one value of D can correspond to several values of F (we can have D_min,i > D_max,i+1). So, it is clear that an exponentially decaying harmonic function is not appropriate to describe the detailed shape of the measured structural force profile. We propose a heuristic description, assuming on the contrary that the mica compression dominates the liquid compression, i.e., that the change in D within each layer comes only from the indentation of the mica (elastic modulus K). For each layer, we suppose that the mica indentation is given by the DMT model (Equation (…)). Basically, the liquid controls the positions (distance at the point of minimum force) and the strengths (forces at the points of the minimum and maximum distances) of the layers, while the mica controls the shape of the profile within the layers. Excluding the radius of curvature R, which is measured independently, the force profile is described by seven fitting parameters, which is the exact number of parameters required to describe an asymmetric structural force profile with deformable surfaces characterized by a single elastic modulus. To compare with our measurements, we have used the values D_min,0, λ_min coming from the linear fit of D_min,i, the values F_min,0, ζ_min coming from the exponential fit of F_min,i, the values F_max,0, ζ_max coming from the exponential fit of F_max,i, and we have fitted the single remaining parameter K. The fit shown in Figure 4a is very good at low loads, with K = 40 ± 5 GPa being remarkably close to the elastic modulus of bulk mica (value given in Figure 1c). This is because a contact zone of size a ≲ 3 µm < e_mica is probed when the force is close enough to the adhesion minimum, i.e., |F_i − 2πRW_i| ≲ 0.1 mN. The elastic deformations affect only the top mica layers, which can be considered as semi-infinite in these conditions [7,51].
At larger loads, the system enters the regime described in the two previous subsections: the finite size of the mica layers cuts off the range of the elastic deformations, leading to an apparent stiffening of the solids compared to their bulk counterparts. To summarize, in contrast with previous considerations that neglected mica compression [17], we find for our system that the surface deformations have a strong influence on the force profile even at low loads. In particular, we expect this effect to be larger and independent of the mica thickness at low loads, and to be smaller, and further reduced for thinner mica, at large loads.
For our specific ionic liquid/mica system measured with SFB, we have shown that the detailed shape of the structural force profile is strongly affected by the mechanical deformations of the surfaces, with a mica compression that is dominant compared to the ionic liquid compression. A convolution of the surface forces across the liquid and the mechanical response of the confining solids may be present for other solid/liquid systems and force measurement techniques. In a typical AFM experiment (see, for example, [68]), the deflection ∆ of a cantilever is measured as a function of the approach position z imposed on the base of this cantilever. The force is deduced by multiplying the deflection by the calibrated spring constant, and the distance between the tip and the substrate is calculated as the difference between the cantilever deflection and the linear fit of the ∆(z) relationship when the surfaces are in contact. In fact, this standard procedure supposes ideal surfaces that are infinitely stiff in the fitting region. In practice, real solids are compliant, and this method leads to subtracting the indentation of the surfaces, linearized in the fitting region. As the relationship between the force and the indentation is generally not linear, the calculated distance is not exactly equal to the distance between the tip and the substrate. In addition, the amplitude of the force depends on the geometry of the surfaces and is therefore affected by their deformation. For these two reasons, the mechanical deformations of the confining surfaces are expected to influence the measured force profile also in the case of AFM. Irrespective of the particular system or technique used, it is very important to know the degree of convolution, i.e., whether the solid compression is negligible or dominant compared to the liquid compression, in order to interpret properly the structural force profile regarding the compressibility of the liquid layers [33,70,[75][76][77][78][79]. Indeed, this question of the "elasticity" of a thin liquid film is connected to a strong debate in the community, to understand how a liquid can exhibit a solid-like behaviour in nanoconfinement [8,[80][81][82][83]. Our study suggests that the finite slope of the structural force profile in each layer is not necessarily due to a change in the packing fraction or structure of the molecules in the structured film, but can be attributed, in some cases, and at least in part, to the indentation of the confining solids. For these reasons, we propose a general criterion to distinguish the two opposite regimes of convolution for a generic system exhibiting a structural force profile. We consider the mechanical response of a liquid confined between infinitely stiff solid surfaces (taken as an exponentially decaying harmonic oscillation F_osci(D)), the mechanical response of two deformable solid surfaces in direct contact (assumed to be a simple Hertz force F_K(D)), and the two responses being measured with an external spring (imposing a restoring spring force F_k(D)). The pure liquid response is measured without spring instability if the gradient of the oscillating force is smaller than the gradient of the restoring spring force: dF_osci/dD < dF_k/dD.
Similarly, the convoluted response of the liquid and the solids is only weakly affected by the solids if the gradient of the oscillating force is much smaller than the gradient of the Hertz force: dF_osci/dD ≪ dF_K/dD. For convenience, we define two dimensionless parameters N_k and N_K as the ratios of these gradients, and we estimate them with simple scalings. The different parameters have analogous roles, even if the exact exponents and numerical factors are not the same. There is no spring instability (resp. small influence of surface deformations) when N_k < 1 (resp. N_K ≪ 1), which is fulfilled for systems with small adhesion W_0 and large period λ, measured with a spring of large stiffness k (resp. with solids of large elastic modulus K) and, less intuitively, with surfaces of small radii of curvature R. In the following, we test these criteria with studies in the literature that use different systems and instruments; a rough numerical check of these estimates is sketched after the list below.
• For this SFB study with an ionic liquid (R ∼ 1 cm, k ∼ 3000 N/m, K ∼ 50 GPa, W_0 ∼ 1 mN/m, λ ∼ 0.6 nm), we obtain N_k ∼ 2·10^2 and N_K ∼ 4, in agreement with the fact that we have spring instabilities and a strong effect of the surface deformations on the structural force profile.
• For previous AFM studies with ionic liquids [63,68,72] (R ∼ 20 nm, k ∼ 0.1 N/m, K ∼ 50 GPa, W_0 ∼ 50 mN/m, λ ∼ 0.8 nm), we obtain N_k ∼ 5·10^2 and N_K ∼ 2·10^−1, in agreement with the fact that they have spring instabilities but a small effect of the surface deformations on the structural force profile.
• For a previous SFA study with liquid crystals [84] (R ∼ 0.3 cm, k ∼ 2000 N/m, K ∼ 50 GPa, W_0 ∼ 0.1 mN/m, λ ∼ 6 nm), we obtain N_k ∼ 1 and N_K ∼ 6·10^−3, in agreement with the fact that they have no spring instabilities and a small effect of the surface deformations on the structural force profile.
• For a previous AFM study with polyelectrolytes [73] (R ∼ 2 µm, k ∼ 0.3 N/m, K ∼ 50 GPa, W_0 ∼ 0.02 mN/m, λ ∼ 50 nm), we obtain N_k ∼ 10^−1 and N_K ∼ 10^−6. Additionally, for a previous AFM study with colloidal suspensions [74] (R ∼ 7 µm, k ∼ 0.03 N/m, K ∼ 50 GPa, W_0 ∼ 0.005 mN/m, λ ∼ 70 nm), we obtain N_k ∼ 7·10^−1 and N_K ∼ 4·10^−7. This is in agreement with the fact that both studies have no spring instabilities and a small effect of the surface deformations on the structural force profile.
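As a rough, order-of-magnitude check of the values quoted in the list above, the sketch below evaluates the two dimensionless parameters with assumed scalings: the oscillating-force gradient taken as ∼4π²RW₀/λ, the spring gradient as k, and a Hertz-like contact gradient as ∼2K√(Rλ). These prefactors are assumptions of this sketch, not the exact expressions of the paper, but they reproduce the quoted orders of magnitude.

```python
import math

def n_k(R, W0, lam, k):
    # Oscillating-force gradient (~4*pi^2*R*W0/lam) over the spring gradient k.
    # The prefactor is an assumption of this sketch.
    return 4.0 * math.pi**2 * R * W0 / (lam * k)

def n_K(R, W0, lam, K):
    # Oscillating-force gradient over a Hertz-like contact gradient ~2*K*sqrt(R*lam).
    # Again, the prefactor is assumed for illustration.
    return 4.0 * math.pi**2 * R * W0 / (lam * 2.0 * K * math.sqrt(R * lam))

systems = {
    # name: (R [m], k [N/m], K [Pa], W0 [N/m], lam [m])
    "SFB, ionic liquid":         (1e-2,   3000.0, 50e9, 1e-3,     0.6e-9),
    "AFM, ionic liquids":        (20e-9,  0.1,    50e9, 50e-3,    0.8e-9),
    "SFA, liquid crystals":      (0.3e-2, 2000.0, 50e9, 0.1e-3,   6e-9),
    "AFM, polyelectrolytes":     (2e-6,   0.3,    50e9, 0.02e-3,  50e-9),
    "AFM, colloidal suspension": (7e-6,   0.03,   50e9, 0.005e-3, 70e-9),
}

for name, (R, k, K, W0, lam) in systems.items():
    print(f"{name:28s} N_k ~ {n_k(R, W0, lam, k):8.1e}  N_K ~ {n_K(R, W0, lam, K):8.1e}")
```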
If the surface deformations have a small effect on the structural force profile (N_K ≪ 1, i.e., solid compression is negligible compared to liquid compression), a fit with a semi-empirical exponentially decaying harmonic oscillation (Equation (2)) or a variation from it can be attempted. On the contrary, if the surface deformations have a strong effect on the structural force profile (N_K ≫ 1, i.e., solid compression is dominant compared to liquid compression), a fit with our heuristic formulation (Equation (5)) or a variation from it is preferable. For an intermediate situation where the surface deformations have a moderate effect on the structural force profile (N_K ∼ 1, i.e., solid compression of the same order as liquid compression), more sophisticated models are required, such as the energy minimization approach proposed by Hoth et al. [68].
Conclusions
By simultaneously measuring interactions and characterizing, in situ, the geometry of the contact, we have shown that the mechanical deformations of the confining solids can have a strong influence on surface force measurements. Although this paper focuses on the analysis of specific SFB experiments in a dry atmosphere and with an ionic liquid, we think some of the conclusions listed below may be of general interest for the community of surface force measurements.
• SFA experiments are not always in the JKR regime but can be in the DMT regime, typically for situations of moderate adhesion over a range of a few nanometers, as we have seen in the case of the ionic liquid. Using the correct model of contact mechanics is crucial, notably for quantitative investigations of adhesion or friction. The two regimes of contact are usually distinguished by calculating the value of the Maugis parameter from an estimate of the range of the attractive forces; it is in fact more accurate to look at the value of the contact radius before jump-out.
• In classical SFA experiments using mica sheets glued on glass lenses, the mica does not only bend but can also experience a compression, which we observed for relatively thick mica.
• This compression has to be taken into account for a proper calibration of the undeformed mica thickness in a dry atmosphere; for this purpose, we presented a method based on the fitting of the relation between the contact radius and the force with the JKR model. The usual procedure, which consists of taking the jump-in point as a reference, can lead to an underestimation of the mica thickness and an equivalent outward shift of the force profile measured after injecting the liquid. We found that this effect amounts to ∼1 nm for a ∼7 µm-thick mica, and we expect it to decrease with the mica thickness.
• For any system showing a structural force profile with SFA or AFM, the exponentially decaying harmonic oscillation due to local variations of liquid density may be convoluted with the mechanical response of the confining solids. A correct interpretation of the detailed shape of the structural force profile is necessary to understand the behaviour of liquids in nanoconfinement.
• We have proposed a simple scaling criterion to estimate the degree of convolution. Typically, compression in the solids is dominant over compression in the liquid for simple liquids (large energies, small length-scales) and easily deformable solids (small elastic modulus, large radius of curvature). For SFA experiments with mica sheets glued on glass lenses, the influence of mica compression is interpreted to be larger and independent of the mica thickness at low loads, and to be smaller, and further reduced for thinner mica, at large loads. This effect is expected to be even more important at all loads when mica is replaced by a softer layer (such as EPON glue) [85][86][87] or at high loads if the distance measurement includes the indentation of the whole solid bodies (not only the top layers) [70].
• When the solid compression is dominant compared to liquid compression, a fit of the structural force profile with an exponentially decaying harmonic function is not appropriate. We have illustrated that it can be useful to consider heuristic formulations, which are based on extensions of contact mechanics models to situations where the solid surfaces confine a structured liquid film.
Conflicts of Interest:
The author declares no conflict of interest.
Abbreviations
| 2021-09-28T01:10:09.679Z | 2021-07-01T00:00:00.000 | {
"year": 2021,
"sha1": "133c5ab95a00287dfab698b9d12e94dbe275b813",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-4442/9/7/69/pdf?version=1626323362",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "ad748ec4e3764b24dd7819f9fdac8adec3c96790",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
248677792 | pes2o/s2orc | v3-fos-license | The Rapid Writing Strategy in Writing Classroom during the Covid-19 Pandemic
: Writing skill is the ability to express an idea in written text according to the genre of the text. The rapid writing strategy is an appropriate strategy for producing a good paragraph, especially in descriptive text, because most of the participants come from rural areas and have different writing achievements. The aim of this research is to investigate how the procedure of the rapid writing strategy behaves in the teaching and learning of writing skill. The researcher used a descriptive qualitative method with a case study design, observing the field via the WhatsApp platform. The instruments of this research were an observation checklist and note-taking, used to obtain complete data from both personal and group WhatsApp. The teacher's and students' activities inside and outside the class were the primary source of data. The results show that the rapid writing strategy is a fast writing practice in which students write without paying attention to what has already been written, so that writing is completed quickly and completely. The rapid writing strategy helps students explore writing without worrying about grammatical accuracy, content, and vocabulary while doing tasks in descriptive text. In addition, this strategy makes teachers more motivated and interested, and makes the teaching-learning process of writing skill more fun.
Introduction
Writing is the ability to express ideas, thoughts, and feelings. In terms of writing skills, there are some criteria for good writing. Good writing is how students can select topics, describe, generate and explore their ideas. Accurate and clear explanations are considered to be the main criteria for good writing, while the actual content of the communication, that is the meaning, is left for further discussion. This means that in addition to select topics, describe, generating, and exploring ideas, precise and clear explanations are the criteria for good writing (Anamaryanti, Syarifi, 2014;Susilawati, 2017). In addition, it requires thinking skills, logical skills, the use of punctuation, and the ability to combine words into meaningful sentences. Writing skills also require fast reading to understand the content of the ideas you are writing, so writing skills involve many aspects of skills (Komariyah, 2013;Nurgiyantoro et al., 2020;Ramadhani, 2013) Beside, writing skills focus on the learning process in classroom teaching and the way to express ideas in written form with letters, words, and media. Writing skills are one of the language skills that have to master by students at junior high school. Writing is not a natural skill, because it cannot be learned automatically and easily. According to Barret "to write well, you have to think a lot, and you have to start thinking about this topic before you start writing". When learning to write, students should practice enough. These practices can inspire students in writing (Pitrianti, 2017). Teaching writing is a method that allows students to write in the target language. Brown believes that in teaching writing, teachers need to learn more about building fluency, not just accuracy, how to use actual text and classroom context, focusing on the communication purpose of language, and to use the internal learning of the student such motivation, teachers not only pay attention to the result but also focus on the process of writing (Putri, 2017). Teaching writing can be expressed as an activity or process to help someone gain knowledge and learn to do something. And that includes the transmission of knowledge, skills, and attitudes. Transmission is a process of systematizing, conditions, task materials, and opportunities to help students acquire and construct new knowledge which is done by organizing and managing one's potential resources (Supiani, 2017). It means, teaching writing is pedagogical steps to write in the target language. In teaching writing activity, the teacher needs media to assist his/her in delivering the information and knowledge to students. So teaching is showing or helping someone to learn how to do something, giving instructions, guiding in the study of something, providing with knowledge, causing to know or understand. This study will use a rapid writing strategy. This strategy is expected to help the students stimulate their ideas and make it easier for them to write, especially, descriptive texts (Kusumawaty, 2017).
Descriptive text is one of types of social and factual genre of text to describe a particular person, place, or thing. It can be a process of creating visual and impressive images and sensory through words. More often, Woodson said description is a part of another piece of writing and is used to inform an audience about how something or someone looked or to persuade an audience to see something from the writer's point of view (Siregar & Dongoran, 2020). Descriptive Text is a type of text in English that clearly describes the characteristics inherent in something, be it humans, animals, plants, or inanimate objects. The purpose of this text is to provide clear information about the object described to the reader (Hidayah, 2015). Its context is the description of a particular thing, animal, person, or others, for instance: our pets or a person we know well. It differs from Report which describes things, animals, persons, or others in general. The Social Function of descriptive text is to describe a particular person, place, or thing (Apriliansyah, 2016;Mursyid, 1992).
In contrast, writing is one of the skills in learning English which is very complex and difficult to teach. There is no doubt that writing is the most difficult skill for students to master. Writing skills requires not only mastery of grammar and rhetorical devices but also conceptual and judgmental elements. So it means that writing understands not only the structure of the language but also vocabulary enrichment and good spelling (Wicaksono, 2016).
In addition, teaching writing is considered difficult because it takes time to manage the writing process. Because of this difficulty, teachers must help students generate ideas and provide text writing exercises. Teachers find that students' writing difficulties mostly appear in the early stages of their writing. In this case, the role of teachers in helping students is very important. In teaching writing activities, teachers need a medium to help them deliver information and knowledge to students. To teach descriptive text writing, teachers need a strategy such as rapid writing that can make students feel interested and leave a good impression, so that students always remember what they get from the teacher's explanation. Based on the curriculum, there are several texts to be learnt by students at junior high school, including descriptive text, narrative text, procedure text, and recount text. Each text has a different social function, schematic structure, and language features (Kusumawaty, 2017).
Rapid writing strategy has a positive impact on students writing skills, starting from writing assignment, they access their previous knowledge, engage with content, review and reflect, and setting directions for writing letters, essays, and more subject-based assignments. In addition, it can generate ideas for writing on any topic, in any subject area, students start writing and eventually produce more writing and students begin to organize ideas (Ontarion Music Educators Association, 2008). Further, it also helps students explore grammatical accuracy aspect and also encourages critical thinking and communication skill by sharing ideas with their groups. In contrast, the students will be bored and the class will be monotonous in learning. (Faizah, 2015;Riyanti, 2018;Silva & Matsuda, 2012) Applying the rapid writing strategy when teaching writing descriptive text using the above procedure, students can escalate their ideas in their own words without hesitation and fear of making mistakes so that students are more interested in learning to write using the rapid writing strategy. The rapid writing procedure makes it easy for students to work, especially in writing descriptive text. In addition, students are more active in the teaching and learning process because rapid writing is a strategy that can empower fluency and spelling (Ahmad, 2015;Oktavianingrum, 2019).
The researcher chooses participants as an object to this research because they reluctant to express their ideas in written text, make students enjoy each step of learning process in a class and interested in current topic.
Method Research Design
The research design of this study used a descriptive qualitative case study. It shows especially how the researcher gets information-related research questions. The researcher in the study observed the teacher and the students' activity in the classroom and reported it from the beginning until the end of activities.
Research setting and subject
This study taken place in one of junior high schools in Arosbaya Besides, it conducted in one class of second-grade that was chosen by the English teacher of the school. The participants that have been participate in this study were 32 students. The participants had various level of writing achievement and motivation because they were from rural and urban area. The data that used in this study was the English teacher and students.
The instrument and procedure of the Research
Observation was used to determine the teacher's strategy in teaching descriptive text. This observation had two subjects, namely the students and the teacher. The researcher made the observation checklist in the form of "YES" or "NO" answers and field notes. The researcher took notes on all of the important activities which happened during the class, including: students choose a topic; write as fast as possible; no corrections or erasing allowed; write until the teacher says "STOP" and do not stop before; do not lift your pen or pencil from the paper or remove your hands from the computer or phone; if students lose an idea, jump-start by writing the topic title and extending it to a sentence; when the teacher says "STOP," count and record the number of words you have written; and, lastly, be prepared to discuss your topic and use the writing you have done to start you off (Oktavianingrum, 2019).
To get the data the researcher observed the teaching learning process in classroom with use the observation checklist. Researcher found the real process of teacher and students' behavior of the implementation of rapid writing strategy in teaching writing descriptive text.
Data Analysis
In analysing the data the researcher paid attention on the data that have obtained from observation.. The researcher used data analysis model from that data analysed in qualitative research proceed hand-in-hand with other parts of the qualitative study, namely, the data collection and the write-up of findings. Then, reducing the data, not all the data result of the research is important. So that the important information had been taken in this research, and unimportant one had rejected. Next, presenting the data, it means described in descriptive display to arrange the appropriate information and draw the conclusion. Last, drawing conclusion, continuously interpreted throughout the references of the study (Royadi et al., 2019).
Findings and Discussion Findings
Based on the data gained there were some steps of rapid writing strategy in writing class included choosing a topic, writing descriptive text as fast as possible, No corrections or revision as long as writing process, writing before saying stop, keeping pens, computer, or phone in the hands, if losing the idea, jump start to the other topic or sentences, counting and recording the number of written words after the teacher stopped to write, and discussing the topic with friends before submitting.
During teaching descriptive text using rapid writing strategy, the teacher used the following procedures: First, students choose a topic, students chosen a topic before the teacher have explained the material, one of the topic that the students chosen is about pets. It also made students know about the important points in the English Writing process, the use of rapid writing and eight rapid writing procedures) after the students understood today's material the teacher asked the students to make an example of a descriptive text. The teacher asked students to choose some topics they wanted to choose such as when the teacher gave topics about pets, football, mom, and idol. Then, some students were more interested in choosing the topic of Pets. To make it easy for students to write in descriptive text. This is evidenced by the presence of student data such as some students choosing the topic of pets because this topic is common and in their environment, making it was easier for them to determine the topic to write a descriptive text. Second, write as fast as possible, the teacher asks students to write as much as they could without thinking about mistakes such as using simple presents, spelling, and others. Third, no corrections or erasing allowed, the teacher warns students not to delete words or sentences they have written. Students continuously wrote what had remembered until the teacher said "stop". And fourth, write until the teacher says "STOP"do not stop before the allotted time limit. For example, when the teacher gives time for students to write descriptive text, students must write without erasing what they have written, until the time limit has been determined. This aims to find out how many errors in the words they have compiled.
Fifth, don't lift your pen/pencil from the paper or remove your hands from the computer or phone. The teacher asks students to write as much as they can and focus on their writing, don't take the pen off the paper before time runs out. The results of students' writing on the topic Exo; Exo is a South Korean boy band consisting of 12 members. They are a boy band that I like. I really like Exo because the members are very handsome, tall, and good at singing, acting, and many others. They have released many songs and dramas.
One of the dramas that starred in Moon Lovers: Scarlet Heart Ryeo, Rich Man, Poor Woman, It's Okay That's Love, Dokgo Rewind, and Andante.
Sixth, if you lose the idea, jumpstart your brain by writing the topic title and extending it to a sentence. When students start to run out of ideas, try writing the topic title and developing it into a sentence. Seventh, when the teacher says "STOP," students count and record the number of written words. After the time is up, the teacher asks the students to count how many words they have written. And notice below an example of the descriptive text they have worked on.
Last, be prepared to discuss your topic: use the writing you have done to start you off. The teacher asks students to submit their assignments via Whatsapp group after that the teacher chooses one of the sample texts from students to be discussed. One of the results of student assignments such as Suga, He is my idol.
His name is Min Yoon-gi and better known as Suga, He was born in Hangul, March 9, 1993, He is a member of the Korean boy band BTS. He has the whitest skin among the other members. He can do anything, sing, rap, dance and many others. I really like him because he has a very handsome, face.
The researcher provided a file that was sent via personal WhatsApp. Before the teacher filled out the observation checklist, the researcher explained its contents and how to fill it in. The teacher only chose between yes or no using the checklist. After the teacher filled in the observation checklist, the teacher sent the results back to the researcher via personal WhatsApp. For the eight procedures observed in the checklist, the teacher answered yes to all of them. So the procedure for applying rapid writing above is very helpful for teachers in teaching descriptive text. By using the above procedure, it is easier for students to write descriptive text using a rapid writing strategy. The teacher frees them to choose any topic they want to describe so that students can do their writing. The researcher also provided data in the form of files sent via the WhatsApp group. Before the students filled out the observation checklist, the researcher explained its contents and how to fill out the procedure in it. Students only chose between yes or no using the checklist and were required to fill in the name and class fields provided above the observation checklist. After they filled in the observation checklist, they sent the results back to the researcher via personal WhatsApp.
The researcher also used an observation checklist for all students to find out the process. An observation checklist is given to students to choose "Yes" or "No". Based on the checklist that has been filled in, the researcher can describe the student responses which are described on the following figure: .
Discussion
Based on observations, a school has been used by teachers and students both inside and outside the classroom. In the online learning process, the teacher uses via Whatsapp group. In using the rapid writing strategy, the teacher asks students to make a descriptive text addressed to their classmates.
The implementation of rapid writing strategy, the teacher uses the following procedures: First; students choose a topic, students need at least five until ten minutes of each topic without stopping. It is a simple step to brainstorm students' interest, idea and experiences. The selected topic of a pet shows that the students' readiness appears in determining a pet, it also they have a good motivation to write because only one percent of students does not choose the topic. Second; write as fast as possible, Students are asked to focus on a particular topic or idea and write everything about their topic chooses. Then help with rapid writing strategies for students if they are stuck and don't know what is written. In other words, hurry writing makes it easier for students to write because by writing fast students can write down everything they know and understand about the topic subjects. Mostly, students' uses a simple sentences with various form such as The color is red. Red is the color of Liverpool. Liverpool is my favorite football team. It means that students perform the grammar usage well, even though they have errors in spelling, because three sentences only consist of ten words (Ahmad, 2015;Faizah, 2015;Oktavianingrum, 2019;Silva & Matsuda, 2012).
Third: no corrections or erasing allowed. With this strategy, students can escalate their ideas in their own words without hesitation and fear of making mistakes, so that students are more interested in learning to write using the rapid writing strategy. This step makes students work easily, especially in writing descriptive text. The students' problems are in spelling the words favorite and color, for which the correct forms (in British spelling) are favourite and colour, and a grammar error such as They are a boy, which should use is, not are. They focus on paragraph organization; writing a paragraph fluently is one way to set aside one's own worries while writing, because the students' worries stand at twenty percent. In addition, students are more active in the teaching and learning process because the step can encourage fluency in spelling written words. It also enables students to memorize all the steps in writing activities. The data are also in line with the view that some strategies are too complicated and require too many steps to remember and implement, but this is an applicable step in the writing class. It can be effective if the teacher simplifies each part of the strategy so that the students find it easier to remember and use (Ahmad, 2015; Oktavianingrum, 2019). Fourth: write until the teacher says "STOP" and do not stop before, writing continuously even if what is written is "I don't know what I must write!". When the timer is running, students avoid brainstorming, sentence organization, grammar checking, retrying, and crossing out or deleting anything, and they continue until time runs out and they are ready to stop. Although the students have limited time to write, they succeed in avoiding many mistakes or errors in either vocabulary or grammar. Their mistakes are only two words, and they use good grammar. Fifth: don't lift your pen/pencil from the paper or remove your hands from the computer or phone. The students' pens ought to stay in their hands as long as they are writing, sentence by sentence, to build a good paragraph. They found some problems during the online learning process, such as low battery, unsuitable networks, and incoming calls during the writing process; this caused twenty percent of students to put down the phone and not perform the step well enough. Sixth: if you lose the idea, jump-start your brain by writing the topic title and extending it to a sentence. Students write other sentences that they consider less daunting than the sentences that are difficult to describe. The students' paragraphs mostly use the pattern of S + V + O, or noun, verb, and articles (a, an and the). They rarely use complex sentences; most of them write compound sentences with simple conjunctions such as and, as in They have released many songs and dramas. The highest number in the observation results is for this step, which reaches thirty percent. Seventh: when the teacher says "STOP," count and record the number of words you have written. Students calculate the number of words they have written and store this information on a graph in their writer's notebook. On average, each student's paragraph contains five to seven sentences with about ten different everyday words. This average is excellent, because they are able to write a paragraph in five minutes. Figure 1 shows that thirty percent of students pass this step. Last: be prepared to discuss your topic; use the paragraph you have done to start you off. The students' performance in discussing a topic is not very thorough, because they only ask for others' opinions and the title of each student's paragraph. Some of them only check the spelling, for example becouse, hendsome, and amang become because, handsome and among.
Through the eight procedures of rapid writing, the teacher helps students generate ideas and provides exercises in compiling texts. The teacher realized that students' difficulties in writing mostly occurred at the beginning, when they started writing. In this case, the teacher's role in helping students is very important. In learning to write, the teacher needs media to assist in conveying information and knowledge to students. Learning to write descriptive text requires something that can make students feel happy and leave a good impression, so that students will always remember what they get from the teacher's explanation. This is in line with Brown (2017): teaching writing supervises and facilitates learners to learn, and sets conditions in learning to produce a text. He also stated that teaching is motivating learners to act, giving instructions, guiding, giving knowledge, and making students know or understand. To help students escalate their writing, the teacher uses a rapid writing strategy to make it easier for students to write descriptive text and to help them write fast (Ahmad, 2015; Oktavianingrum, 2019).
Conclusion
From the results, the researcher can conclude that applying the rapid writing strategy when teaching writing descriptive text using the above procedure, it really helps students to start writing and ultimately produce more writing. With this strategy, students can help their ideas in their own words without hesitation and fear of making mistakes so that students are more interested in learning to write using the rapid writing strategy. The procedure of rapid writing makes it easier for students to do assignments, especially in writing descriptive text. Proper teaching procedures have a positive effect on students' writing ability such as "before saying stop", students only focus on completing paragraph well in a little moment. By using the rapid writing strategy students can explore their ideas in their own words without hesitation and fear of making mistakes such as simple present tense, thus making students more interested in learning to write by using the rapid writing strategy and by using this strategy learning becomes more fun so that it makes the learning process different and more interesting.
The rapid writing strategy is a fast writing practice in which students write without paying attention to what has already been written, so that writing is completed quickly and completely. The use of a rapid writing strategy helps students explore writing more, without worrying about grammatical accuracy, content, and vocabulary, while doing tasks in descriptive text. In addition, this strategy makes teachers more motivated and interested, and makes the teaching-learning process of writing skill more fun.
| 2022-05-11T15:21:31.666Z | 2022-04-30T00:00:00.000 | {
"year": 2022,
"sha1": "3d261314cdfc51ca03d9d394301941afbfe9c968",
"oa_license": "CCBYNC",
"oa_url": "http://journal.um-surabaya.ac.id/index.php/Tell/article/download/9887/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "455b6f3c71f0cc90bbcf525ff721a3f310fbd963",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
16853609 | pes2o/s2orc | v3-fos-license | Face Verification Using Modeled Eigenspectrum
Face verification is different from the face identification task. Some traditional subspace methods that work well in face identification may suffer from a severe over-fitting problem when applied to the verification task. Conventional discriminative methods such as linear discriminant analysis (LDA) and its variants are highly sensitive to the training data, which hinders them from achieving high verification accuracy. This work proposes an eigenspectrum model that alleviates the over-fitting problems by replacing the unreliable small and zero eigenvalues with the model values. It also enables the discriminant evaluation in the whole space to extract the low dimensional features effectively. The proposed approach is evaluated and compared with 8 popular subspace based methods for a face verification task. Experimental results on three face databases show that the proposed method consistently outperforms others.
INTRODUCTION
In Biometrics, face recognition has two main applications, one is verification and the other is identification.Face verification is a task to determine whether a person claiming a given identity is the true claimant or an imposter.This can be done by computing the similarity between the probe sample and the samples of the claimed person in the gallery.The final decision is made based on a threshold.For face identification, however, the system has to determine the identity of a person by computing similarities between the probe and all the gallery samples in the database.The identity is determined based on the highest similarity score.Over the last decade numerous algorithms using linear as well as nonlinear techniques for face identification have been proposed and considerable good performance has been achieved [1][2][3].However, these algorithms focus on face identification and very few of them are evaluated for face verification task.Although there are some attempts to distinguish face verification from face identification [4,5] many works ignore the intrinsic difference between face verification and face identification tasks.
In general, similar training algorithms can be used for both face verification and identification.However, there are some intrinsic differences between the two cases.While an identification system could be limited to identify an input as one of its known users, a verification system must be able to reject the unknown imposters.
Consequently, while the training set could include some sample images of all test subjects for certain identification applications, no sample of test imposters should be included in the training set for a verification system.Hence, a learning algorithm may suffer more severe over-fitting problem (or poor generalization) for face verification task than for the face identification.Furthermore, decisions of a verification system depend on the operating points or thresholds while those of an identification system depend on the rank of the similarity.This leads to quite different evaluation methods for the verification system from the identification system.Thus, a method that has a high accuracy for the identification task may not necessarily achieve a high accuracy for the verification task.
Both face verification and identification are challenging as the human faces may undergo significant variations in appearance due to different facial expressions, illumination changes and different pose conditions.Subspace based methods such as the principal component analysis (PCA) [6], Bayesian maximum likelihood (BML) [7][8][9] and linear discriminant analysis (LDA) [10,11] have shown promising results for the face identification problem.This work explores some outstanding challenging problems of existing subspace based methods caused by the high dimensionality of the face image and the finite number of training samples in practice and proposes a new approach to alleviate these problems for the face verification task.
PCA maximizes the variances of the extracted features and hence minimizes the reconstruction error and removes noise residing in the discarded dimensions.The best representation of data may not perform well from classification point of view because the total scatter matrix is contributed by both the within-and between-class variations.To differentiate face images of one person from those of the others, the discrimination of the features is the most important.LDA is an efficient way to extract the discriminative features as it handles the within-and between-class variations separately.However, this method needs the inverse of the within-class scatter matrix.This is problematic in many practical face recognition tasks because the dimensionality of the face image is usually very high compared to the number of available training samples and hence the within-class scatter matrix is often singular.
Numerous methods have been proposed to solve this problem in the last decade.A popular approach called Fisherface (FLDA) [12] applies PCA first for dimensionality reduction so as to make the within-class scatter matrix nonsingular before the application of LDA.However, applying PCA for dimensionality reduction may lose discriminative information [13][14][15].Direct-LDA (DLDA) method [16,17] removes null space of the between-class scatter matrix and extracts the eigenvectors corresponding to the smallest eigenvalues of the within-class scatter matrix.It is an open question of how to scale the extracted features as the smallest eigenvalues are very sensitive to noise.The null space approach (NDA) [15,18] assumes that the null space contains the most discriminative information.Interestingly, this appears to be contradicting the popular FLDA that only uses the principal space and discards the null space.A common problem of all these approaches is that they all lose some discriminative information, either in the principal or in the null space.
In fact, the discriminative information resides in the both subspaces.To use both the subspaces, dual-space approach (DSL) [14] extracts features separately from the principal and its complementary subspaces of the within-class scatter matrix.It scales features in the complementary subspace by the average eigenvalue of the within-class scatter matrix over this subspace.As eigenvalues in this subspace are not well estimated [14], their average may not be a good scaling factor relative to those in the principal subspace.Features extracted from the two complementary subspaces are properly fused by using summed normalized-distance [19].Open questions of these two approaches are how to divide the space into the principal and the complementary subspaces and how to apportion a given number of features to the two subspaces.Furthermore, as the discriminative information resides in the both subspaces, it is inefficient and only suboptimal to extract features separately from the two subspaces.
The above approaches focus on the problem of singularity of the within-class scatter matrix.In fact, the instability and noise disturbance of the small eigenvalues cause great problems when the inverse of the matrix is applied such as in the Mahalanobis distance, in the BML estimation and in the whitening process of various LDA approaches.Problems of the noise disturbance were addressed in [20] and a unified framework of subspace methods (UFS) was proposed.The good recognition performance of this framework shown in [20] verifies the importance of the noise suppression.However, this approach applies three stages of subspace decompositions sequentially on the face training data and the dimensionality reduction occurs at the very first stage.As addressed in the literature [13][14][15], applying PCA for dimensionality reduction may lose discriminative information.Another open question of UFS is how to choose the number of principal dimensions for the first two stages of subspace decompositions before selecting the final number of features in the third stage.The experimental results in [20] show that the recognition performance is sensitive to these choices at different stages.
In this work, we propose a three subspace based eigenspectrum decomposition methodology which uses two control points to differentiate the reliable, unreliable and zero eigenvalues.An eigenspectrum modeling procedure is proposed that enables us to perform discriminant evaluation in the whole space.This eigenspace decomposition is not used to limit the discriminant evaluation in one subspace but to enable the evaluation in the whole space.The extracted feature hence contains both the reconstructive and the discriminative information of training samples.The replacement of unreliable small and zero eigenvalues by the modeled values reduces the sensitivity of discriminative methods to the number of training samples, high dimensionality of the face images and noises in the data.It provides better generalization or less over-fitting as compared to the existing methods for face verification tasks.
In addition, many subspace methods are evaluated only for face identification task.Their performances for face verification are unknown in the literature.This work experimentally evaluates eight popular subspace based approaches for the face verification task and compares them with the proposed method.Experimental results on three face databases demonstrate that the proposed method consistently outperforms others.In the following section, we present the problems of feature scaling, the subspace decomposition and eigenspectrum modeling.Following the above section, we discuss the eigenfeature scaling and extraction in the whole space using the proposed eigenmodel.Experimental results and discussions that compare our method with others are presented before drawing conclusions.
Problems in Feature Scaling and Extraction
Given a set of properly normalized h-by-w face images, we can form a training set of column vectors {X_ij}, X_ij ∈ R^n with n = h × w, called image vectors, by ordering the pixel elements of image j of person i. Let the training set contain p persons and q_i sample images for person i. The number of total training samples is l = Σ_{i=1}^{p} q_i. Letting c_i be the prior probability of person i, the within-class scatter matrix is defined by

S_w = Σ_{i=1}^{p} c_i (1/q_i) Σ_{j=1}^{q_i} (X_ij − X̄_i)(X_ij − X̄_i)^T,   (1)

where X̄_i = (1/q_i) Σ_{j=1}^{q_i} X_ij. The between-class scatter matrix S_b and the total (mixture) scatter matrix S_t are defined analogously from the class means, the global mean and the prior probabilities, with S_t = S_w + S_b. The eigenvalue problem (4) decomposes these matrices into eigenvectors and eigenvalues, and the eigenfeatures are scaled by the inverses of the eigenvalues (5). This implies that if any one of the eigenvalues in (4) of these matrices is zero or close to zero, then the corresponding eigenvector w_i gets an infinite or semi-infinite weighting factor. If we discard the eigenvectors corresponding to the zero eigenvalues (that appear beyond the rank of S_w), these eigenvectors are equivalently multiplied by zero. This can be viewed as an n-dimensional pattern vector X_ij being first represented by an n-dimensional eigenfeature vector Y_ij = W^T X_ij, and then multiplied by a weighting function as shown in Fig. (1), where r_w is the rank of S_w.
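A minimal sketch of how the scatter matrices above can be computed for vectorized images is given below; the function name and the default choice of priors from sample counts are illustrative assumptions.

```python
import numpy as np

def scatter_matrices(samples_by_class, priors=None):
    """Within-, between- and total scatter matrices for column-vectorized images.
    samples_by_class: list of arrays, each of shape (q_i, n)."""
    n = samples_by_class[0].shape[1]
    l = sum(X.shape[0] for X in samples_by_class)
    if priors is None:                       # default: priors from sample counts
        priors = [X.shape[0] / l for X in samples_by_class]

    class_means = [X.mean(axis=0) for X in samples_by_class]
    global_mean = sum(c * m for c, m in zip(priors, class_means))

    Sw = np.zeros((n, n))
    Sb = np.zeros((n, n))
    for X, c, m in zip(samples_by_class, priors, class_means):
        centered = X - m
        Sw += c * (centered.T @ centered) / X.shape[0]   # per-class covariance, prior-weighted
        d = (m - global_mean)[:, None]
        Sb += c * (d @ d.T)
    St = Sw + Sb
    return Sw, Sb, St

# toy usage: 3 "persons", 4 images each, 100-pixel images
rng = np.random.default_rng(0)
data = [rng.normal(loc=i, size=(4, 100)) for i in range(3)]
Sw, Sb, St = scatter_matrices(data)
print(Sw.shape, np.linalg.matrix_rank(Sw))   # rank of S_w is at most l - p = 9
```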
Conventional FLDA applies PCA first for dimensional reduction (DR) and then LDA is used for discriminant analysis [12].Several questions arise pertinent to the amount of basis vectors or principal components to be retained in this DR and how they affect the performance from discrimination point of view.Detail discussions and experimental results can be found in [21].To facilitate analysis one has to consider three important factors: limited number of training samples, dimensionality of the data and the presence of noise in the data.For image representation PCA is optimal and gives the most compact representation.Its reconstruction performance improves when more number of principal components are used.However, it is well known that keeping more components may lead to decreased classification performance in FLDA [11] and also for BML [22].In the PCA step of FLDA, if more components are kept corresponding to the small eigenvalues (which encode noises), LDA process has to cater for noises as well, consequently over-fitting problem occurs and classification performance decreases [23].For face verification applications, this problem can be more severe as compared to face identification because in the former, the probe imposter subjects are distinct and unknown during the training session.Therefore, the algorithm used for verification purpose should be designed to have more generalization capability (or less over-fitting problem) as compared to face identification algorithms.
Two major problems arise, which are visible in the graph (Fig. 1). Firstly, although the eigenvalues in the region {k | m_1 < k ≤ r_w}, where m_1 is a control point, are within the range space (rank) of the S_w matrix, they are very small and the noise component may dominate the eigenvalues. Furthermore, the finite number of training samples results in faster decay of the eigenvalues. When their inverses are used for scaling (5), the corresponding eigenvectors get undue heavy or semi-infinite weightage in this range, as shown in Fig. (1). These small eigenvalues cause misleading scaling in the whitening step and thus generalize poorly when exposed to entirely new (subject) data - a scenario which is commonly encountered in face verification and other pattern recognition applications. Secondly, the eigenvalues in the region m_2 < k ≤ r_w, where m_2 is a control point, are very small or close to zero. The small and zero eigenvalues are training-set-specific and very sensitive to different training sets [24]. Adding new samples to the training set or using a different training set may easily change some zero eigenvalues to nonzero and make some very small eigenvalues several times larger. Therefore, these eigenvalues of the within-class scatter matrix are unreliable and need to be replaced by suitably modeled eigenvalues. Before proposing the eigenspectrum model, we first need to decompose the space into reliable, unstable noise-variation-dominating, and null subspaces.
SUBSPACE DECOMPOSITION
We propose a methodology to estimate two control points, m_1 and m_2, which segregate the reliable eigenvalues from the unreliable and near-zero ones. As the inverses of the eigenvalues are used in the feature scaling (5), the noise disturbances and the limited training samples have little effect on the initial portion of the eigenspectrum (Fig. 1) but may substantially affect the feature stability in the latter portion of the range space where the eigenvalues are small or close to zero. Hence, the whole eigenspace R^n spanned by the eigenvectors is decomposed into three subspaces: a reliable face variation dominating subspace (or simply face space) F = {w_k}_{k=1}^{m_1}, an unreliable noise variation dominating subspace (or simply noise space) N, and a null space, as illustrated in Fig. (1).
Estimation of m 1
The rank of S_w is r_w ≤ min(n, l − p). In practice, the rank of a scatter matrix usually reaches this maximum value unless some training images are linearly dependent. Even in this rare case, the rank r_w can be easily determined by finding the maximal value of k that satisfies λ_k^w > ε, where ε is a very small positive value compared to λ_1^w.
where μ is a constant.The optimal value of μ may be slightly larger or smaller than 1 for different applications.To avoid exhaustive search for the best parameter value, μ is fixed to be 1 in all experiments of this paper for fair comparisons with other approaches.
Estimation of m 2
To differentiate the unreliable eigenvalues from the larger ones, we employ the ratios of successive eigenvalues of the eigenspectrum to decompose the whole eigenspace. The phenomenon that the eigenspectrum accelerates its decrease is caused by the limited number of training samples and the noise present in them. To study this, we define the eigenratios as r_k^w = λ_k^w / λ_{k+1}^w, 1 ≤ k < r_w. The plot of the eigenratios r_k^w against the index k is called the eigenratio-spectrum. Fig. (2) shows a typical eigenratio-spectrum of a real face training database. The limited number of training samples causes the increase of the eigenratios. The corresponding eigenvalues are thus unreliable.
We examined several different face databases; the eigenratio plot shown in Fig. (2) is a general behavioral pattern that the eigenratios of all the different databases portray. It is apparent from the graph that the eigenratios first decrease very rapidly, then stabilize and finally increase. The increase of the eigenratios should not be the behavior of the true variances. Therefore, the start point of the unreliable region, m_2 + 1, is estimated as the index at which the eigenratio-spectrum starts to increase.
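Since the exact estimator is not reproduced in this excerpt, the following sketch only illustrates one way to locate the point where the eigenratio-spectrum starts to increase; the smoothing and the detection rule (the minimum of the smoothed ratio curve) are assumptions, and the helper name is hypothetical.

```python
import numpy as np

def estimate_m2(eigvals, r_w, smooth=5):
    """Locate the start of the unreliable region from the eigenratio-spectrum.
    eigvals: within-class eigenvalues sorted in descending order.
    The rule used here (first sustained rise of the smoothed ratios, found as
    the minimum of the smoothed curve) is an assumption of this sketch."""
    lam = np.asarray(eigvals[:r_w], dtype=float)
    ratios = lam[:-1] / lam[1:]                       # r_k = lambda_k / lambda_{k+1}
    kernel = np.ones(smooth) / smooth                 # light moving-average smoothing
    sm = np.convolve(ratios, kernel, mode="valid")
    k_min = int(np.argmin(sm))                        # ratios rise again beyond this index
    return k_min + 1                                  # eigenvalues beyond m_2 are deemed unreliable

# synthetic spectrum: slow power-law decay followed by an artificially fast tail
lam = np.r_[100.0 / (np.arange(1, 151) + 5), 1e-3 * 0.5 ** np.arange(1, 31)]
print(estimate_m2(lam, r_w=len(lam)))
```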
EIGENSPECTRUM MODELING
If we regard X_ij as samples of a random variable vector X, the eigenvalue λ_k^w is an estimate, obtained on the training samples, of the variance of X projected on the eigenvector w_k. It usually deviates from the true variance of the projected random vector X due to noise and the finite number of training samples. Thus, unreliable eigenvalues need to be replaced by some model value to alleviate the over-fitting problem. As the eigenspectrum typically decays rapidly and then stabilizes, we can model it by a function of the 1/f form that fits well the decaying nature of the eigenspectrum. The 1/f function form was used in the BML approach [7] to fit the eigenspectrum in the whole range subspace.
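The sketch below assumes the common α/(k + β) member of the 1/f family, fitted through two points of the reliable portion of the spectrum and then used to replace the unreliable and zero eigenvalues; the actual model of Equation (10) is not reproduced in this excerpt, so this form and the two-point fit are illustrative assumptions.

```python
import numpy as np

def modeled_eigenspectrum(eigvals, m1, n):
    """Replace unreliable and zero eigenvalues by a 1/f-type model.
    The alpha/(k + beta) form and the two-point fit on the reliable part
    are assumptions of this sketch.
    eigvals: descending eigenvalues (at least m1 values); n: full dimensionality."""
    lam = np.asarray(eigvals, dtype=float)
    k = np.arange(1, n + 1)
    # fit alpha/(k + beta) through the 1st and the m1-th (reliable) eigenvalues
    lam1, lam_m1 = lam[0], lam[m1 - 1]
    beta = (m1 * lam_m1 - lam1) / (lam1 - lam_m1)
    alpha = lam1 * (1.0 + beta)
    model = alpha / (k + beta)
    out = np.empty(n)
    out[:m1] = lam[:m1]          # keep the reliable eigenvalues as estimated
    out[m1:] = model[m1:]        # replace unreliable and zero ones by the model
    return out
```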
EIGENFEATURE SCALING AND EXTRACTION
According to the LDA criterion, the optimal discriminative features should have the smallest within-class variations and the largest between-class variations. As pointed out in [25], LDA can be carried out in two operations: first an eigendecomposition of the within-class scatter matrix, followed by the eigenvector scaling or whitening, and then a second eigendecomposition of the transformed between-class variation matrix. The biggest hurdle comes in the whitening process, where the inverses of the eigenvalues of the within-class scatter matrix are used to scale the eigenvectors or features. Over-fitting problems occur here due to the high dimensionality of the face image and the limited number of training samples. As elaborated before in the problems of feature scaling and extraction, the existing scaling function (5) gives undue heavy weightage to the corresponding eigenvectors in the range {k | m_1 < k ≤ r_w}, resulting in poor generalization to new data. Moreover, eigenvectors corresponding to the zero eigenvalues are lost and they fail to contribute to the discriminant evaluation. Hence, the small and zero eigenvalues need to be replaced by the modeled eigenvalues. This replacement of eigenvalues alleviates over-fitting problems and also enables us to perform discriminant evaluations in the whole space.
Eigenfeature Scaling
Although there is always a noise component in F, as noise affects every element of the image vector, its variance is very small compared to the large variance of the face structural component in F. In N, however, the noise component may dominate the variance changes, and the finite number of training samples results in faster decay of the variances. Therefore, the decay of the eigenvalues should be slowed down to compensate the effect of noise and the finite number of training samples. This can be done by replacing the eigenspectrum with the proposed model (10). In the null space, we have no information about the variation of the eigenvalues, and hence all features are treated in the same way. The zero variance in the null space is only an estimate on one set of the training data. Another set of training data may easily make them nonzero, especially when a larger number of training samples is used. Therefore, we should not trust the zero variance and derive an infinite or very large feature weight in this space. However, based on the available training data that result in zero variances in the null space, the feature weights in the null space should not be smaller than those in the other subspaces. The scaled eigenvectors form a full-rank matrix that transforms an image vector X_ij to an intermediate feature vector Ỹ_ij. There is no dimension reduction in this transformation, as Ỹ_ij and X_ij have the same dimensionality n.
After the above feature transformation and scaling, a new between-class scatter matrix S̃_b is formed from the transformed vectors Ỹ_ij. The transformed features Ỹ_ij are then de-correlated with respect to S̃_b by solving the eigenvalue problem as in (4). Suppose that the eigenvectors in the eigenvector matrix V_b = [v_1^b, ..., v_n^b] are sorted in descending order of the corresponding eigenvalues.
Dimensionality reduction, or feature extraction, is performed here by keeping the eigenvectors with the d largest eigenvalues, where d is the number of features, usually selected by the specific application. The proposed feature scaling and extraction matrix U is thus formed from the scaling matrix and the d leading eigenvectors of S̃_b; it transforms a face image vector X ∈ R^n into a d-dimensional feature vector F by (20). We note that the three-subspace eigenspectrum decomposition proposed in this work is only used to replace the unreliable small and zero eigenvalues by the model values. The discriminant evaluation (here, the evaluation of the eigenvalues of S̃_b) is performed in the full space R^n.
This approach extracts features by searching for the most discriminative directions in the full space. The proposed method is therefore based on a global optimization, different from the local optimization in a single subspace of the FLDA [12], DLDA [16,17], NDA [15,18] and UFS [20] approaches, and different from the two separate local optimizations in two subspaces of the dual-space approaches [14,19].
THE PROPOSED ALGORITHM
The proposed face verification using modeled eigenspectrum (MES) approach is summarized below.
At the training stage:
1. Given a training set of normalized face image vectors {X_ij}, compute S_w by (1) and solve the eigenvalue problem as in (4).
…
6. Compute S̃_b by (17) with Ỹ_ij and solve the eigenvalue problem as in (4).
7. Obtain the final feature scaling and extraction matrix by (16), (18) and (19) with a predefined number of features d.
At the enrollment or registration stage:
1. Extract the d-dimensional feature vector F from the enrolled n-dimensional normalized face image vector X by (20), using the feature scaling and extraction matrix U obtained in the training stage (19).
2. Store the extracted feature vector and the registration ID in the gallery feature vector set.
At the verification stage:
1. Extract the d-dimensional feature vector F from the n-dimensional normalized probe face image vector X by (20), using the feature scaling and extraction matrix U obtained in the training stage (19).
2. Compare or match the probe feature vector with the gallery feature vector corresponding to the claimed ID.
In the experiments of this work, the cosine distance measure between a probe feature vector F_P and a gallery feature vector F_G, defined as F_P^T F_G / (‖F_P‖_2 ‖F_G‖_2), where ‖·‖_2 is the 2-norm operator, is applied to match two feature vectors.
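A small sketch of this matching step (hypothetical helper names; acceptance when the cosine score reaches the threshold, as described below in the experiments):

```python
import numpy as np

def cosine_score(f_probe, f_gallery):
    """Cosine matching score between probe and gallery feature vectors."""
    return float(f_probe @ f_gallery /
                 (np.linalg.norm(f_probe) * np.linalg.norm(f_gallery)))

def verify(f_probe, f_gallery, threshold):
    """Accept the claimed identity if the matching score reaches the threshold."""
    return cosine_score(f_probe, f_gallery) >= threshold
```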
ANALYSIS AND COMPARISONS
Subspace methods such as the FLDA, DLDA, UFS and NDA approaches discard a subspace before the discriminant evaluation. Therefore, they perform a local optimization and their extracted features are suboptimal, being the most discriminative only within a subspace. Although BML works in the whole space, it does not evaluate the discriminant value and hence the whole face image must be used as features in the verification process. It performs well for the identification problem but, as we will see in the experiments, it suffers from a severe over-fitting problem for the verification task. Although the dual-space based approaches do not discard any subspace before the discriminant evaluation, it is inefficient, or only suboptimal, to evaluate the discriminant value and extract features separately in two subspaces and then combine them. Other open questions of the dual-space based approaches include how to divide the space into the two subspaces and how to apportion the given number of features between the two subspaces.
The proposed algorithm (MES) has two important and novel ingredients. First, the three-subspace eigenspectrum decomposition differentiates the reliable, unreliable and zero eigenvalues. Two reasonable decomposition points are determined by the algorithm. This eigenspace decomposition is not used to limit the discriminant evaluation to one subspace but to enable the evaluation in the whole space. Thus, the proposed method is based on a global optimization that extracts features by searching for the most discriminative ones in the whole space. Second, the parameters of the eigenspectrum model are determined by the reliable portion of the real eigenspectrum, and the modeled eigenvalues are then used to replace the unreliable small and zero real eigenvalues. This reduces the sensitivity of the extracted features to the dimensionality of the face image, the number of training samples and noise disturbance. Consequently, the proposed approach alleviates the over-fitting problem that often occurs when training with a limited number of high-dimensional samples. As discussed before, the face verification task may suffer from a more severe over-fitting problem (or poorer generalization) than the face identification task. The proposed MES approach facilitates a discriminative and stable low-dimensional feature representation of the face image, which is verified in the following experiments on the face verification task.
EXPERIMENTS
Three popular face databases, the AR database, FERET database 1 and FERET database 2, are used in the experiments. Each database is partitioned into training, gallery and probe sets. In all experiments reported in this work, images are preprocessed, aligned and normalized following the CSU Face Identification Evaluation System [26]. Face verification is performed by accepting a claimant if the subject's matching score is greater than or equal to a threshold and rejecting the claimant if the matching score is lower than the threshold. Verification performance is evaluated using two measures: the false acceptance rate (FAR) and the false rejection rate (FRR). FAR is the ratio of the number of accepted imposter matchings to the total number of imposter matchings. FRR is the ratio of the number of rejected genuine matchings to the total number of genuine matchings. The plot of FRR against FAR is called the receiver operating characteristic (ROC) curve. The system performance at various operating points (thresholds) is characterized by the ROC curve. The equal error rate (EER), defined by EER = FAR = FRR at a specific threshold, serves as a single-number indicator of a verification system's performance.
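These quantities are straightforward to compute from the two sets of matching scores. The sketch below (hypothetical names; higher score means a better match) sweeps the threshold over all observed scores and reports the FAR/FRR pair closest to equality as an approximate EER.

```python
import numpy as np

def far_frr_eer(genuine_scores, imposter_scores):
    """FAR and FRR over a sweep of thresholds, plus an approximate EER.
    A claimant is accepted when score >= threshold."""
    genuine = np.asarray(genuine_scores, dtype=float)
    imposter = np.asarray(imposter_scores, dtype=float)
    thresholds = np.unique(np.concatenate([genuine, imposter]))
    far = np.array([(imposter >= t).mean() for t in thresholds])  # accepted imposters
    frr = np.array([(genuine < t).mean() for t in thresholds])    # rejected genuines
    i = int(np.argmin(np.abs(far - frr)))                         # point where FAR is closest to FRR
    eer = (far[i] + frr[i]) / 2.0
    return far, frr, eer
```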
The proposed MES method is tested and compared with 8 other popular subspace based approaches: PCA [6] with Euclidean distance (PCAE), PCA with Mahalanobis distance (PCAM), FLDA [12], DLDA [16], BML [8], NDA [18], UFS [20] and DSL [14]. The proposed MES approach has only one free parameter, μ, in (7). To avoid an exhaustive search for the best parameter value, μ is fixed to 1 in all experiments of this paper for fair comparison with the other approaches. For UFS, the parameter values that gave the best performance through an exhaustive search in the experiments of [20] are applied. We conduct the experiments starting with the number of features d = 10, incremented by 2 each time up to p − 1, where p is the number of training subjects. Experimental results are presented for each approach at the point where its minimum EER is obtained.
Results on AR Database
The color images in the AR database [27] are converted to gray-scale and cropped to the size of 120 × 170, the same image size as used in [27,28]. The pictures of most subjects were taken in two sessions (separated by two weeks). In our experiment, 75 subjects with 14 non-occluded images per subject were selected from the AR database. The first 7 images (numbers 1-7, first session [27]) of 60 subjects are used in the training and also serve as gallery images. The second 7 images (numbers 14-20, second session [27]) of the same 60 subjects serve as probe genuine images. The remaining 15 subjects with 14 images per subject are used as probe imposters. The total number of genuine matches is 60 × 7 × 7 = 2,940 and the total number of imposter matches is 14 × 7 × 15 × 60 = 88,200. For this large image size, we first apply PCA to remove the null space of S_t and then apply the MES approach on the 419-dimensional feature vectors. Fig. (3) shows the ROC curve that plots the false rejection rate (%) against the false acceptance rate (%). The ROC curves of PCAE and PCAM do not appear in Fig. (3) because their FRRs and FARs are so high that their values fall outside the range of Fig. (3). Their EERs are numerically recorded in Table 1. We see that the BML approach does not perform well for the face verification task although it is one of the best approaches for the face identification task. The problem of BML for face verification was also addressed in [29]. Fig. (3) shows that the proposed MES method consistently outperforms all other 8 approaches at all operating points (thresholds).
Results on FERET Database 1
In the FERET database, the face image variations include facial expression and other details (such as glasses or no glasses), illumination, pose, and aging [30]. 2388 images comprising 1194 subjects (two images, FA/FB, per subject) are selected from the FERET database. Images are cropped to the size of 33 × 38. Images of 497 subjects are randomly selected for training and the remaining images of 697 subjects are used for testing. For this database, the subjects used for training are different from those used for testing; there is no overlap in subjects between the training and testing data sets. The gallery set contains 697 subjects with 1 image per subject. The remaining 697 images of the same subjects as in the gallery serve both as the probe genuine images (when matched with gallery images of the same subjects) and as probe imposters (when matched with gallery images of different subjects). The total number of genuine matches is 697 × 1 = 697 and the total number of imposter matches is 697 × 696 = 485,112.
Fig. (4). False rejection rate (%) against false acceptance rate (%) on FERET database 1: 994 training images (497 subjects), 697 gallery images (697 subjects) and 697 probe images (697 subjects). The total number of genuine matches is 697 × 1 = 697 and the total number of imposter matches is 697 × 696 = 485,112.
The ROC curves of FLDA and DLDA do not appear in Fig. (4) because their FRRs and FARs are too high to be included in Fig. (4). Their EERs are numerically recorded in Table 1. This experiment shows that FLDA and DLDA suffer from a severe over-fitting problem. Although BML performs better than in the first experiment, it still underperforms the traditional PCAE at some operating points. Fig. (4) shows again that the proposed MES method consistently achieves the lowest FAR and FRR among the 9 tested approaches at all operating points.
Results on FERET Database 2
This database is constructed, similarly to a data set used in [31], by randomly choosing 256 subjects with at least four images per subject from the FERET database. However, we use the same number of images (four) per subject for all subjects. Three images per subject of the first 200 subjects are used for training and also serve as gallery images. The remaining 200 images of these 200 subjects are used as probe genuine images. All 4 images of the remaining 56 subjects serve as probe imposter images. The size of the normalized image is 130 × 150, the same as that in [31]. For such a large image size, we first apply PCA to remove the null space of S_t and then apply the proposed MES approach on the 599-dimensional feature vectors. The ROC curves of FLDA and BML do not appear in Fig. (5) because their FRRs and FARs are so high that their values fall outside the range of Fig. (5). Their EERs are numerically recorded in Table 1. Although the second-lowest ROC curve differs from that in the previous two experiments, Fig. (5) shows once more that the proposed MES method consistently delivers the most accurate face verification at all operating points.
For an accurate record, the various EERs (in %) obtained from the above three experiments are numerically recorded in Table 1. It clearly demonstrates the superior performance of the proposed MES approach over all other approaches tested in the experiments on the three different face databases.
Summary of the Experimental Results
We have conducted 3 sets of experiments on 3 different face databases, evaluating 9 subspace based approaches for the face verification task. Unlike face identification experiments, where some sample images of all probe subjects can be included in the training, in all verification experiments of this work the subjects of the probe imposters are excluded from the training. Moreover, in FERET database 1, the training subjects are different from those in the gallery and probe sets. The experimental results verify the difference in accuracy between face verification and face identification. Methods that work well for face identification do not necessarily do the same for the face verification task; BML is a good example of this. It is thus useful to test the verification performance of various approaches that were developed and tested for the identification task.
The experiments show that the UFS, NDA and DSL approaches perform better than the PCAE, PCAM, FLDA, DLDA and BML approaches. UFS keeps only a small principal subspace with the largest eigenvalues for the discriminant evaluation. It suppresses more noise and thus suffers less from over-fitting compared with FLDA and DLDA, which perform the discriminant evaluation in the whole range space. The good performance of NDA verifies that the null space contains important discriminative information and should not simply be discarded in the feature extraction. Another property of NDA is that it does not scale the features by the eigenvalues; this is one possible reason why NDA generalizes better than FLDA, DLDA and BML. DSL extracts two sets of features, one from a principal subspace and the other from its complementary subspace including the null space. Its relatively good performance shows that discriminative information resides in both subspaces.
However, none of the three better approaches, UFS, NDA and DSL, consistently achieves the second-best performance in the three experiments. One reason could be that all of them are suboptimal, extracting features by discriminant evaluation in a single subspace or separately in two subspaces. The proposed MES method shows superior verification performance to all the other 8 subspace based approaches. In all three experiments on the different face databases, the proposed MES method consistently achieves the lowest error rates at all operating points. It is important to test a verification system at different operating points because there is no single optimal threshold for a verification system and different practical applications have different requirements on FAR and FRR. The superior verification performance of the proposed method is attributed to the eigenspectrum modeling, which enables a global optimization by performing the discriminant evaluation in the whole space and alleviates the over-fitting problem by replacing the unreliable, noise-sensitive small and zero eigenvalues with the modeled values.
CONCLUSIONS
There are intrinsic differences between face verification and face identification. A method that performs well for the identification task may not necessarily achieve high accuracy for the verification task. This paper addresses the face verification problem and explores several popular subspace based approaches for face verification. Experiments on three face databases compare the verification performances of eight well-known approaches: PCAE, PCAM, FLDA, NDA, DLDA, BML, UFS and DSL. The verification performances of some of these approaches are indeed quite different from their identification performances reported in the literature.
This work highlights the problems of feature scaling and extraction from high-dimensional data such as face images, often encountered in computer vision and pattern recognition, where the within-class scatter matrix degenerates due to very small and zero eigenvalues. To alleviate these problems, we decompose the eigenspace into three subspaces and generate an eigenspectrum model for face data. The proposed eigenspectrum model alleviates the over-fitting problem by replacing the small and zero eigenvalues in the noise-dominated and null spaces with the model values. It also enables a global optimization in the feature extraction by performing the discriminant evaluation in the whole space. Therefore, the extracted features are the most discriminative in the whole space and are stable, or less sensitive, to noise disturbance, the data dimensionality and the number of training samples. Extensive experiments on three face databases demonstrate that the proposed approach consistently outperforms the other 8 popular subspace based approaches tested in this work.
Fig. (1). Weighting functions of (5) and (14) in the face-, noise- and null-subspaces, based on a typical real eigenspectrum.
Secondly, it is apparent from Fig. (1) that the eigenvectors {v_k^w}, r_w < k ≤ n, in the null space of S_w are weighted by zero and thus this subspace is excluded from the discriminant evaluation. This is unreasonable because features in the null space have zero within-class variances based on the training data and hence should be weighted more heavily. It seems anomalous that the weighting function assigns zero weight to these directions.
The BML approach [7] handles the remaining dimensions by computing an average eigenvalue. Different from BML, this work uses the 1/f function form to fit only the reliable part of the eigenspectrum and then extrapolates the eigenvalues in the noise subspace {λ_k^w | m_1 < k ≤ m_2}. As the eigenspectrum in the face space is dominated by the face structural component, the model parameters are determined by fitting the model to the real eigenspectrum in the reliable face space F (while not excluding other possible fitting methods; for simplicity we still call the modeled values an eigenspectrum). A typical real eigenspectrum λ_k^w and its model λ̂_k^w are shown in Fig. (1). We see that the model λ̂_k^w fits the real λ_k^w closely in the face space F but decays more slowly in the noise space N. The faster decay of the real eigenspectrum λ_k^w in N, due to noise and the limited number of training samples, is what we want to slow down (Fig. 1).
Fig. (5). False rejection rate against false acceptance rate on FERET database 2: 600 training and gallery images (200 subjects), 200 probe genuine images (200 subjects) and 224 probe imposter images (56 subjects). For this database we conducted 4 runs of training and testing with a distinct probe genuine image set in each run. More specifically, the i-th images (i = 1, 2, 3, 4) of all training subjects are chosen to form the probe genuine set and the remaining 3 images per subject serve as the training and gallery images. The total number of genuine matches is 200 × 3 = 600 and the total number of imposter matches is 4 × 3 × 56 × 200 = 134,400. Fig. (5) shows the ROC curves of the 4 runs of training and testing.
To determine the start point of the noise dominant region, m_1 + 1, we first find a point near the center of the noise region. As the face component typically decays rapidly and stabilizes, the eigenvalues in the face dominant subspace, which constitute the initial portion of the eigenspectrum, are the outliers of the whole spectrum. It is well known that the median operation works well in separating outliers from a data set. Note that in many practical face verification training tasks we usually have m_2 = r_w < n.
Commentary: Anthropometric Indicators as a Tool for Diagnosis of Obesity and Other Health Risk Factors: A Literature Review
INTRODUCTION
We read with great interest the recently published article ≪ Anthropometric Indicators as a Tool for Diagnosis of Obesity and Other Health Risk Factors: A Literature Review ≫ (Piqueras et al., 2021). We felicitate the Authors, as this is a comprehensive literature review of the tools currently available to assess or diagnose obesity and potential related health issues.
Obesity is a chronic multifactorial disease and a global public health challenge, due to its high prevalence worldwide (World Health Organization, 2020) and the potential associated comorbidities, in particular in the presence of metabolically unhealthy obesity (MUO) (Samouda et al., 2019).
Body mass index (BMI) has widely been used to assess the body fat accumulation characterizing obesity (WHO, 2000). However, BMI does not distinguish between fat mass and fat-free mass, and therefore constitutes a poor diagnostic tool for obesity (Garn et al., 1986; Frankenfield et al., 2001; Adab et al., 2018). Yet, in order to manage a disease, we first need to diagnose it. Obesity diagnosis requires easy-to-use and accurate tools to measure total and regional body fat storage, in particular visceral adiposity, a major risk factor for developing obesity-related comorbidities (Nicklas et al., 2006; Van Gaal et al., 2006; Cereda et al., 2007; Anan et al., 2010; Fontes-Carvalho et al., 2014; Yu et al., 2015; Brown et al., 2017, 2018; Han et al., 2017; Dan Lantsman et al., 2018; Kuritzkes et al., 2018; Magro et al., 2018; Tian et al., 2018).
ADDITIONAL ASSESSMENT METHODS OF OBESITY
Dual-energy X-ray absorptiometry (DEXA) is the reference method for body composition assessment, providing a gold-standard measurement of fat mass at both total and regional levels. Computed tomography (CT-scan) provides a reproducible and accurate measurement of visceral adipose tissue (VAT). Nevertheless, DEXA and CT-scan are considered prohibitive techniques owing to the limited accessibility of the machines, the high cost of the imaging examination and the radiation exposure delivered by multi-slice CT-scan protocols (Kvist et al., 1988; Jensen et al., 1993; Pritchard et al., 1993; Heymsfield et al., 1995). Piqueras et al. (2021) reported 17 indices or health indicators to assess obesity using non-invasive and low-cost anthropometric measurements, which constitute a good alternative to expensive and less accessible biomedical imaging.
Regarding the anthropometric assessment of visceral adipose tissue, the authors stated that ≪ The model proposed by Samouda et al. (2013) correlates Visceral Adipose Tissue Area with the waist circumference and proximal thigh circumferences, BMI, and age, for adult men and women ≫ (Piqueras et al., 2021).
HEALTH ISSUES ASSOCIATED WITH OBESITY. RELATIONSHIP TO THE ANTHROPOMETRIC ASSESSMENT TOOLS
Independently of general fat mass accumulation, the VAT depot has been highlighted as a major risk factor for developing several metabolic, cardiovascular, autoimmune, neurodegenerative, and oncological diseases, as well as an increased risk of early mortality (Van Gaal et al., 2006; Cereda et al., 2007; Fontes-Carvalho et al., 2014; Yu et al., 2015; Brown et al., 2017, 2018; Dan Lantsman et al., 2018; Kuritzkes et al., 2018; Magro et al., 2018). The Authors reported a significant relationship between the identified anthropometric tools assessing total and regional adiposity and several health issues, including glucose dysregulation, insulin resistance, metabolic syndrome, cardiometabolic dysregulation, type 2 diabetes, hypertension, cardiovascular diseases, cancer, as well as a higher risk of early cardiovascular and all-cause mortality (Piqueras et al., 2021).
DISCUSSION
Following these statements, we would like to clarify that our previous work related to the ≪ Innovative anthropometric model to predict visceral adipose tissue without resort to CT-Scan or DXA ≫ (Samouda et al., 2013) did not aim to simply correlate the visceral adipose tissue with the waist circumference (Waist C), proximal thigh circumference (Proximal Thigh C), body mass index (BMI) and age.
In order to develop our tools, multiple linear regressions with an empirical selection of the variables were developed, with no controlled stepwise regressions (Samouda et al., 2013). The anthropometric VAT model we developed assumed that if we subtract the anthropometric measurement most correlated with subcutaneous abdominal adipose tissue (SAAT) from the anthropometric measurement most correlated with total abdominal adipose tissue (TAAT) and VAT as assessed by CT-scan, we obtain the most accurate anthropometric prediction of VAT. In our tools, Proximal Thigh C was the anthropometric measurement most correlated (Pearson's R) with SAAT as assessed by CT-scan, while Waist C was the measurement most correlated with TAAT and VAT (Samouda et al., 2013).
The Authors also stated that there are no specific cut-offs for VAT accumulation (Piqueras et al., 2021). Actually, a cut-off of 130 cm² of VAT was previously defined by Hunter et al. (1994) as associated with an increased risk of hypertension and dyslipidemia. The ability of our selected anthropometric tools to diagnose a VAT excess ≥ 130 cm² was considerably high: sensitivity (97.7% in women, 100% in men), specificity (85.7% in women, 75% in men), positive predictive values (91.3% in women, 90.9% in men), and negative predictive values (96% in women, 100% in men) (Samouda et al., 2013). In addition, the VAT anthropometric tools we developed have been validated as the most accurate predictors of cardiometabolic abnormalities, cancer and early mortality (cardiovascular, cancer, and all causes), compared with BMI and Waist C, when biomedical imaging is not available. This validation was conducted in 10,624 participants of European descent in the National Health and Nutrition Examination Survey followed for 20 years (Brown et al., 2017, 2018). Similar findings were observed in a population-based study in Northern Europe (Ruiz-Castell et al., 2021).
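As an illustration of how such diagnostic performance figures are obtained (this is not the Authors' code, and the variable names are hypothetical), the sensitivity, specificity and predictive values for detecting a VAT excess ≥ 130 cm² can be computed from paired predicted and reference measurements as follows.

```python
import numpy as np

def vat_diagnostic_metrics(vat_predicted, vat_reference, cutoff=130.0):
    """Sensitivity, specificity, PPV and NPV for detecting VAT >= cutoff (cm^2),
    comparing an anthropometric prediction against a reference (e.g. CT) value."""
    pred_pos = np.asarray(vat_predicted) >= cutoff
    true_pos = np.asarray(vat_reference) >= cutoff
    tp = np.sum(pred_pos & true_pos)
    tn = np.sum(~pred_pos & ~true_pos)
    fp = np.sum(pred_pos & ~true_pos)
    fn = np.sum(~pred_pos & true_pos)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```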
Finally, we emphasize the potential offered by our innovative anthropometric method, which provides accurate and easy-to-use tools to predict VAT without resorting to biomedical imaging.
AUTHOR CONTRIBUTIONS
HS drafted the Commentary.
Enhanced magnetism of Cu$_n$ clusters capped with N and endohedrally doped with Cr
The focus of our work is on the production of highly magnetic materials out of Cu clusters. We have studied the relative effects of N-capping as well as N mono-doping on the structural stability and electronic properties of the small Cu clusters using first principles density functional theory based electronic structure calculations. We find that the N-capped clusters are more promising in producing giant magnetic moments, such as 14 $\mu_B$ for the Cu$_6$N$_6$ cluster and 29 $\mu_B$ for the icosahedral Cu$_{13}$N$_{12}$ clusters. This is accompanied by a substantial enhancement in their stability. We suggest that these giant magnetic moments of the capped Cu$_n$ clusters have relevance to the observed room temperature ferromagnetism of Cu doped GaN. For cage-like hollow Cu-clusters, an endohedral Cr-doping together with the N-capping appears as the most promising means to produce stable giant magnetic moments in the copper clusters.
I. INTRODUCTION
The low-coordinated surface atoms of transition metal nano-clusters play a significant role in deciding their magnetic properties. Surface adsorption of gaseous molecules directly affects the electronic properties of the surface atoms in clusters and influences their overall magnetic properties. Chemisorption of gaseous molecules on the surface of transition metal nano-clusters and its effects, particularly on magnetic properties, has attracted attention in recent experimental as well as theoretical works. Several studies have revealed spectacular variations of cluster magnetism depending on the type of chemical species and the nature of the interactions between the surface atoms and the adsorbed species. For example, the molecular beam deflection experiment by Knickelbein's group revealed enhanced magnetic moments for hydrogenated Fe_n clusters as compared to pristine ones, for sizes ranging over n = 10−25 [1]. Interestingly, this result is in contrast to those for larger iron nano-particles, as well as for thin films, in which quenching of the magnetic moment has been found upon hydrogen absorption. A similar trend of quenched magnetic moments upon chemisorption has also been found in the case of benzene-capped and CO-coated Co clusters [2], whereas mere chemisorption of oxygen does not affect the magnetization of Co clusters at all [3]. The adsorption-induced magnetism of Ni clusters exhibits rather interesting variations. For example, full coverage of Ni_n clusters by N_2 molecules leads to quenching of the magnetic moment, but it survives at partial coverage [4]. Similarly, quenching of the magnetic moment results from partial adsorption of NH_3 molecules, and complete quenching occurs when the number of adsorbed NH_3 molecules equals the number of surface Ni atoms. Surprisingly, further adsorption leads to a reappearance of the magnetic moment as a consequence of the variation of the Ni−N distance with coverage of the adsorbent [5]. Again, while the adsorption of CO and H_2 on Ni clusters decreases their magnetic moments [6,7], the adsorption of atomic O on small Ni_n clusters induces either enhanced or reduced magnetic moments depending on cluster size, an effect attributed to adsorption-induced reconstruction of the cluster structures [8]. Besides the 3d late transition metal clusters, some limited studies of chemisorption on clusters of the 3d early transition metal elements have also been reported. For example, adsorption of a single CO molecule on Sc_n clusters results in quenching of the magnetic moment depending on the cluster size [9]. In the case of V_n clusters, adsorption of N_2, NO, O_2 and C_2H_4 leads to oscillation, quenching or enhancement of the magnetic moments [10]. The Cu atom has an electronic configuration of 3d^10 4s^1 in the 3d transition metal series. It is therefore characterized by a closed shell of d electrons and one s electron in the outermost atomic shell, similar to simple alkali metal atoms. Cu clusters are therefore expected to share similar properties with alkali metal clusters, and these properties are likely to be described by the shell model [13] in a first attempt. On the other hand, the energy level separation between the valence s and d levels in a copper atom is small, giving rise to significant s−d hybridization. In contrast to alkali metal clusters, this s−d hybridization also plays an important role in deciding many properties of Cu. Moreover, its role as a magnetic impurity in a semiconductor host has attracted special attention for designing diluted magnetic semiconductors and spintronic systems because of its filled 3d shell and therefore intrinsic nonmagnetic character [11,12]. Recently, the possibility of ferromagnetism of Cu-doped GaN with a Curie temperature above room temperature has been reported [31]. However, the origin of this ferromagnetism is still being debated.
In this work, we have studied the effects of N-capping as well as N mono-doping on the structural and magnetic properties of small Cu_n clusters using first-principles density functional theory (DFT) based electronic structure calculations. We find that the effects of N-capping are remarkable with respect to magnetic applications of the capped systems, as the N-capping induces giant magnetic moments in the capped Cu_n clusters. Our study reveals that this enhanced magnetic moment results from the fact that the N-capping first induces spin polarization on the Cu atoms through hybridization of the s−d hybridized orbitals of Cu with the p orbitals of the capping N atoms, and then couples all the atom-centered magnetic moments ferromagnetically. We also note that such a mechanism of induced ferromagnetism of the capped Cu clusters plays a major role in the observed ferromagnetism of Cu-doped GaN semiconductors.
In addition, we propose an endohedral doping mechanism to enhance the ferromagnetism of the N-capped systems and thereby use them as building blocks for constructing highly magnetic materials. A study of the effect of capping on the structural rearrangement of the parent cluster is an important issue if we wish to examine the origin of the modified properties of the capped system with respect to its parent cluster [15]. Surprisingly, capping may sometimes lead to the formation of cage-like hollow structures [16,17]. For such cage structures, an endohedral doping process, i.e., introducing dopants within the hollow cage, is considered another promising means of tailoring the intrinsic magnetic character. This can be seen in various cluster-fullerene systems, as well as for doped icosahedral-like structures of many metal clusters [18,19]. It actually plays a synergistic role in most cases, helping to further stabilize the cage-like structure along with engineering its magnetic properties. For example, while there is no appreciable spin density on an empty C_80 fullerene cage, the cluster-fullerene Gd_3N@C_80 forms a stable magnetic unit with a high magnetic moment that is largely contributed by the localized f electrons of the Gd atoms [18]. Enhanced magnetic moments of cage-like icosahedral M_13 clusters (M = metal atom) upon appropriate substitution at the central site have been examined extensively [19]. Since the N-capped Cu_n clusters are interesting for magnetic applications, it is essential to explore the possibilities of engineering their magnetic properties further. We have considered the case of a cage-like icosahedral nitrogenated Cu_13 cluster and scrutinized the possibility of using endohedral doping by a Cr atom for constructing materials with even higher magnetic moments.
The interesting point of this study is that, while the bare Cu_12Cr cluster maintains an icosahedral structure and has zero net magnetic moment overall, both capped systems, Cu_13N_12 and Cu_12CrN_12, possess giant magnetic moments, with the maximum value obtained for the Cu_12CrN_12 cluster. In the following, we first give a brief outline of the applied methodology in Section II, then a detailed analysis of the results on stability and magnetism in terms of bonding characteristics and electronic properties in Section III. The paper ends with a conclusion in Section IV.
II. COMPUTATIONAL DETAILS
The calculations reported in this study were based on DFT within the framework of the pseudo-potential plane wave method, as implemented in the Vienna ab initio Simulation Package (VASP) [20]. We used the projector augmented wave (PAW) pseudo-potentials [21,22] coupled with the generalized gradient approximation (GGA) to the exchange-correlation energy functional as formulated by Perdew, Burke and Ernzerhof (PBE) [23]. The 3d as well as 4s electrons for the Cu/Cr atoms and the 2s as well as 2p electrons for the N atoms were treated as valence electrons, and the wave functions were expanded in a plane wave basis set with a kinetic energy cut-off of 280 eV. The convergence of the energies with respect to the cut-off value was checked. Reciprocal space integrations were carried out at the Γ point. For the cluster calculations, a simple cubic super-cell was used with periodic boundary conditions, where two neighboring clusters were kept separated by around 12 Å of vacuum, which makes the interaction between cluster images essentially negligible. Symmetry-unrestricted geometry optimizations were performed using the conjugate gradient and quasi-Newton methods until all force components were less than a threshold value of 0.001 eV/Å. To determine the magnetic moment of the minimum-energy structure in spin-polarized scalar-relativistic calculations, the geometry optimization was performed for all possible spin multiplicities of each structure under the approximation of collinear atomic spin arrangements, as followed in our recent works [24]. We also considered different spin arrangements among the atoms for a particular spin multiplicity. Finally, special care was taken to suppress the N−N interactions among the capping N atoms, in a manner similar to obtaining steric conformations for surface ligands in passivated quantum dots [15,25].
III. RESULTS AND DISCUSSIONS
We first analyze the optimized structures, stability and magnetic properties of the N-capped as well as N-monodoped Cu_n clusters in the size range n = 2−6 in Section III A. We then consider the particular case of a Cu_13 cluster and examine both the effects of N-capping and of endohedral Cr-doping for further tuning its magnetic properties; the results related to the Cu_13 clusters are discussed in Section III B. Any theoretical study aiming at a precise determination of the properties of an atomic cluster first demands the determination of its minimum energy structure (MES) in a complex potential energy surface. We have, therefore, first determined the MES of the pure Cu_n clusters with n = 2−6 by relaxing the atomic positions and by minimizing both the total energy and the spin. Several of the most probable initial configurations were tried to ensure that the optimized structure does not correspond to a local minimum. Our results indicate planar structures for each of the pure clusters, in accordance with previous reports [26]. For example, the calculated MES of the Cu_3 cluster has a triangular shape, and this triangular unit constitutes the building block for the MESs of the subsequent larger clusters. Likewise, the Cu_4 cluster adopts a rectangular shape consisting of two triangles, and the Cu_5 cluster adopts a trapezoidal shape consisting of three triangles. Finally, the pure Cu_6 cluster has an overall triangular shape in which two interpenetrating triangular Cu_3 units are clubbed together, so that the composite has four triangular building blocks. Though bulk copper is nonmagnetic, small copper clusters are spin polarized and possess weak magnetic moments due to the presence of the low-coordinated surface atoms. We found that each of the even-numbered pure Cu_n clusters has zero magnetic moment as all electrons are paired. On the other hand, the odd-numbered pure Cu_n clusters each possess a total magnetic moment of 1 µ_B due to the presence of a single unpaired electron. Note that such odd-even alternations in the magnetic moments have also been reported previously for small pure Au_n and Cu_n clusters [27]. Unlike the pure clusters, the capped as well as mono-doped clusters show a structural transition from 2D planar to 3D geometries with increasing size. Fig. 1 shows the MESs of the N-capped as well as N-monodoped Cu_n clusters for n = 2−6. For the capped systems, the Cu_n core adopts a planar geometry for n = 2−3 and a 3D geometry for n = 4−6. As seen from Fig. 1,
the ground state structures of the Cu_4N_4, Cu_5N_5 and Cu_6N_6 clusters contain a tetrahedral Cu_4, a triangular bi-pyramidal Cu_5 and a tetragonal bi-pyramidal Cu_6 core, respectively. The first isomer of the Cu_4N_4 cluster contains a rectangular Cu_4 core and lies 0.2 eV higher in energy than the ground state structure. Likewise, the first isomers of the Cu_5N_5 and Cu_6N_6 clusters contain a square pyramidal Cu_5 core and a capped triangular bi-pyramidal Cu_6 core, respectively. Our calculated energy differences between the ground state and the first isomer of the Cu_5N_5 and Cu_6N_6 clusters are 0.25 eV and 0.15 eV, respectively. Interestingly, the isomers having a planar Cu_n core are 0.8 eV and 1.02 eV above the corresponding ground states for the Cu_5N_5 and Cu_6N_6 clusters, respectively. Table I provides more information about the structural parameters of the ground state and first isomer of the capped clusters. Note that the preferred spin multiplicities of the first isomers are the same as those of the ground state structures for the Cu_4N_4, Cu_5N_5 and Cu_6N_6 clusters. For the N-monodoped systems, i.e., the Cu_nN clusters, the Cu_n core adopts a planar shape up to n = 4 and 3D square pyramidal and tetragonal bi-pyramidal shapes for the Cu_5N and Cu_6N clusters, respectively. For the latter two monodoped systems, the isomers having a planar Cu_n core are 0.4 eV and 0.89 eV higher in energy than the respective ground states. A detailed analysis of the ground state structures of the Cu_nN_n clusters reveals that the average number of nearest-neighbor Cu atoms for a given Cu atom varies as 1, 2, 3, 3.6 and 4 for n = 2, 3, 4, 5 and 6, respectively. The average value of the Cu−Cu−N angles also shows interesting variations with increasing size of the capped systems, such as Cu−Cu−N = 150°, 145°, … We analyzed the energetics of the Cu_nN and Cu_nN_n clusters in terms of the total energies of the pure Cu_n and capped/doped systems corresponding to their ground state structures. For the capped systems, i.e., the Cu_nN_n clusters, we have calculated the energy gain ∆⁰_c in adding n N atoms to an existing Cu_n cluster as ∆⁰_c = −[E(Cu_nN_n) − E(Cu_n) − nE(N)]. Similarly, the energy gain ∆⁰_m for the mono-doped systems corresponds to the energy gain for a single N atom doping. First, the energetics of the pure Cu_n clusters have been revisited by analyzing their binding energies, calculated as E_b = −[E(Cu_n) − nE(Cu)]. Our calculated binding energies for the optimized pure Cu_n clusters are E_b = 2.24 eV, 3.66 eV, 6.33 eV, 8.57 eV and 11.44 eV for n = 2, 3, 4, 5 and 6, respectively. The calculated values of ∆⁰_c and ∆⁰_m for the optimal structures of the respective capped and doped systems are shown in Fig. 1. It is seen that the ∆⁰_c values increase monotonically with increasing cluster size, which demonstrates the increasing stability of the capped systems. On the other hand, the ∆⁰_m values for the N-monodoped systems show an overall decreasing trend with increasing cluster size. However, these values of ∆⁰_m are always positive for all the mono-doped systems, indicating that mono-dopings are also energetically favorable. In order to understand the enhancement in stability, especially of the capped systems, we have analyzed the nature of the bonding in the Cu_2 and CuN dimers as well as the overall variation of the average Cu−Cu and Cu−N bond lengths in the optimized structures of the pure, mono-doped and capped systems, as shown in Fig. 2.
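A trivial sketch of the bookkeeping behind E_b and ∆⁰_c (total energies are taken as inputs; the helper names are hypothetical, and the form of ∆⁰_c is the one assumed above by analogy with E_b):

```python
def binding_energy(E_cluster, E_atom, n):
    """E_b = -[E(Cu_n) - n*E(Cu)]: total binding energy of a pure Cu_n cluster;
    positive values mean the cluster is bound."""
    return -(E_cluster - n * E_atom)

def capping_energy_gain(E_capped, E_pure, E_N, n_N):
    """Delta_c = -[E(Cu_nN_n) - E(Cu_n) - n_N*E(N)]: energy gained in attaching
    n_N nitrogen atoms to an existing Cu_n cluster (assumed form, by analogy
    with the binding energy); positive values mean capping is favorable."""
    return -(E_capped - E_pure - n_N * E_N)
```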
In the atomic structure of a copper atom, the valence 3d shell is completely filled, with a single 4s electron outside it. Therefore, the two Cu atoms in a Cu_2 dimer are coupled by a weak σ bond, resulting in a rather large bond length of 2.22 Å. Note that the estimated bond length of the Cu_2 dimer is in excellent agreement with its experimental value [28]. When a nitrogen atom is attached to form a CuN dimer, the s−d hybridized electrons of the copper atom interact with the 2p^3 electrons of the nitrogen atom, resulting in a stronger bond and thereby a shorter bond length of 1.75 Å for the CuN dimer. The charge density contour plots of the Cu_2 and CuN dimers shown in Fig. 2 demonstrate this type of bonding. Fig. 2 also shows that the average Cu−Cu bond length is larger in the mono-doped as well as capped systems compared to that in the pure Cu_n clusters, indicating that the Cu−Cu bond strength decreases upon capping/doping. The enhanced stability of the capped systems is therefore mainly caused by the enhanced stability of the Cu−N bonds, as the number of Cu−N bonds in the ground state structures of the capped systems increases with increasing cluster size.
Finally, we address the main issue of the magnetic behavior of the three different classes of systems. First, for the pure clusters, the magnetic moments show an even-odd alternation in accordance with previous reports [27]. For each even-numbered Cu_n cluster, all valence electrons are paired, giving a closed-shell electronic structure which results in a zero net magnetic moment. Conversely, each odd-numbered Cu_n cluster has one unpaired electron, which contributes a net magnetic moment of 1 µ_B. Our calculated magnetic moments for the ground state structures of the N-capped as well as N-monodoped systems are given in Fig. 1. It is seen that the magnetic moments of the mono-doped systems also show an even-odd alternation, but with reversed period with respect to that of the corresponding pure systems. In other words, a Cu_nN cluster has a net magnetic moment of 1 µ_B for even values of n and zero for odd values of n. Considering the ground state structure of the Cu_2N cluster, there is some charge transfer from N to the Cu atoms due to hybridization between the N-2p and Cu-sd orbitals. This results in a net magnetic moment of 1 µ_B, which is solely contributed by the N atom. Similarly, for the MES of the Cu_3N cluster, the s electron of each Cu atom gets paired with a p electron of the N atom, as there is significant overlap between the Cu and N atoms (cf. Fig. 1), which results in a zero net magnetic moment. The trend in the net magnetic moments of the subsequent Cu_nN clusters with n > 3 is then readily understandable: the systems having an odd number of Cu atoms in excess of 3, such as the Cu_4N and Cu_6N clusters, each have one unpaired s electron, which gives rise to a net magnetic moment of 1 µ_B. Conversely, all the valence electrons are paired in a mono-doped system having an even number of Cu atoms in excess of 3, such as the Cu_5N cluster, which again results in a zero net magnetic moment. Note that this finding of low magnetic moments of the N-monodoped Cu_n clusters is in direct contrast with the case of N-monodoped Mn_n clusters, for which giant magnetic moments have been predicted [29]. In contrast to the mono-doped systems, the optimal N-capped copper clusters show significantly enhanced magnetic moments. Among the capped systems, the optimized Cu_2N_2 cluster has a special structure in which each Cu atom is directly connected to both N atoms and fulfills a perfect charge balance, which results in a zero net magnetic moment. However, the ground state structures of the other Cu_nN_n clusters have net magnetic moments of 8 µ_B, 10 µ_B, 12 µ_B and 14 µ_B for n = 3−6, respectively. We find that the enhanced magnetic moments of the capped systems result from the ferromagnetic coupling among all the N-atom as well as Cu-atom centered magnetic moments. In the N-capped Cu_n clusters, the Cu atoms become spin polarized because of charge transfer among the neighboring atoms. This spin polarization further magnetizes the p electrons of the N atoms through p−d hybridization, which in turn renders a ferromagnetic coupling among all the constituent atoms. In order to understand the microscopic origin of the enhanced magnetic moments of the N-capped systems, we have performed a Mulliken population analysis of the spins [30]. In Table I, we provide information on the magnetic moments at the Cu and N sites for the ground state as well as the first isomer of the N-capped clusters. We have also listed in Table I their total
spin, the energy difference measured with respect to the ground state energy, and the values of the Cu−N bond lengths and Cu−Cu−N bond angles. The Mulliken analysis yields averaged spin magnetic moments of 0.3−0.4 µ_B at the Cu sites and 1.3−1.5 µ_B at the N sites. The important point to note is that the moments at the Cu sites arise mainly from the Cu d orbitals, and those at the N sites from the N p orbitals. However, the major contribution to the enhanced magnetic moments of the N-capped clusters arises from the N atoms. Also note that the averaged magnetic moments at the Cu sites of the ground state structures of the capped clusters, though very small, show an overall decreasing trend with increasing cluster size. This allows us to go one step further and generalize the net magnetic moments of the ground state structures of the four N-capped clusters by the formula 2 + 2x, where the first '2' denotes the magnetic moment contributed by the core of Cu atoms, while '2x' is the magnetic moment contributed by the x N atoms attached as capping. This is a reasonable approximation because the N-capping allows charge transfer from the N atoms to the Cu atoms and induces magnetic moments on the Cu atoms totaling around 2 µ_B for each capped system. For a better understanding of the magnetic coupling, we have also plotted in Fig. 3 the spin density surfaces corresponding to the ground states of the Cu_nN_n clusters (upper panels) and the charge density contour plots corresponding to the ground states of the Cu_nN clusters (lower panels) for the size range n = 3−6. They again clearly indicate the ferromagnetic coupling of the constituent atom-centered moments for the capped systems and the enhanced hybridization tendency for the mono-doped systems, which results in their tiny net magnetic moments.
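A quick check of the 2 + 2x rule quoted above against the computed moments of the capped clusters (a trivial sketch):

```python
# Predicted net moment (in Bohr magnetons) of a Cu_xN_x cluster:
# ~2 mu_B induced on the Cu core plus ~2 mu_B per capping N atom.
for x in (3, 4, 5, 6):
    print(f"Cu{x}N{x}: predicted {2 + 2 * x} mu_B")   # prints 8, 10, 12, 14 mu_B
```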
In order to verify our GGA-DFT results discussed above, we have also studied the small N-capped Cu n clusters using hybrid DFT functional.We used a screened form of hybrid functional HSE06 proposed by Heyd, Scuseria and Ernzerhof with screening parameter 0.206.[32] Note that the hybrid HSE06 functional has been used recently to correct GGA-DFT predictions of formation energy of transition metal oxide crystals [33] as well as of magnetic interaction for transition metal clusters.[34] We find that the overall trend of enhanced magnetic moments of the N-capped Cu n clusters persists as found in the case of PBE calculations.In HSE calculations, we find that the N-atom centered magnetic moments get slightly increased and Cu-atom centered magnetic moments decrease slightly.This is attributed to the less hybridization between N-p and Cu-d orbitals.This is also reflected in slight increase of Cu − N bondlengths of the capped clusters in the HSE calculations.
It is thereby confirming that our result of the enhanced magnetic moments of the N-capped Cu n clusters is very robust irrespective of the choice of DFT functionals.It is therefore, also very desirable to look for its experimental verification.Recently, molecular beam deflection experiments have been employed to study magnetic moments of several capped transition metal clusters.[1,2] Such experiments would be useful in this respect.
B. Capping and endohedral doping in Cu13 cluster
To study the effect of endohedral doping in a N-capped cage-like Cu cluster, we consider a particular case of a Cu 13 cluster.First, to determine the MES of a pure Cu 13 cluster, we have considered three most probable starting guessed structures, namely icosahedral (ICO), hexagonal bi-layered (HBL) and buckled bi-planar (BBP) structures.For the less symmetric HBL/BBP structures, all the 13 Cu atoms lie on the surface, thereby forming a cage-like structure.The morphology of an icosahedral Cu 13 cluster can also be regarded as a cage of the twelve surface Cu atoms, encapsulating another Cu atom at the center of the cage.It is to be noted that previous works on small copper clusters indicate a rich variety of structures and the icosahedral symmetry based structure is predicted not to be the true ground state structure for a Cu 13 cluster.[26,35] Our calculated total binding energy and magnetic moment for the locally optimized three structures of the Cu 13 cluster, are 29.05 eV and 5 µ B respectively for the optimal ICO structure, 29.53 eV and 1 µ B respectively for the optimal HBL structure and finally, 29.65 eV and 1 µ B respectively for the optimal BBP structure.So it is the optimal BBP structure of a pure Cu 13 cluster which appears as its MES in our calculations.Thereafter, each of the three structures was capped with N-atoms and we explored the potential energy surface to look for the MES of the N-capped Cu 13 cluster.To cap a HBL/BBP structure of the Cu 13 cluster, total 13 N atoms have been used to bind with all its 13 surface Cu atoms.On the other hand, total 12 N atoms have been attached with its 12 surface Cu atoms to cap the icosahedral Cu 13 cluster.Then each structure of the N-capped Cu 13 cluster, has been optimized for all possible spin multiplicities.After structural optimization, it is seen that the estimated values of ∆ 0 c /Natoms are 1.81 eV, 1.64 eV and 1.65 eV for the optimal ICO-Cu 13 N 12 , BBP-Cu 13 N 13 and HBL-Cu 13 N 13 clusters respectively.It is therefore, unambiguously showing that the N-capping to a Cu 13 cluster, favors an icosahedral Cu 13 core.
It is also interesting to note that the optimized structure of the icosahedral Cu 13 N 12 cluster has a total magnetic moment of 29 µ B which is very large compared to that of the pure optimal Cu 13 cluster and therefore, the sensitivity of enhanced magnetic moments of a Ncapped Cu n cluster to cluster size, is beyond questionable.Similar to the smaller N-capped Cu n clusters, the large magnetic moment of the Cu 13 N 12 cluster is contributed mainly by the spin polarized N atoms which carry in this case an average moment of around 1.4 µ B /Natom.We note that there is a net charge transfer from the N atoms to the Cu atoms, which creates partial occupancy in the 3d-orbital of the Cu atoms and thereby, inducing spin polarization to them.Mulliken population analysis shows that the N-capping in the icosahedral Cu 13 cluster creates a local magnetic moment of about 0.25 µ B /atom to the surface Cu atoms.Above all, the spin-polarized N atoms are ferromagnetically coupled among themselves and also with the Cu-atom centered moments which results the giant net magnetic moment.
The interesting point is that the magnetic moment of the Cu 13 N 12 cluster can be improved further by substituting the central Cu atom with a Cr atom. Recently, the role of the Cr atom as an endohedral dopant for constructing molecular units of highly magnetic functionalized materials has been investigated for hydrogenated Si fullerenes. [36] With this in view, we have explored the effects of Cr substitution at one Cu site in the icosahedral Cu 13 N 12 cluster. We find that the optimized Cr-doped cluster prefers the Cr substitution at the central site and possesses a higher net magnetic moment of 34 µ B along with a substantial energy gain of ∆ 0 c = 18.1 eV. It is also important to note that an optimized bare Cu 12 Cr cluster has zero net magnetic moment. Therefore, the N-capped Cr-encapsulated Cu 13 cluster shows a significant improvement in magnetic moment compared to both the bare pure Cu 13 cluster and the Cu 12 Cr cluster. Though the magnetic moments of the Cu and N atoms are enhanced after Cr substitution, it is the moment of the Cr impurity that contributes most of the extra magnetic moment of the Cu 12 CrN 12 cluster relative to the Cu 13 N 12 cluster. Moreover, all the atomic moments are ferromagnetically aligned. Fig. 4 shows the spin density plots for the optimized structures of the Cu 13 N 12 and Cu 12 CrN 12 clusters; it clearly indicates the ferromagnetic alignment of the constituents in the two systems. Further, to understand the effects of N-capping on the magnetic properties of the pure Cu 13 cluster and the endohedral Cr-doped Cu 13 cluster, we have also plotted in Fig. 4 the projected density of states (PDOS) of the different species of atoms in the two systems. Comparing the PDOS of the optimized Cu 13 N 12 and Cu 12 CrN 12 clusters, we see that there is hardly any change in the Cu-PDOS and N-PDOS between the two systems. For the Cr-PDOS of the Cu 12 CrN 12 cluster, however, there is a significant difference between the majority and minority spin channels, which indicates the large magnetic moment carried by the Cr atom. Mulliken population analysis of the spins shows that the unpaired electrons of the central Cr atom indeed reside predominantly on it and do not hybridize appreciably with the cage atoms. This is also reflected in the lower value of ∆ 0 c for the optimal Cu 12 CrN 12 cluster compared to that of the optimal Cu 13 N 12 cluster. The interaction of the central Cr atom with the cage atoms appears to be mediated largely via the Cr 4s state, which is possibly also manifested in the observed symmetric position of the Cr atom right at the center of the optimized cage.
To gain a clear understanding of the nature of the interaction of the central atom with the surrounding 12 Cu atoms, and also that of the capping N atoms with the surface Cu atoms, we have calculated the charge density difference ∆ρ as ∆ρ = ρ(Cu 12 CrN 12 ) − ρ(Cu 12 ) − ρ(Cr) − ρ(N 12 ), where ρ(Cu 12 CrN 12 ) is the total charge density of the optimized Cu 12 CrN 12 cluster and the rest are the charge densities of the isolated atoms of each species. A positive value of ∆ρ indicates accumulated charge and a negative value depleted charge. Analysis of ∆ρ therefore gives clues about the overall charge redistribution after introducing the Cr atom at the central site. Fig. 5 shows the plot of ∆ρ for the MESs of the Cu 13 N 12 and Cu 12 CrN 12 clusters. It clearly indicates a larger accumulation of charge at the central Cr site of the Cu 12 CrN 12 cluster, implying its strongly magnetic character and weak interaction with the surface Cu atoms. Our Mulliken population analysis shows that this results in a Cr-centered magnetic moment of around 3.2 µ B , contributed predominantly by its d orbital.
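The paper does not describe its post-processing tools, so the following is only a minimal sketch of the ∆ρ construction, assuming the four densities have already been exported on a common real-space grid and converted to numpy arrays; all file names and the grid volume element are hypothetical.

import numpy as np

# Hypothetical inputs: densities on the same real-space grid.
rho_total = np.load("rho_Cu12CrN12.npy")   # optimized Cu12CrN12 cluster
rho_cu12  = np.load("rho_Cu12_atoms.npy")  # 12 isolated Cu atoms at the same positions
rho_cr    = np.load("rho_Cr_atom.npy")     # isolated Cr atom at the cage center
rho_n12   = np.load("rho_N12_atoms.npy")   # 12 isolated N atoms at the same positions

# Charge density difference: positive = accumulation, negative = depletion.
delta_rho = rho_total - rho_cu12 - rho_cr - rho_n12

# Integrated accumulated / depleted charge; dV is a placeholder grid volume element.
dV = 0.001  # e.g. Angstrom^3 per grid point, assumed
print("accumulated charge:", delta_rho[delta_rho > 0].sum() * dV)
print("depleted charge:  ", delta_rho[delta_rho < 0].sum() * dV)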
IV. SUMMARY AND CONCLUSIONS
In summary, N-capping is more promising than N-monodoping for inducing a giant magnetic moment in a Cu cluster. The N-capping is associated with enhanced hybridization of the valence p orbitals of the N atoms with the s-d hybridized orbitals of the Cu atoms, and it induces spin polarization in the Cu atoms. The ferromagnetic coupling among the constituent atoms gives rise to the giant magnetic moments of the capped systems. This study also shows that the magnetism of the N-capped cage-like structures of a Cu cluster can be enhanced further upon Cr encapsulation. As the optimal Cu 12 CrN 12 cluster possesses both enhanced hybridization among the valence orbitals of the constituent atoms and a larger net magnetic moment, a Cr-encapsulated cage-like structure of an N-capped Cu cluster may be a better candidate for dilute magnetic semiconductors, and experimental investigations in this direction would be exciting.
FIG. 1: (Color online) Geometries of the capped CunNn and mono-doped CunN clusters in their ground states for n = 2-6. The violet colored larger balls and red colored smaller balls represent Cu and N atoms respectively. The numbers in parentheses give the energy gain (∆ 0 c for the capped systems and ∆ 0 m for the mono-doped systems) and the total magnetic moment for the respective ground state.
FIG. 2: (Color online) Plot of charge density contours (upper panels) of the Cu2 and CuN dimers and variation of the Cu-Cu and Cu-N bond lengths (lower panels) in the ground state structures of the pure, mono-doped and capped systems. Bond lengths of the dimers in the upper panel are in Angstrom.
FIG. 3: (Color online) Plot of spin density surfaces (upper panels) of the capped systems CunNn and charge density contours projected onto the xy plane (lower panels) for the mono-doped systems CunN in their ground states with n = 3-6. The charge density scale shown alongside the lower panel is the same for all the mono-doped structures, while an isovalue of 0.1 e − / Å3 is used for plotting the spin density in the upper panel.
FIG. 4: (Color online) Plot of spin density surface (upper panel) and projected density of states (lower panel) for the ground state structures of the Cu13N12 and Cu12CrN12 clusters. The green colored ball in the Cu12CrN12 cluster structure represents the Cr atom. An isovalue of 0.1 e − / Å3 and a smearing width of 0.1 eV are used in the plots of the spin density and PDOS respectively.
FIG. 5: (Color online) Plot of the charge transfer ∆ρ for the ground state structures of the Cu13N12 and Cu12CrN12 clusters. The isovalue is chosen at 0.03 e − / Å3 . The red (blue) colored surface represents accumulated (depleted) charge.
…°, 140° and 135° for the optimal Cu 3 N 3 , Cu 4 N 4 , Cu 5 N 5 and Cu 6 N 6 clusters respectively. This uniform variation of the Cu−Cu−N angle establishes it as a guiding rule for the ground state geometries. On the other hand, for the optimized structures of the N-monodoped clusters, the average value of the Cu−N−Cu angle varies around 90°, namely Cu−N−Cu = 91.5°, 89.2°, 81.7°, 92.1° and 84.9° for the optimized Cu 2 N, Cu 3 N, Cu 4 N, Cu 5 N and Cu 6 N clusters, respectively. | 2015-12-25T06:10:23.000Z | 2015-01-12T00:00:00.000 | {
"year": 2015,
"sha1": "e36f12bcf82e0d9b624cf30974c8169db6ceac15",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/1512.07964",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e36f12bcf82e0d9b624cf30974c8169db6ceac15",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine",
"Physics"
]
} |
6635301 | pes2o/s2orc | v3-fos-license | Early and late selection: effects of load, dilution and salience
The present issue focuses on the classic question of early vs. late selection and evaluates the current status of the perceptual load theory that has been offered as an intermediate solution. Several of the papers compare or contrast perceptual load with an alternative explanation: perceptual dilution. The first group of papers reports evidence that is inconsistent with the perceptual load theory but is generally consistent with the dilution theory.
Roper and Vecera (2013) report that flanker effects can be found under high perceptual load. Extending the duration of the display, and particularly of the relevant target, produces reliable congruency effects even under high perceptual load. The authors therefore argue that factors such as stimulus and encoding demands contribute to the load effect and that visual short term memory serves as an additional bottleneck when stimuli are briefly presented. Yeshurun and Marciano's (2013) findings also challenge the perceptual load theory. The authors found that task difficulty, as manipulated by degradation of visual information, did not affect attentional selection and flanker interference. This is in contrast to the claim that increasing sensory load increases distractor interference. Furthermore, the basic load effect was not replicated in all 4 experiments, and flanker effects were found even under high perceptual load.
Mevorach et al. (2014) tested patients with unilateral neglect and found that contralesional neutral elements eliminated the interference presented by a distractor. The authors argue that given the notion that no attentional resources are allocated to the contralesional field, perceptual load should not be affected by presenting items in the contralesional field. Instead they suggest that neutral stimuli dilute the flanker effect and that attentional selection is determined by dilution rather than load. This is in line with Benoni and Tsal (2013) who present a critical review of perceptual load theory. They challenge the theory's assumptions and supporting evidence, and provide supportive arguments for the alternative dilution theory.
Chen and Cave (2013) further studied the dilution effect. They present data that is consistent with dilution but not with perceptual load. However, they argue that the current conception of dilution is simplified. In particular the processing of neutral items is not only dependent on the number of stimuli present but also on complex interactions between top-down and bottom-up processes. Thus, both distractor and neutral elements in a multi element display compete for the same limited attentional resources.
In their opinion paper Linnell and Caparos (2013) argue that in accord with the perceptual load theory the spatial profile of attention was more focused when perceptual load was high and less focused when it was low. However in contrast to the theory this holds only when cognitive resources were available. Indeed, the authors emphasize the role of cognitive engagement in the task at hand and suggest that variations in perceptual load modulate task difficulty and this in turn alters cognitive engagement and motivation, factors often neglected in the study of attention.
More evidence for a strategic component in the seemingly automatic processing of task-irrelevant information comes from Biggs and Gibson (2013). They show that prior experience and situational expectations modulate the degree to which irrelevant information is processed. As they argue, this might render the assumption of a broad versus narrow allocation of visual attention in explaining effects of irrelevant information processing superfluous.
The review of Scalf et al. (2013) presents a hybrid neural competition theory that is generally consistent with both perceptual load and dilution theories. This theory reinforces the original view that low perceptual load is associated with a stronger impact of task-irrelevant information. As the authors point out, this might reflect different processing strategies in conditions with high and low perceptual load: While low perceptual load might allow for bottom-up-driven target selection, high perceptual load might call for top-down regulation. The latter leads to stronger filtering, which reduces the impact of task-irrelevant distractors.
The remaining papers use perceptual load theory as a direct or indirect context for studying other aspects of attentional selection. The role of working memory in regulating the degree to which distractors can be ignored is the focus of de Fockert's (2013) review. In support of the original assumption, the review provides strong evidence that higher working memory load makes it more difficult to ignore task-irrelevant distractors. This fits with the idea that working memory has an active role in gating irrelevant information.
Forster (2013) takes the perceptual load theory a step forward into the realm of mind-wandering and thought distraction. In her review she carefully distinguishes between different types of task relevancy and between external and internal (e.g., task-unrelated thoughts) sources of distraction. She argues that perceptual load theory is a powerful and largely universal framework to study distraction effects.
Parks et al. (2013) used SSVEPs together with ERPs to study the effect of attentional load in a go/no-go task. The findings reveal a center-surround configuration of both facilitation and suppression in the visual field.
Swallow and Jiang (2013) bring in a novel perspective by relating findings on the impact of perceptual load to the attentional boost effect—the observation that distractor processing can benefit from temporal synchronicity with target presentation. As they point out, the seemingly automatic processing of distractors with high perceptual load might reflect a kind of “intentional automatization”: the cognitive system might be programmed to take in information automatically whenever being triggered by a target, suggesting that automatic processing might be a byproduct of intentional selection.
Folk (2013) makes an interesting conceptual distinction between processing costs produced by response-incompatible distractors on the one hand and search costs on the other. By combining aspects of the original perceptual-load paradigm and the classical singleton-search paradigm, he provides evidence that search costs remain even under conditions where the compatibility of distractors no longer affects processing.
Finally, Moher et al. (2013) tested selection processes without the explicit requirement of target identification. They found that detection performance remained high in spite of focal attention manipulations (i.e., target saliency, availability of cognitive resources, and familiarity) that eliminated identity-repetition effects. Thus, the authors conclude simple target detection is not dependent on focal attention.
We hope you will find this Research Topic interesting and informative. Enjoy your reading!
| 2017-05-05T07:06:22.073Z | 2014-03-20T00:00:00.000 | {
"year": 2014,
"sha1": "2c359641ac59b84b0985f76783e8caecb33ebe53",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2014.00248/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2c359641ac59b84b0985f76783e8caecb33ebe53",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
55305009 | pes2o/s2orc | v3-fos-license | Using Basis , Futures Price , and Futures Price Spread as Barometers for Storage Decisions
The purpose of this paper is to determine the importance of the strength and weakness of basis, futures spread, and futures prices as barometers for producers to use in deciding whether to store or not. Basis is the most important market signal for wheat producers to use when deciding whether to store and hedge or sell their wheat at harvest.
Introduction
Every year grain producers and grain elevator managers must decide whether to store grain or sell it at harvest. This decision is complex, and it is like a game of chance in which the probability of winning or losing changes each time the game is played (Heifner, 1966). The purpose of this study is to determine the importance of the strength and weakness of basis, futures price, and futures price spread in predicting returns to storage.
Producers and managers want to know if there are market signals that they can use to make a harvest-time store/sell decision. They desire rules of thumb that can be used to make decisions. Extension economists and market advisors frequently use indicators such as basis, futures prices, and futures price spread as signals for a storage decision. Usset (2009), for example, in the context of a marketing plan for producers, includes those indicators in a marketing decision tree. (Note 1) Another example of rules of thumb is shown in table 1. The source of this table is unknown, but its author emphasized the use of two of these signals, futures price and basis, in making storage decisions. Some undergraduate agricultural marketing instructors teach their use as well. One textbook, for example, advocates using "basis as a barometer" when making storage decisions (Purcell and Koontz p.32).
From a theoretical perspective, Working (1953) suggested that basis should be a useful indicator of potential returns to storage. In contrast, futures price level is not expected to be a signal since past studies have shown that futures markets are mostly efficient (Kastens and Schroeder, 1996; Tomek, 1997). The empirical research, however, is inconsistent regarding whether basis, futures price spread, and/or futures price level can be used as a signal to store or not (Zulauf and Irwin, 1998; Kastens and Dhuyvetter, 1999; Yoon and Brorsen, 2002). Thus, we propose to revisit this issue in the hope of providing a more definitive answer to the empirical question of whether basis, futures price level, and/or futures price spread can serve as a barometer of whether to store or not. This study is quite similar to Zulauf and Irwin (1998) as well as Kastens and Dhuyvetter (1999). However, our regression approach should lead to a more powerful test than the simulation strategies in past research and thus will more clearly measure the usefulness of these signals for storage decisions. Simulation methods are roughly equivalent to regression against a dummy variable of whether or not a signal is above or below a threshold. Such simulation methods may have lower power than the regression approach used here.
In section 2, the theory of the price of storage is presented to provide an understanding of inter-temporal price relationships between spot and futures prices. Theory suggests that futures price level should be a worthless signal, but that the level of basis potentially has value as a storage indicator. Futures price spread is a market indicator of storage returns in that it provides market prices for a commodity to be stored and sold in the future relative to selling it earlier, but little empirical evidence exists to support its usefulness. In section 3, the data used in this study are described; in section 4, the econometric models are presented to measure various returns to storage against measures of the strength and weakness of basis, futures price level, and futures price spread; in section 5, the model misspecification tests are presented to verify that the underlying model assumptions hold; and in sections 6 and 7 results and conclusions are presented.
Theory
The theory of the price of storage was first proposed by Kaldor (1939) to explain the inter-temporal price relationship between spot and futures prices. Working (1949) viewed the returns to storage as being determined by the supply and demand for storage. Thus, when wheat stocks are large, the demand for storage is large and the price of storage is expected to be relatively large. However, if wheat stocks are low, then the economic benefits/returns of storing wheat are small. Furthermore, the theory of the price of storage only holds for highly storable and continuous inventory commodities such as wheat (Brennan, 1958). Over the years, studies on the theory of the price of storage have evolved following Kaldor (1939), including Telser (1958), Williams and Wright (1982), Benirschka and Binkley (1995), and Seamon, Kahl, and Curtis (2001).
The theory of the price of storage includes two different but overlapping views. The first view explains the difference between the spot and futures prices in terms of interest forgone in storing a commodity, physical storage costs, and convenience yield on inventory. This view was shown in the works of Kaldor (1939), Brennan (1958, 1991), and Telser (1958). Under the second view, Cootner (1960), Dusak (1973), Breeden (1980), and Hazuka (1984) show that the theory of the price of storage can be explained by dividing futures prices into an expected risk premium and a predicted future spot price (Fama and French, 1987).
Combining these slightly different views, we define the following variables to express the theory of the price of storage. Define F T as the futures price for delivery of a commodity at time T and S t as the spot price at time t. The price of storage from holding the commodity from time t until time T is then defined as

F T − S t = S t R + W + P − C, (1)

where R is the interest rate, so that S t R is the opportunity cost of holding stocks, W is the physical storage cost, P is the risk premium, and C is the convenience yield.
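To make the cost-of-carry logic in equation (1) concrete, the following sketch works through the arithmetic with made-up numbers; none of these values come from the paper's data, and the dollar figures are purely illustrative.

# Illustrative only: hypothetical values, not the paper's data.
S_t = 6.00   # June spot price ($/bu)
R   = 0.03   # interest over the storage window (fraction)
W   = 0.20   # physical storage cost over the window ($/bu)
P   = 0.05   # risk premium ($/bu), assumed
C   = 0.10   # convenience yield ($/bu), assumed

price_of_storage = S_t * R + W + P - C    # right-hand side of equation (1)
implied_futures  = S_t + price_of_storage # futures price consistent with full carry
print(price_of_storage, implied_futures)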
The concept of basis is important because it combines both the spot and futures prices, which reflect the current and expected demand and supply conditions, respectively (Leuthold and Peterson, 1983; Purcell and Koontz, 1999). While equation (1) relates the spot and futures prices (the basis), the key question that arises is "what does the theory of the price of storage say about basis and/or futures price as a market signal?" The theory of the price of storage suggests that initial basis can help producers and grain managers decide whether to store or sell their grain at harvest, since the spot and futures prices should converge as t approaches T. The reliability of this prediction might be affected if physical storage costs (including costs of maintaining quality) or the opportunity cost of storing grain change over time, or if the risk premium or convenience yield vary, so that the ending basis at time T varies over time. Further, physical storage costs increase depending on the quantity of the commodity stored. As more grain needs to be stored, it must be stored in higher cost facilities or locations. Moreover, if convenience yield is high then returns to storage should be low, a signal for grain owners to sell their stocks. A measure that compares the initial basis with the expected basis at time T should provide a prediction of returns to storage.
Conversely, although a high futures price indicates that wheat will be valuable later, to the extent futures markets are efficient, that value should already be reflected in current prices, so futures prices cannot be used as a barometer by grain owners to decide whether to store or sell their grain. As Kastens and Schroeder (1996) stated, the futures market is expected to be mostly efficient, so futures price level should not help predict price changes. Alternatively, as Yoon and Brorsen (2002) explained, behavioral finance aspects such as overconfidence, anchoring, and regret by grain traders provide a possible theoretical reason to argue for mean reversion in futures prices, and thus for futures price level to aid in predictions.
Finally, futures price spread, the difference between the price of a distant contract and the price of a nearby contract, is a market indicator of storage returns that some have recommended as a storage signal (see, for example, Usset). To the extent cash prices converge to futures prices at contract expiration, the distant futures price minus the nearby futures price should also provide a prediction of returns to storage.
Therefore, this study generates three testable hypotheses about the theory of the price of storage. First, grain producers and elevator managers can use basis as a market signal to decide whether to store or sell their grain; second, they can use futures price as a market signal to decide whether to store or sell their grain; and third, they can use futures price spread as a market signal to decide whether to store or sell their grain.
Data
The commodity chosen is Oklahoma wheat. Oklahoma monthly average cash wheat prices are obtained from the National Agricultural Statistics Service of the United States Department of Agriculture (NASS/USDA) for 1975-2005. (Note 2) Monthly average Kansas City Board of Trade (KCBT) December wheat contract prices are obtained from the KCBT for the same period as the monthly average cash wheat price series. The daily commercial storage costs represent the physical cost of storage charged by elevators and the opportunity cost of interest. The commercial grain storage rates were obtained from the Oklahoma Grain and Feed Association for 1975-2005, and the monthly cost-of-carry loan interest rates were obtained from the Economic Research Service of the United States Department of Agriculture (ERS/USDA) for 1975-2005. Although an individual producer would find analysis using prices from a particular location more useful than state average prices, the purpose of the paper is to test the usefulness of recommended rules of thumb. Using state averages reduces noise in the data, raising the power of the statistical tests.
Dependent Variables
The dependent variables considered are gross revenue, net revenue, basis change, basis change less cost of carry, and futures price change. It is assumed that a producer choosing to store grain at harvest in June stores until November. Klumpp, Brorsen, and Anderson (2007) note that on average during the years 1991-2001 Oklahoma wheat prices declined after December, so there is little likelihood of profitable storage after then, especially after storage costs are deducted. (In southern Oklahoma the net price declines after September, so a selling date of September is considered as well.) Gross revenue is defined as the difference between the November and June cash price for each year from 1975-2005. Mathematically, gross revenue is expressed as

Gross Revenue = P N C − P J C , (2)

where P N C is the November (or September) cash price and P J C is the June cash price. This is the equivalent of a producer's price gain from holding grain in storage, unhedged, until November (or September) rather than selling at harvest in June. Net revenue, where storage costs are deducted from gross revenue, was also considered, but since results using net revenue did not differ from those with gross revenue, only results with gross revenue are presented here.
Basis change is the return to hedged storage as in Zulauf and Irwin (1998). It is defined as the difference between the November and June monthly average December-contract basis. Mathematically, basis change is expressed as

Basis Change = (P N C − P N DF ) − (P J C − P J DF ) = B N − B J , (3)

where P N C is the cash price in November, P N DF is the December contract futures price in November, P J C is the cash price in June, P J DF is the December contract futures price in June, B N is the basis in November, and B J is the basis in June. This is the equivalent of a producer's gain from storing grain at harvest in June, hedging 100% of it on the December futures contract, and liquidating the hedge in November.
The variable Basis Change less Cost of Carry is basis change minus the cost of carry from June through the sale date. This is the equivalent of a producer's gain net of storage cost from holding hedged grain in storage until November rather than selling at harvest in June.
Futures price change is defined as the difference between the November and June monthly average December futures price, and is expressed as

Futures Price Change = P N DF − P J DF , (4)

where P N DF is the November average futures price for the December contract and P J DF is the June average futures price for the December contract.
Independent Variables
Working (1953) suggested that initial basis could be used to forecast basis convergence. Initial basis is defined as June's observation of the cash price minus the December futures contract price. Working's suggestion of using initial basis implicitly assumes that ending basis is approximately the same in every year, so that initial basis would be closely correlated with basis convergence. However, storage costs, transportation costs, and supply/demand fundamentals in the cash market relative to the futures market may have changed over time, affecting expected ending basis. Some authors (e.g., Purcell and Koontz, 1999, p.35) have suggested using expected basis convergence as a storage signal. Basis deviation, or expected basis change, is an estimate of the expected amount of basis convergence from June through November, and is considered here as an alternative to initial basis. (Note 3) Dhuyvetter (2010) presents crop basis maps of both basis and basis deviation, showing that the two measures can differ over time and across locations. Basis deviation is defined here as June's observation of the cash price minus the December futures contract price observed in June (B J ), minus an expectation of the basis in November. Here, that expectation is formed using an average of the previous five years' November observations of the cash price less the December futures contract price observed in the same November (B NAvg ), and is expressed as

Basis Deviation = B J − B NAvg . (5)

Similarly, futures price deviation is defined as the difference between June's price for the December futures contract (P J DF ) and the average of the previous five years' November observations of the December futures contract prices (P N DFAvg ), and is expressed as

Futures Price Deviation = P J DF − P N DFAvg . (6)

This variable could capture mean reversion in futures prices. Kim, Brorsen, and Anderson (2010) argue that research on profit margin hedging and rollover hedging typically shows larger returns than selling at harvest, but the difference is rarely statistically significant. Thus, there is at least some weak empirical evidence in favor of considering the futures price deviation variable.
Futures price spread is defined as June's price for the December futures contract (P J DFut) minus June's price for the July futures contract (P J JulF), so that

Futures Price Spread = P J DFut − P J JulF . (7)

This difference reflects the market-forecasted gross return to storage, in that arbitraging those two contracts (e.g., simultaneously selling a December contract and buying a July futures contract, taking delivery on the July futures contract, and then liquidating the hedge as the December contract nears expiration) will provide that difference as revenue to the decision maker. This measure is also equivalent to using the harvest basis (July contract in June) to forecast the storage basis, which Taylor, Dhuyvetter, and Kastens (2006) argued provided better forecasts of basis than did moving averages.
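As a rough illustration of how the dependent and independent variables in equations (2) through (7) can be built from monthly average price series, the following sketch uses pandas; the column names, the three example crop years, and all price values are hypothetical and are not taken from the paper's data.

import pandas as pd

# Each row is one crop year, with monthly average prices ($/bu); values are made up.
df = pd.DataFrame({
    "cash_jun":    [3.10, 3.45, 2.90],   # June cash price
    "cash_nov":    [3.30, 3.40, 3.20],   # November cash price
    "dec_fut_jun": [3.40, 3.70, 3.25],   # December futures, June average
    "dec_fut_nov": [3.45, 3.55, 3.35],   # December futures, November average
    "jul_fut_jun": [3.20, 3.55, 3.05],   # July futures, June average
}, index=[2003, 2004, 2005])

# Dependent variables
df["gross_revenue"] = df["cash_nov"] - df["cash_jun"]                    # eq. (2)
df["basis_jun"]     = df["cash_jun"] - df["dec_fut_jun"]
df["basis_nov"]     = df["cash_nov"] - df["dec_fut_nov"]
df["basis_change"]  = df["basis_nov"] - df["basis_jun"]                  # eq. (3)
df["fut_change"]    = df["dec_fut_nov"] - df["dec_fut_jun"]              # eq. (4)

# Independent variables (signals observed in June); with only three illustrative
# years the 5-year averages are NaN, whereas the real sample spans 1975-2005.
exp_nov_basis       = df["basis_nov"].rolling(5).mean().shift(1)
df["basis_dev"]     = df["basis_jun"] - exp_nov_basis                    # eq. (5)
exp_nov_fut         = df["dec_fut_nov"].rolling(5).mean().shift(1)
df["fut_price_dev"] = df["dec_fut_jun"] - exp_nov_fut                    # eq. (6)
df["fut_spread"]    = df["dec_fut_jun"] - df["jul_fut_jun"]              # eq. (7)

print(df[["gross_revenue", "basis_change", "fut_change", "fut_spread"]])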
The second step is to determine the relationship between the dependent variables (gross revenue, basis change, basis change less storage cost, and futures price change) and the independent variables (initial basis or basis deviation, futures price deviation, and futures price spread). This will provide a way to test the hypotheses that grain owners can use basis, futures prices, and futures price spread as market signals to store or sell their wheat at harvest.
Ordinary least squares regression models are developed using data from 1975-2005. Using equations (5), (6), and (7), the dependent variables are each regressed on the independent variables as either

Y t = β 0 + β 1 Initial Basis t + β 2 Futures Price Deviation t + β 3 Futures Price Spread t + ε t , (8)

or

Y t = β 0 + β 1 Basis Deviation t + β 2 Futures Price Deviation t + β 3 Futures Price Spread t + ε t , (9)

where Y t is the dependent variable at time t and ε t is the error term. The regression model presented in equation (8) is applied using basis change as the dependent variable. The regression model presented in equation (9) is applied using basis change, basis change less cost of carry, futures price change, and gross revenue as dependent variables.
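The paper reports these regressions as estimated with OLS; as one possible implementation, equation (9) with basis change as the dependent variable could be fit with statsmodels, assuming a DataFrame df containing the columns constructed in the previous sketch for the full 1975-2005 sample (the toy three-year example above would not have enough history).

import statsmodels.api as sm

# Drop years without a full 5-year history for the deviation variables.
data = df.dropna(subset=["basis_dev", "fut_price_dev", "fut_spread"])

X = sm.add_constant(data[["basis_dev", "fut_price_dev", "fut_spread"]])
y = data["basis_change"]

ols_fit = sm.OLS(y, X).fit()
print(ols_fit.summary())   # coefficients and p-values analogous to Tables 2 and 3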
Model Misspecification Tests
The third step is to conduct misspecification tests for all the regression models. The misspecification tests conducted are normality, static homoskedasticity, autocorrelation, joint conditional mean, and joint conditional variance (McGuirk, Driscoll, and Alwang, 1993). Each misspecification test, along with its corresponding null and alternative hypotheses, is presented below.
The omnibus test (K 2 ) is used to detect deviation from normality as a result of either skewness or kurtosis. The test statistic is

K 2 = Z(b 1 ) 2 + Z(b 2 ) 2 ,

where K 2 is the omnibus test statistic, Z(b 1 ) represents skewness and is asymptotically standard normal with mean zero and variance one, and Z(b 2 ) represents kurtosis and is asymptotically standard normal with mean zero and variance one. The null hypothesis is that the errors are normally distributed, against the alternative that they are not.
The static homoskedasticity test is conducted using a regression specification error test with Kolmogorov-Gabor polynomials (KG2). Mathematically, the artificial regression is

ε̂ t 2 = γ 0 + γ 1 KG2 t + v t ,

where ε̂ t 2 is the squared predicted error term, KG2 t denotes the Kolmogorov-Gabor polynomial terms, and v t is the error term.
The autocorrelation test is conducted with an artificial regression of the form

ε̂ t = γ 0 + γ 1 X t + γ 2 ε̂ t−1 + v t ,

where ε̂ t is the predicted error term, X t is the set of independent variables, ε̂ t−1 is the lagged predicted error term, and v t is the error term.
Conditional mean tests are conducted jointly to test for parameter stability, functional form, and independence. Mathematically, the artificial regression is

ε̂ t = γ 0 + γ 1 X t + γ 2 P t + γ 3 F t + γ 4 I t + v t ,

where ε̂ t is the predicted error term, X t is the set of independent variables, P t represents structural change using a time trend, F t represents non-linearity using the KG2 test, I t is the lagged error term, and v t is the error term.
Conditional variance tests are conducted to check for static and dynamic heteroskedasticity. This test is based on the artificial regression

ε̂ t 2 = γ 0 + γ 1 P t + γ 2 S t + γ 3 D t + v t ,

where ε̂ t 2 is the squared predicted error term, P t allows for structural change using a time trend, S t allows for static heteroskedasticity using the KG2 test, D t allows for dynamic heteroskedasticity, and v t is the error term.
In models where the normality assumption is violated, the nonparametric bootstrap method is used. This method does not make any distributional assumption such as normality (Greene, Chapter 16). When the autocorrelation assumption is violated, the model is estimated using maximum likelihood (Gujarati, 1995).
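The paper does not spell out its bootstrap procedure, so the following is only a minimal pairs-bootstrap sketch for the OLS coefficients, reusing the data and column names assumed in the earlier sketches; the resampling scheme, the number of replications, and the sign-based two-sided p-value are all illustrative choices rather than the authors' exact method.

import numpy as np
import statsmodels.api as sm

def pairs_bootstrap(data, y_col, x_cols, n_boot=5000, seed=0):
    # Resample (y, X) rows with replacement and re-fit OLS each time;
    # returns the bootstrap distribution of the coefficients.
    rng = np.random.default_rng(seed)
    n = len(data)
    draws = []
    for _ in range(n_boot):
        sample = data.iloc[rng.integers(0, n, n)]
        X = sm.add_constant(sample[x_cols], has_constant="add")
        draws.append(sm.OLS(sample[y_col], X).fit().params.values)
    return np.array(draws)

boot = pairs_bootstrap(data, "basis_change",
                       ["basis_dev", "fut_price_dev", "fut_spread"])

# Rough two-sided bootstrap p-value: twice the smaller tail probability around zero.
p_vals = 2 * np.minimum((boot <= 0).mean(axis=0), (boot >= 0).mean(axis=0))
print(dict(zip(["const", "basis_dev", "fut_price_dev", "fut_spread"], p_vals)))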
Results
Tables 2 and 3 report parameter estimates for four different models in which the dependent variables (basis change, basis change less storage cost, futures price change, and gross revenue) are regressed on basis deviation, futures price deviation and futures price spread. A fifth model regresses basis change on initial basis (rather than basis deviation), futures price deviation and futures price spread. The results in Table 2 assume that grain is stored from harvest until the end of November, and the December contract is used for hedging. The results in Table 3 assume that grain is stored from harvest until the end of August, and the September contract is used for hedging. (Note 5) The regression results indicate that basis deviation is statistically significant in predicting basis change and basis change less storage cost. Initial basis is also statistically significant in predicting basis change. The signs are negative in these cases, indicating that the lower June basis is, or the lower June basis is relative to the expected basis at delivery, the higher is the return to hedged grain. Basis deviation is not significant in predicting futures price change or gross revenue. Neither futures price deviation nor futures price spread is significant in predicting any of the dependent variables. This result is consistent with efficient markets theory, with a statistically more powerful empirical test than some previous approaches have provided.
Zulauf and Irwin (1998) found that basis is a useful signal for hedged storage, but not for unhedged storage. Although our results essentially agree with theirs, our interpretation is slightly different. We argue that basis is a useful indicator whether the grain is hedged or unhedged, but the statistical power of the test is larger with hedged storage.
Although the coefficient on basis deviation is not statistically significant in predicting gross revenue (unhedged storage), the results in Table 2 suggest that, to the extent that futures prices are efficient, basis deviation is a useful predictor of returns to even unhedged storage. By definition, the dependent variable Basis Change equals Gross Revenue minus Futures Price Change; in other words, Basis Change (the return to hedged storage) is equal to the change in cash prices less the change in futures prices. If futures prices are efficient, expected Futures Price Change is zero, and expected basis change (Basis Deviation) predicts an equivalent change in basis (Basis Change) and in Gross Revenue (the unhedged return to storage).
Supporting this theoretical argument, in Table 2 the coefficient on Basis Deviation in the Basis Change regression (-0.4182) is very near the difference between the coefficients on Basis Deviation in the Gross Revenue and Futures Price Change regressions (-0.2312 - 0.20225 = -0.4376). If the coefficient on Basis Deviation in the Futures Price Change regression were zero, the effect of Basis Deviation on Basis Change would be almost equal to its effect on Gross Revenue. The extra randomness in our gross revenue regression ("unhedged storage" in Zulauf and Irwin (1998)) leads to high variability in returns to storage, so the insignificance of the basis as a barometer in these regressions is due to this variability ("noise") and not because basis is not a useful indicator.
The explanatory power of the regression using initial basis to predict basis change is slightly higher than that of the regression using basis deviation, for hedging on the December futures contract and liquidating at the end of November. In contrast, for hedging on the September contract and liquidating the hedge at the end of August, the results in Table 3 show that the explanatory power of the regression using initial basis is much lower than that using basis deviation, and the coefficient on initial basis is statistically insignificant. For a hedge on the September contract, adjusting for past variation in ending basis added predictability.
Tables 4 and 5 report results of misspecification tests for these models. The misspecification tests employed were: normality test (omnibus test), homoskedasticity test (Harvey and Godfrey LM tests), autocorrelation test (Durbin-Watson test), joint conditional mean, and joint conditional variance. The results in Tables 4 and 5 indicate that we reject the null hypothesis of normality for the basis change and basis change less cost of carry models. For these models nonparametric bootstrapping, which does not require normality, was employed.
Conclusions
The results show that basis is a more consistent market signal than futures price or futures price spread for wheat producers and grain elevator managers to use when deciding whether to store or sell their wheat at harvest. This finding is important because, although the theoretical argument is strong, the empirical research has been more suggestive than conclusive. Thus, these results answer our earlier research question, "Are there market signals that grain producers and elevator managers can use to make the harvest store/sell decision?" The answer is "Yes, basis should be used as a market signal by wheat producers and grain elevator managers." For greater assurance in achieving expected returns to storage, however, the grain should be hedged using the futures market, not just stored.
Earlier research relied on simulation methods that may have lower power than the regression approach used here.
The results indicate that basis is the single most important indicator that can affect the decision to store or sell wheat at harvest. The theoretical argument is strong, and the empirical research results are strong, for using basis as a barometer.
Table 2 .
Parameter Estimates for Five Models: Storing until November. P-values are in parentheses under the coefficients for Basis Change (w/ Basis Deviation) and Basis Change Less Cost of Carry, as these models were re-estimated using a non-parametric bootstrap method because the normality assumption was violated (see Tables 4 & 5). However, standard errors are in parentheses under the coefficients for the Basis Change (w/ Initial Basis), Futures Price Change, and Net Revenue models.
Table 3 .
Parameter Estimates for Dependent Variables Basis Change, Futures Price Change, and Net Revenue: Storing until September. P-values are in parentheses under the coefficients for Basis Change with Basis Deviation (BD t ) and Basis Change Less Storage Cost, as these models were re-estimated using a non-parametric bootstrap method because the normality assumption was violated (see Tables 4 & 5). However, standard errors are in parentheses under the coefficients for the Basis Change with Initial Basis (IB t ), Futures Price Change, and Net Revenue models. ** indicates significance at the 5% level.
Table 4 .
Misspecification Tests for Basis Change, Futures Price Change, and Net Revenue (1975-2005): Storing until November. For the static homoskedasticity, joint conditional mean, and joint conditional variance tests the F critical value is presented with the p-value in parentheses. For the autocorrelation and normality tests the p-value is in parentheses. ** indicates significance at the 5% level.
Table 5 .
Misspecification Tests for Basis Change, Futures Price Change, and Net Revenue (1975-2005): Storing until September. For the static homoskedasticity, joint conditional mean, and joint conditional variance tests the F critical value is presented with the p-value in parentheses. For the autocorrelation and normality tests the p-value is in parentheses. ** indicates significance at the 5% level.
"year": 2012,
"sha1": "b508cee1d2a13694131ef1f7b30b8d327665be74",
"oa_license": "CCBY",
"oa_url": "https://ccsenet.org/journal/index.php/ijef/article/download/16506/10991",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "b508cee1d2a13694131ef1f7b30b8d327665be74",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Economics"
]
} |
11478322 | pes2o/s2orc | v3-fos-license | The preoperative scheduling system as a secondary tool for hospital management : suggestions for adjustments proposed by a university ophthalmology unit
Objective: To evaluate the incidence of a software program package at Brazilian public eye care institutions through an online survey. The package is made up of the institutions’ websites, electronic medical records, regulatory assistance control system and the preoperative scheduling system. The aim is to analyze if this type of software facilitates the provisioning of eye care services, the patients’ surgical procedure agenda, and access to information about the patients. Methods: Public health care institutions with eye care services were divided into multidisciplinary university hospitals, multidisciplinary health system hospitals, single discipline hospitals (only ophthalmology) and eye banks (when linked to public health care). Two kinds of data were evaluated on the Internet: I) ophthalmology institutions and the correlation with the presence of the software program, II) ophthalmology institutions and the correlation with the software source. A statistical analysis was performed using chi-square and Fisher’s exact test. Significance levels were estimated at 0.05. Results: 48 public institutions were evaluated at 24 universities, 15 health care institutions, 3 single discipline hospital institutions (ophthalmology only), and 6 institutions specifically referred to as eye banks. In this survey, the digital preoperative scheduling system was only found in 10.4% of the institutions. After the statistical analysis, the eye bank was the only relevant institution that was significantly related to the presence of the digital preoperative scheduling system. Conclusion: Preoperative scheduling systems are expected to provide a good support tool for administrative management. However, this type of instrument as a software program is rarely found in public health care institutions in Brazil.
INTRODUCTION
The national public hospital network of ophthalmology care is formed by the medical services of university hospitals and of federal, state and municipal care hospitals. All are made up of multi-professional and even multidisciplinary medical entities (various medical specialties). [3][4][5][6][7][8][9][10][11][12] Many of these entities have a digital website, highlighting either the academic aspects of a stricto sensu graduate program for university students or, in other institutions, ophthalmological services with emphasis on medical residency.
However, many of these public ophthalmic services have problems related to the surgical waiting list. They provide ophthalmic surgical procedures of medium and high complexity, with a disparate variety of income and welfare goals. The existence of surgical waiting lists in public hospitals is often a social affliction because of the delays involved. The surgical waiting list is a dilemma with multiple requirements in the technical management of the procedure, often impossible to meet due to the precariousness of resources in the public service and the difficulty the neediest citizens have in monitoring the list. 6,7 As reality becomes more pressing and the dissemination of medical services over the Internet becomes more common, the idea of digital support for the surgical waiting list has slowly been taking hold. Thus, as a social response, some public institutions with large volumes of surgical care are using a digital surgical queue with public consultation. This exists not only in the specialty of ophthalmology but also in others with large surgical activity, such as orthopedics and organ transplant specialties, and in some municipal health care services; in the latter case it ranges from primary and secondary to even tertiary care levels. 1,2,6,7 As this type of digital surgical waiting list is consolidated, the need for a more organized hospital structure, also in its digital character, becomes evident. Accessory structures should be available, such as electronic medical records and integrated service regulation systems at the state government level, in addition to the digital page on the Internet. These devices act synergistically, forming a single digital structure to aid in this type of hospital management. 1,2,5 Thus, these authors proposed research in ophthalmology in order to assess, through an Internet search, which public eye care services in the country have a digital package comprising the institution's website, electronic medical records, a service control system and a digital surgical waiting list. All this is meant to emphasize the existence and operating condition of the digital surgical waiting list, with emphasis on the possibility of public consultation by the patient user.
METHODS
This is a cross-sectional study through an Internet search that includes ophthalmologic care services of the public system with a digital page for disclosure. Data were searched on the Internet in the period from June to July 2015.
The study included ophthalmologic medical services subdivided into:
• University - multidisciplinary
• Hospital Care - multidisciplinary
• Hospital Care - unidisciplinary (ophthalmology only)
• Eye Bank - as an entity of public service linked to SUS.
Institutions without a digital page for ophthalmology were excluded.
The variables researched were:
• Main variable
• Presence of a digital page for disclosure of the ophthalmological public service.
• Secondary variables
• Presence of an electronic medical record for medical care, at the levels: Integral (both Ambulatory and Administrative/Inner-hospital) or Partial (only Inner-hospital and/or Administrative).
• Participation in a regulatory system for medical care managed by the State at any level (federal, state or municipal).
• Presence of a digital surgical waiting list:
• With public consultation by the interested party
• Without public consultation by the interested party
The results were presented in tables. Table I shows the correlation between the primary and secondary variables. Table II shows the correlation with the origin of the digital package software, emphasizing whether it originates from private companies or from public development. This correlation is built on a complete (with all variables) or partial (only one or two of them) digital package concept.
The statistical analysis was performed using the chi-square test and Fisher's exact test. The significance level adopted was 0.05. The statistical software used was SPSS (IBM, Chicago, USA, 2013), version 21.0.
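The authors' analysis was run in SPSS. Purely as an illustration of the kind of 2 x 2 association test reported here, an equivalent check could be done in Python with scipy; the counts below are made up and are not the paper's data.

from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: rows = eye banks vs. other institutions,
# columns = has / lacks a digital preoperative scheduling (surgical waiting list) system.
table = [[4, 2],
         [1, 41]]

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)
print(p_chi2, p_fisher)   # compare against the 0.05 significance level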
Because this is a study of administrative health management, the names of the individual services searched were not specifically disclosed, only the reference sources in which they were evaluated. However, to reduce regional disparities, it was decided to include at least one representative for each state of the union, even in the case of establishments with low numbers of consultations.
The sources consulted for building the database were direct queries to the national search sites Google, Yahoo and Microsoft Network (Bing - MSN), the official websites of the Conselho Brasileiro de Oftalmologia (CBO) and the Sociedade Brasileira de Oftalmologia (SBO), the portal of UNA-SUS (Universidade Aberta do Sistema Único de Saúde / Arouca platform), and the websites of the Associação Brasileira de Banco de Olhos (ABBO) and the Associação Panamericana de Banco de Olhos (APABO).
Private institutions with agreements for public service were not considered, except in cases of service connected to the eye bank as public-private institutions.
RESULTS
Forty-eight digital pages of the Internet were consulted in this study, representing most public services in ophthalmology care.Only five States in the country were not evaluated due to the absence of hospital department of ophthalmology being disclosed on the Internet.
The 48 institutions included in the study comprised 24 universities, 15 welfare institutions, 3 unidisciplinary hospitals (ophthalmology only) and 6 institutions specifically referenced as eye banks (Table 1). The universities have their own characteristics of academic disclosure on the Internet. Some of them (5 universities) have digital pages dedicated to ophthalmology, in addition to participating in the website of the Hospital Universitário and/or the Faculdade de Medicina or the University.
On the other hand, in many states the Hospital de Clínicas Universitário is the main and only tertiary-level public institution within the specialty, and it often gives little visibility to the specialty on the academic web page.
Welfare institutions highlight the features of their activities on the Internet, with the ophthalmology service allocated internally within the digital pages of the general hospital itself. Sometimes they have additional pages on social networks such as Facebook and Twitter, which are used as a contact interface between the service's doctors and the patients.
Unidisciplinary institutes devoted only to ophthalmology are rarer. Overall, they are associated with initially philanthropic or charitable organizations that have become, over time, service providers to SUS (Single Health System) at the federal level.
Table 1 shows that the number of care facilities with the full digital package (website, full electronic medical record, attendance regulation system and digital surgical waiting list) is still small, especially with regard to electronic medical records and the surgical waiting list. The highest percentage of these cases is found among the eye banks. This finding is significant in the statistical test (Table 1).
The eye banks form separate service structures. These organizations are newer within the specialty and may even provide subspecialty services (retina, cataract, oculoplastic surgery) besides corneal transplantation. They often act as collectors of the eyeball to be transplanted, whether or not they perform the transplant surgery themselves, and may also offer the corneal button to other entities. Some eye banks are directly linked to other state institutions or accredited to private entities. Universities and hospitals are older in their organization and, because of their multidisciplinary nature, have greater difficulty installing a full-featured electronic medical record. Besides the need to serve different specialties, there is the problem of incorporating an extensive previous paper medical file that needs to be updated. However, many of these institutions have partial filing systems for surgical procedures and specific complementary exams.
A common feature of all the institutions studied is the presence of a medical residency in ophthalmology. Many of them are accredited by the Conselho Brasileiro de Oftalmologia, associated with the Sociedade Brasileira de Oftalmologia, or recognized only by COREM (Comissão Nacional de Residência Médica - Ministério da Educação - MEC).
As shown in Table 2, the frequency of a complete digital package is still very small and restricted to universities. Table 2 also shows the advantage of developing the software at the university, highlighting its usability and practicality. This finding is significant in the statistical test (Table 2).
Similarly, the few unidisciplinary ophthalmology institutions, such as the eye banks, and the few institutions that have a digital surgical waiting list, use software developed by private companies.
DISCUSSION
Institutions with a simpler organization, such as the unidisciplinary entities devoted only to ophthalmology, could be the ones for which the full digital package is easiest to achieve: an electronic medical record free of multidisciplinary complexity, applied likewise to the digital waiting list. However, the small number of these entities, as well as their minor role in technical participation and management, means that they show lower participation in this type of digital proposal.
However, it is noteworthy that the eye bank institutions present the most complete concept of the digital package. They even include public consultation of the digital surgical waiting list by the user, and in the ophthalmology specialty they were the only institutions in the study meeting this concept.13,14 Public consultation on the Internet of waiting lists for elective surgical procedures not related to organ transplantation was not present in any other institution of the specialty; only one institution, linked to the orthopedic specialty, met this other concept.13,14 Even in the literature on medical administrative management, there are few national references on the subject of the digital surgical waiting list. The advantages of a digital surgical waiting list with encryption features are evident, even in the absence of external public consultation. The digital surgical waiting list can be an aid of administrative or technical character, or even for legal consultation.1,2,15,16 According to the proposal of a technical manual aimed at digital hospital management, it is necessary to attach the patients' hospital records to a register integrated with SUS or the Serviços de Regulação de Atendimentos, as well as to a previous register within the institution's management network.1,2,5 As suggestions for programmatic software criteria, there could be data input and output with specific multidisciplinary aspects. That is, there would be an entry for a "status" that would differ for each specialty, and there could be more than one waiting list according to each morbidity or aspect associated with the difficulty of the surgery (surgical complexity, more sophisticated equipment, type of anesthesia, need for ICU support).17,18 Another device would be prioritization according to worsening of the morbidity, whereby the need for the procedure could be changed after assessment by the manager of the waiting list. This type of device must have its reasons explained and recorded, with the accompanying comments attached to the registration of each patient.1,2,5
Table 1 - Service institutions following the digital package (national scope, researched on the Internet during June/July 2015): total data of public services in ophthalmology. Columns: presence of digital page, N, electronic medical record, regulation system, digital surgical waiting line.
n - total sample number, N.S. - not significant, p - statistical significance, * - institutions with a digital waiting line without public consultation by the user, X² - chi-square test. Source: total data from ophthalmology public services, national scope, researched on the Internet during June/July 2015. Damasceno NA, Ventura MP, Herzog Neto G, Damasceno EF | 2017-09-14T04:43:12.089Z | 2016-06-01T00:00:00.000 | {
"year": 2016,
"sha1": "753da8802ba539129750e0b8c2cc37ebb4c36e7b",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/rbof/v75n3/en_0034-7280-rbof-75-03-0190.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "19c5632f0f625884b1466ed5f6cf6027021c9597",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Political Science"
]
} |
44374717 | pes2o/s2orc | v3-fos-license | Doing the CDC20 shuffle
Without Cks1 (filled squares), CDC20 expression is flat.
May/Macmillan
Doing the CDC20 shuffle
For perhaps a decade, Suc1 was the hardy little protein that featured in almost every cell cycle paper. But it was used only as a reagent: its binding to cyclin-dependent kinases (such as Cdc2 or Cdc28) yielded nearly pure MPF.
Then the budding yeast version, called Cks1, was found to be essential for growth. Now, May Morris, Steven Reed, and colleagues (The Scripps Research Institute, La Jolla, CA) have found that Cks1 helps shuffle proteins, including Cdc28, at the critical CDC20 cell cycle promoter, thus helping push cells through mitosis.
Cdc20 activates the ubiquitination machinery, which then destroys mitotic cyclins and sister chromatid glue to initiate anaphase. Morris stumbled on CDC20 because its overexpression suppressed a cks1 mutant. She then found that Cks1 helped turn on CDC20 expression during mitosis, and that Cks1 and Cdc28 both localized to the CDC20 promoter. But a version of Cdc28 that interacts poorly with Cks1 actually stuck to the promoter more avidly, suggesting that Cdc28 binds the promoter first, then brings in Cks1, and finally is kicked off via a Cks1-related mechanism.
Indeed, at the peak of both CDC20 expression and Cks1 binding to the promoter, Cdc28 was released. The release may be triggered by the proteasome, which cofractionates with Cks1 and can also bind transiently to the CDC20 promoter. Proteasome protease function is not necessary for CDC20 expression, so perhaps the proteasome is recycling a promoter-bound protein, such as Cdc28, after it has fulfilled some essential phosphorylation function, or remodeling either chromatin or a stalled transcription complex.
Sticky transmission
Nerve cells restrict neurotransmitter release to their synapses. That restriction, say Markus Missler (Georg-August Universität, Göttingen, Germany), Thomas Sudhof (University of Texas Southwestern, Dallas, TX), and colleagues, is provided in part by the synaptic proteins α-neurexins. These cell adhesion proteins promote calcium channel activity, and thus fusion of neurotransmitter vesicles, at the synaptic junctions.
The polymorphism and adhesion properties of neurexins led researchers to suspect a synapse-forming function. But Missler and coworkers found that mice lacking all three α-neurexins had ultrastructurally normal synapses. The mice breathed with difficulty and died on the first day after birth. In the brain stem, where breathing rhythms are generated, synapses showed reduced frequency of spontaneous transmission, a possible sign of presynaptic problems. Reduced amplitudes of synaptic responses were also evident after stimulation of neurons in the neocortex.
The reduced transmission was not further reduced by drugs blocking N-type calcium channels, suggesting that neurexins normally help the channels to function. Channels were still made and transported to the cell membrane. Thus, channels may no longer be localized to the synapse or, as Missler suggests, no longer get activated. Consistent with the latter idea, whole cell calcium currents from the cell bodies were also reduced in mutants.
Axonal transmission dies away as more neurexins are removed (left to right).
Missler/Macmillan
According to the activation hypothesis, "in neurons, unlike in other tissues, a negative clamp may exist on the function of calcium channels, and neurexin is needed to remove this clamp," says Missler. "It may sound a bit out of the blue. But neurons have to control the number of active calcium channels very tightly because of the negative consequences of excessive calcium influx. So neurexin may provide a localized activation of calcium channels at synapses."
NPR1 is made constitutively, but activated by salicylic acid (SA) as part of a general plant defense against infection. Dong initially set out to purify the NPR1 complex by gel filtration. She was puzzled that an NPR1 peak was found only with SA-treated samples, but then discovered that in the uninduced samples DTT mimicked SA: it liberated monomeric NPR1 from a complex that was too big to enter the column.
SA is produced when plants first blast away at infecting microbes with oxidants, with SA prompting various enzymes to boost oxidant production. The cells then overcompensate with antioxidants.
It is this later reducing environment that unhinges NPR1 from an oligomeric complex, say the researchers. Cysteine-substituted NPR1 mutants were constitutively monomeric and constitutively active in inducing downstream PR genes, which provide longer-term antimicrobial defenses.
The intermolecularly bonded storage form appears to be novel for signaling networks, although intramolecular disulfide formation is known to drive conformational changes that either activate transcription (by bacterial OxyR) or conceal a nuclear export signal (in yeast yAP1). Reference: Mou, Z., et al. 2003. Cell. 113:935-944. | 2018-05-08T18:36:37.978Z | 2003-07-21T00:00:00.000 | {
"year": 2003,
"sha1": "8584e2f89e0764418d9ad5b3ec953a3075931158",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2248380",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "8584e2f89e0764418d9ad5b3ec953a3075931158",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
8355645 | pes2o/s2orc | v3-fos-license | Abstract 70: Generation of Parathyroid Cells from Human Adipose Derived Stem Cells
METHODS: Immunocompromised nude-mice underwent external beam irradiation of the scalp. Five weeks later, mice either received seven deferoxamine treatments (1mg in 100ul) or saline subcutaneously to the irradiated area every other day. Laser Doppler analysis (LDA) was recorded prior to irradiation, following irradiation, and 24 hours following each treatment. Human fat grafts were then injected in the subcutaneous plane of the scalp and volume retention measured by CT scan over 8 weeks. Finally, skin and fat samples were evaluated histologically for vasculature, dermal thickness, and fat graft quality.
PURPOSE:
The difficulty of harvest and relative scarcity of bone marrow stromal cells (BMSCs) have limited the widespread use and clinical application of this technology, thereby necessitating inquiry into other therapies including adipose-derived stromal cells (ASCs). The goal of this study was to compare the ability of ASCs and BMSCs to heal mandibular defects and understand the mechanism through which this occurs. We hypothesize that ASCs will enhance fracture healing by improving vasculogenesis, while BMSCs will directly affect osteogenesis.
METHODS:
Male Lewis rats were radiated (35Gy), and subsequently underwent mandibular osteotomy with external fixation with implantation of two million BMSCs (n=12) or ASCs (n=16) marked with Green fluorescent protein (GFP). After 40 days, union rates were evaluated using microCT. Confocal microscopy visualized the contribution of ASCs/ BMSCs to the bone regenerate. Quantitative polymerase chain reaction of ASCs/BMSCs compared expression of osteogenic and vasculogenic genes. Coculture of ASCs (n=3) or BMSCs (n=3) with human umbilical vein endothelial cells (HUVECs) was performed in vitro in transwells to measure tubule formation as a marker of vasculogenesis.
RESULTS:
ASC-implantation resulted in higher union rates than BMSC-implantation (union rate: 94% vs. 66%). These cells contribute indirectly to fracture healing, as GFP was not visualized at the site. BMSCs expressed osteogenic genes including osteopontin to a significantly greater degree than did ASCs, while ASCs expressed greater levels of vascular endothelial growth factor. This translated to greater tubule formation among HUVECs co-cultured with ASCs than with BMSCs (64.3 ± 7.3 vs. 23.3 ± 2.6, p=0.0008), and increased vasculogenesis in vivo in mandibles after ASC implantation.
CONCLUSIONS:
ASCs heal fracture defects better than BMSCs. This effect is likely mediated by indirect modulation of vasculogenesis, rather than by a direct effect on osteogenesis. Clinicians interested in cell-based therapies for irradiated bone injury should consider ASCs as a promising option, given their abundance, ease of acquisition, and improved fracture healing.
PURPOSE:
Acquired hypoparathyroidism is most commonly caused by surgical removal or iatrogenic injury. Permanent hypoparathyroidism occurs in up to 5% of patients undergoing total thyroidectomy or neck dissection, which can result in profound hypocalcemia. If injury is noted intraoperatively the parathyroid gland can be minced and implanted subcutaneously to recover function. Frequently, the manifestations of hypoparathyroidism are only discovered postoperatively when symptoms of calcium and phosphorus dysregulation are noted. Medical management is required to prevent tetany, bone loss, and possibly death. Adipose derived stem cells (ADSCs) are multipotent mesenchymal stem cells which may offer an autologous treatment for this chronic condition. We hypothesize that ADSCs can be converted into functional parathyroid cells in vitro, which could be utilized for delayed subcutaneous implantation and correction of hypoparathyroidism.
METHODS:
Adipose tissue was obtained from patients undergoing lipectomy (n = 5). Tissue was digested and the stromal vascular fraction obtained. ADSCs were isolated using magnetic activated cell sorting against CD90 and underwent in vitro directed differentiation over a 26-day period. Differentiated cells were stained for both Calcium Sensing Receptors (CaSR) and Parathyroid Hormone/ related protein-Receptors (PTH/PTHrp-R) being verified by microscopy and flow cytometry. The physiologic response of differentiated parathyroid cells to calcium was assessed via Enzyme-Linked Immunosorbent Assay of PTH.
RESULTS:
ADSCs were reliably differentiated into parathyroid cells in all patients as verified by both immunofluorescence and flow cytometry against CaSR and PTH/ PTHrp-R. Furthermore, differentiated parathyroid cells exhibited a dose and time dependent release of parathyroid hormone following calcium stimulation. PTH secretion was noted at 5 minutes, peaked at 10 minutes, and returned to baseline levels after one hour.
CONCLUSIONS:
A century ago, Lahey was the first to perform human parathyroid autotransplantation following a partial thyroidectomy. Failure to acutely recognize an iatrogenic injury makes autotransplantation impossible. We have consistently converted adipose derived stem cells into a parathyroid cell phenotype. Our cells demonstrate appropriate responsiveness to extracellular calcium by the release of parathyroid hormone. They may therefore represent a reliable autologous solution to hypoparathyroidism diagnosed in a delayed setting.
Characterizing The Contribution Of Circulating Mesenchymal Cells To Pathologic Wound Healing And Heterotopic Ossification
PURPOSE: Pathologic wound healing after injury represents dysregulation of several cellular components of the physiologic wound niche. Trauma induced heterotopic ossification (tHO) is a highly morbid class of pathologic healing characterized by endochondral formation of de novo osseous lesions in soft tissue. These lesions occur at several anatomic sites and it remains unclear which cell populations form the pre-HO niche. Identification of the specific cells which give rise to tHO is critical to the development of targeted therapeutic options. Similarities between endochondral ossification and tHO suggest a common progenitor, however, it is unclear if this population is entirely local or receives contributions from circulating cells. Here we utilized a parabiotic reporter-based model of tHO to identify and characterize contributions from circulating cells to tHO.
METHODS:
A mouse model of parabiosis between wild type mice and reporters carrying the CAG-luc-eGFP L2G85 transgene to examine the presence of circulating (eGFP+) mesenchymal cells at the injury site was generated. These animals carry both luciferase and eGFP reporters allowing for concurrent bioluminescent and histologic analysis of circulating populations. After blood chimerism was confirmed, the wild type parabiotic mouse received a hindlimb Achilles' tenotomy and dorsal burn. Bioluminescence imaging was used to study the timing and localization of circulating cells. Immunofluorescence was performed to identify common inflammatory, vascular, mesenchymal markers and to characterize their source and contribution to each stage of the tHO anlagen. | 2018-05-31T06:12:35.368Z | 2017-04-01T00:00:00.000 | {
"year": 2017,
"sha1": "b065d662e43bd969396609b4ca5665c1ca62ae9f",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1097/01.gox.0000516590.16120.f3",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d1c1fd77664f6b2945e8849d5f22901d59f3a058",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
} |
1180284 | pes2o/s2orc | v3-fos-license | Common risk factor approach to address socioeconomic inequality in the oral health of preschool children – a prospective cohort study
Background: Dental caries remains the most prevalent chronic condition in children and a major contributor to poor general health. There is ample evidence of a skewed distribution of oral health, with a small proportion of children in the population bearing the majority of the burden of the disease. This minority group is comprised disproportionately of socioeconomically disadvantaged children. An in-depth longitudinal study is needed to better understand the determinants of child oral health, in order to support effective evidence-based policies and interventions in improving child oral health. The aim of the Study of Mothers’ and Infants’ Life Events Affecting Oral Health (SMILE) project is to identify and evaluate the relative importance and timing of critical factors that shape the oral health of young children and then to seek to evaluate those factors in their inter-relationship with socioeconomic influences.
Methods/Design: This investigation will apply an observational prospective study design to a cohort of socioeconomically diverse South Australian newborns and their mothers, intensively following these dyads as the children grow to toddler age. Mothers of newborn children will be invited to participate in the study in the early post-partum period. At enrolment, data will be collected on parental socioeconomic status, mothers’ general and dental health conditions, details of the pregnancy, infant feeding practice and parental health behaviours and practices. Data on diet and feeding practices, oral health behaviours and practices, and dental visiting patterns will be collected at 3, 6, 12 and 24 months of age. When children turn 24-30 months, the children and their mothers/primary care givers will be invited to an oral examination to record oral health status. Anthropometric assessment will also be conducted.
Discussion: This prospective cohort study will examine a wide range of determinants influencing child oral health and related general conditions such as overweight. It will lead to the evaluation of the inter-relationship among main influences and their relative effect on child oral health. The study findings will provide high level evidence of pathways through which socio-environmental factors impact child oral health. It will also provide an opportunity to examine the relationship between oral health and childhood overweight.
Background
While dental caries is largely preventable [1] and there are effective population-based and individual preventive strategies such as community water fluoridation, brushing with fluoridated toothpaste, and widespread access to dental care, dental caries remains the most prevalent chronic disease in children, exerting significant impact on both the children affected and society in general. The prevalence of dental caries in primary school children in Australia [2] and in the U.S. [3,4] is four to five times higher than that of asthma, the next most prevalent chronic condition in this age group. Furthermore, dental caries is the most common (yet avoidable) cause of acute hospital admissions in children [5] due to the common situation of children requiring multiple dental extractions under general anaesthesia. Dental caries has a major financial impact on the health system and psychological impact on the children and their families [6].
A summary of the latest key findings on the state of oral health of the Australian population estimates the prevalence of early childhood caries (ECC) among five-year-old Australians to be 42% [7]. Despite a substantial level of resources being directed to prevention and dental services for children in Australia, the oral health of Australian children has worsened [2], with inequality in oral health widening in the last decade. The extent of caries experience among 6-year-old Australian children increased by 24% during the late 1990s [2]. More alarmingly, there is ample evidence of a skewed distribution of dental caries with a small proportion of children in the population bearing the majority of the burden of the disease [8]. The most severely affected 10% of 4-year-old Australian children had (on average) seven teeth with dental caries experience [2], a level that significantly impacts on their lives [9]. This minority group is comprised mostly of socioeconomically disadvantaged children.
The pathways, through which socioeconomic disadvantage affects child oral health, remain unclear. Children of different socioeconomic backgrounds could have different patterns of food and fluid consumption and different patterns of oral health practices such as toothbrushing from a very early age. Such differences could act as mediators of the association between socioeconomic disadvantage and child oral health. Understanding the mediation process is necessary to better address the oral health inequality and improve overall child health. It is also important to note that the evidence base on measures to tackle oral health inequalities is limited. These important issues require in-depth research to understand the determinants of child oral health and their inter-relationship to support effective evidence-based policies and interventions to improve child oral health, a major contributor to general health.
Aim and objectives
The research project described in this paper has been funded by an Australian National Health and Medical Research Council (NHMRC) Project Grant # 1046219 (2013-2016) under the scientific title 'Common risk factor approach to address socioeconomic inequality in oral health of contemporary Australian preschool children'. The aims of the project, henceforth referred to by its working title "Study of Mother's and Infant's Life Events Affecting Oral Health (SMILE)" project, are to identify and evaluate the relative importance and timing of critical factors that shape the oral health of young children and then to seek to evaluate those factors in their inter-relationship with socioeconomic influences. The knowledge gained in this study will point to key factors that differentiate the oral health of children from different socioeconomic groups, leading to strategies for the improvement of oral health in disadvantaged young children.
This study has four specific objectives:
1. To measure socioeconomic variations in dietary patterns, including patterns of fluid consumption, and in oral health practices such as toothbrushing, fluoride use and non-fluoride preventive applications, of children from birth to age two years;
2. To evaluate variations in child oral health status at age two years;
3. To evaluate the timing and extent to which health-promoting factors attenuate the effect of disease-predisposing factors; and
4. To evaluate the commonality of risk factors for oral health and obesity, a highly prevalent childhood general health condition related to infant feeding practice.
Study design
The SMILE project will apply an observational prospective study design to follow a cohort of socioeconomicallydiverse South Australian newborns and their primary care-givers (mothers will be used in this paper from here on), from birth until they reach toddler age. A multivariable, multilevel approach will be applied to data collection and analysis. The timeline and main groups of outcome and explanatory variables are outlined in Table 1. The project has received ethical approval from the Southern Adelaide Clinical Human Research Ethics Committee (HREC # 50.13, approval date: 28 Feb 2013), the South Australian Women and Children Health Network (HREC # HREC/13/WCHN/69, approval date: 7 Aug 2013) and clinical governance clearance from the three participating maternity hospitals in Adelaide, Australia.
Recruitment of study participants commenced in July 2013 and is expected to last 12 months. The 3-month and 6-month data collection is ongoing.
Targeted population
The study setting is Adelaide and its immediate surroundings. The targeted population is all children born in Adelaide in 2013-14. Annual statistics compiled by the Pregnancy Outcome Unit, Department of Health of South Australia show that 20,344 children were born in 2011 in Adelaide, with around 60% of all live births occurring in the three largest metropolitan public hospitals providing maternity services to the whole of Adelaide. The cohort will be recruited from these three public hospitals.
Recruitment began in mid-2013 and will continue to mid-2014. All new mothers at the postnatal departments who are able to understand the description and instructions of the study are invited to participate. The mothers who indicate their intention to move out of South Australia within a year are excluded. Premature or low birth weight children can be included if mothers agree to participate.
Sample size calculation
The required baseline sample size was calculated to address the most sensitive objective (Aim 2) of this fouryear study using standard methods [10]. Sample size was calculated to detect a rate ratio of 0.2 between slopes (considered a small effect size [10]) for explanatory variables in multivariable regression models for caries experience (mean dmfs) at age two years (Aim 2) with an alpha level of 0.05 (two-tail) and statistical power of 90%. The calculated sample size required at age two years is 1,398 children. A minimum two-year retention rate of 80% was used in the calculation. It resulted in a targeted sample size at birth of 1,677 children (rounded up to 1,700). This recruitment target is highly achievable given the population pool of 16,000 children. The expected two-year retention rate of 80% is considered conservative based on our experience [11]. Using this sample size, statistical power expected to achieve other aims was calculated at over 90% after taking into account possible interaction among variables.
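As an illustration of the retention adjustment described above (not of the power calculation itself, which depends on the regression model), the arithmetic can be sketched as follows; the 80% retention rate and the 1,398 target at age two years are taken from the text.

```python
# Sketch of inflating the sample required at age two years (1,398) to a
# recruitment target at birth, assuming 80% two-year retention as stated.
required_at_two_years = 1398
retention = 0.80

# One common convention divides by the retention rate ...
target_divide = required_at_two_years / retention                 # ~1747.5
# ... while the figure quoted in the text corresponds to adding 20% on top:
target_inflate = required_at_two_years * (1 + (1 - retention))    # 1677.6

print(round(target_divide), round(target_inflate))  # 1748 vs 1678; the text rounds up to 1,700
```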
Sample recruitment
Research workers experienced in working with women and small children such as dental therapists or dental nurses have been employed and trained to recruit participants for the study. An information package about the study has been developed for the potential participants. Small participation incentives (e.g. toothbrush and tooth paste samples) and travel reimbursements will be provided to participants to improve recruitment and retention rates. The research workers visit the hospitals on a frequency determined by the birth rate at each hospital. Contact is made with mothers in the maternity ward, typically within 48 hours of the birth of their infant. Mothers are provided with a written and verbal explanation about the study and then invited to participate in the study. All questions are answered and informed consent obtained. Low-SES mothers will be oversampled, as recommended for prospective cohort studies [12]. This can be done by increasing the sample from those hospitals who primarily service this group. Distribution of participants by area-level SES and individual SES distribution will be checked frequently and compared with the population parameters. Oversampling low-SES groups will offset for expected relatively higher attrition rate by these groups. As the study progresses, sample maintenance procedures such as birthday cards to the child will be used. Response rates by different SES groups at data collection rounds will also be checked frequently. If necessary, different strategies will be applied to SES sub-groups to improve overall response rate.
Primary outcome variable of oral health status
After children reach age two years (age range 24-30 mo), they and their mothers will be invited to undergo a dental examination conducted by a small group of specially trained dentists under standardised clinical conditions. Standard clinical indices, developed at the Australian Research Centre for Population Oral Health (ARCPOH) based on the US National Institute of Dental and Craniofacial Research (NIDCR) protocol [13] and the International Caries Detection and Assessment System (ICDAS-II) [14], will be used. The ARCPOH protocol is currently being used in a National Child Oral Health Survey. Two principal investigators of this cohort study are principal investigators and main examiner trainers for that survey. Noncavitated or cavitated carious lesions, filling, missing tooth surfaces because of decay, non-carious developmental defects such as hypoplasia and gingival conditions will be recorded. Five per cent of children and mothers will be randomly selected for a replicate examination to determine inter-and intra-examiner reliability. Salivary plaque samples will be collected from both mothers and children. Samples will be stored at the University of Adelaide laboratory for analysis of cariogenic bacteria.
Outcome variable of child weight status
At the examination children will be weighed and measured using standardised equipment and methodology [15]. Child weight status will be assessed based on World Health Organization age- and gender-specific BMI percentiles, where a BMI >85th and <97th percentile is classified as overweight and ≥97th percentile as obese (http://www.who.int/childgrowth/standards/en/).
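A minimal sketch of the classification rule quoted above is given below; the BMI percentile itself is assumed to come from the WHO growth-standard lookup, which is not reproduced here.

```python
# Weight-status classification from a WHO age/sex-specific BMI percentile,
# following the cut-offs stated in the study protocol.
def weight_status(bmi_percentile: float) -> str:
    if bmi_percentile >= 97:
        return "obese"
    if bmi_percentile > 85:
        return "overweight"
    return "not overweight"

print(weight_status(90.0))  # overweight
print(weight_status(98.5))  # obese
```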
Main explanatory variables
Mother-and child-related information will be collected using a series of self-completed postal or online questionnaires and face-to-face interviews during the study period. The first wave of questionnaire data collection is at recruitment, with subsequent waves occurring at 3, 6, 12 and 24 months.
Community-level factors
Area-based Census data reported by the Australian Bureau of Statistics will be obtained and linked with mothers' residence. The Socioeconomic Index for Areas (SEIFA) (Australian Bureau of Statistics 2011) will be used to classify participants into groups by area-level SES. Any subsequent changes in residential locations will be collected.
Mother and family-related information
Mothers' socioeconomic details such as age, income, education and occupation will be collected using standardised questionnaires developed at ARCPOH [16]. Household socioeconomic indicators at recruitment with updates at subsequent data collections form the SES characteristics. Data on general and oral health knowledge and oral health beliefs and practices during the prenatal period and during the study period will be periodically collected. Mothers will be offered an oral health examination at the time of examination for children at age two years. During the examinations at age two years, face-to-face interviews with the mothers will be conducted to collect more complex information. Maternal oral health beliefs, status, microbiological assessments and self-reported oral health practices will be used as explanatory variables in the analyses.
Child-related information
Parents will be asked to report birth and subsequent general health issues. Data on dietary patterns, patterns of fluid consumption, and oral health practices are to be collected at 3, 6, 12 and 24 months. The collected information will be used to estimate fluoride exposure and intake. Details of receipt of oral health anticipatory guidance or oral health advice from dental professionals and general health practitioners including Child and Youth Health nurses, will be obtained from the questionnaires and interviews. Data on the timing, reason, nature and consequences of dental visits will be sought.
Dietary patterns and patterns of fluid consumption
Dietary data will be collected at age 3, 6, 12 and 24 months. Information on infant feeding patterns (including an estimation of breast milk and/or infant formula intake) will be collected using questionnaires previously used in the first and second Perth Infant Feeding Studies [17,18] and the Australian National Infant Feeding Study [19]. A particular focus will be the collection of information related to the practice of prolonged breastfeeding and night time breastfeeding.
Detailed information on usual food and beverage intake and dietary behaviours will be collected at 12 months using a telephone 24 hour recall interview conducted by a trained nutritionist supplemented with a 2-day (one week and one weekend day) food record completed by the mother [20]. The food record booklet, which will include detailed instructions and a food measurement aid (including photos of infant feeding bowls, cups and utensils), will be mailed along with the 12 month questionnaire. A cover letter will advise mothers that an interviewer will ring them within 7-10 days to collect information on what their child is eating and explain how to complete the food record. On return of the food record an interviewer will contact the mothers by telephone to clarify any record with incomplete information (such as missing estimates of intake, or time of consumption). At 24 months dietary information will be collected using a short food frequency questionnaire, specifically designed and validated for the purposes of this study. Dietary data will be analysed using the FoodWorks® (Xyris Software) computerised dietary assessment program.
Estimation of intake of free sugars and fluoride intake
If required, fluoride concentrations in the main foods and beverages reported in the diet record will be assayed in our Adelaide laboratory using methods employed successfully in our previous studies [21,22]. Estimates of fluoride intake will be based on the fluoride concentrations in foods and beverages, and the amount consumed [23,24]. The type, amount and frequency of intake of free sugars will be determined from the 24 hour recall and 2 day estimated diet record at 12 months and the food frequency at 24 months questionnaire using the method described by Kelly and colleagues [25].
Consumption of public water will be collected in detail using questionnaires and interviews at age 3, 6, 12 and 24 months. This information will be linked with a database of fluoride levels in public water source for residential locations. The proportion of tap water in total fluid intake and use of any fluoride-removing filter will be detailed. The level of fluoride in public mains water will be measured periodically in the laboratory. If participants report using other sources of water, such as bottled water or rain water or using water filter at home, water samples will be obtained and fluoride levels measured.
Oral health practices
The questionnaires will collect information on the oral health practices of the children at age 6, 12 and 24 months. This will include age of commencement of tooth cleaning without toothpaste, when fluoridated toothpaste use commences, and components of toothbrushing practice such as frequency of brushing, toothpaste type, toothpaste amount used per brushing, eating/licking toothpaste habit and rinsing/spitting after brushing. Information on professionally-applied fluoride applications (such as fluoride varnish) will be collected, noting the frequency, type, and age when it is used. Use of fluoride supplements, age of use and dosage will also be queried. Occurrence of dental visits and dental care received by the children will be collected.
Information on visiting for dental reasons will be collected. Places of visit, age when the visits begin, reasons for the visit, oral health advice, preventive procedures and treatment received will be documented.
Fluoride exposure and intake from oral health practices
Fluoride exposure and intake from oral health practices will be estimated using the collected data. Type of toothpaste, amount of toothpaste per brushing, frequency of brushing, proportion of ingested toothpaste, use of fluoride supplements and other in-office fluoride applications will be used to estimate fluoride intake [23,24]. Body weights will be used to calculate average fluoride intake per kg body weight from both dietary intake and oral health practices.
General health conditions of mothers and children
Information on a number of common health conditions of the children will be collected using questionnaires. Use of antibiotics and sugar-containing medications will be detailed. Information on mothers' general health condition will be collected through questionnaires. Mothers will be asked to bring the child's Child and Youth Health record book (Blue book) to the oral examination. Permission will be asked to transcribe information of the child's development and general health during the period.
Analysis and reporting
We will evaluate Objectives 1 to 4 using SAS (SAS Institute Inc., Cary, NC, USA) and the multilevel analytical software, MLwiN (The Centre for Multilevel Modelling, the University of Bristol, UK). Potentially, count of dmfs scores can be analysed using a negative-binomial link function with adjustment (using generalized estimating equations in SAS PROC GENMOD) to account for within-person clustering of observations [26]. If the caries outcome data result in a large excess of zero counts (particularly at the 2-year period), we will adopt models based on zero-inflated negative binomial (ZINB) regression. Random effect models will be generated using SAS and MLwiN [27,28]. A number of covariates are time-varying. Therefore, it will be appropriate to apply marginal structural models (MSM) to obtain estimates of causal inferences between those factors and the outcomes [29,30].
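To make the ZINB option concrete, here is a small sketch of the zero-inflated negative binomial probability model written directly from its definition rather than from any particular package's API; the parameter values are purely illustrative.

```python
# Zero-inflated negative binomial (ZINB): a point mass pi at zero mixed with
# a negative binomial with mean mu and dispersion alpha (NB2 parametrisation).
import numpy as np
from scipy.stats import nbinom

def zinb_pmf(k, mu, alpha, pi):
    """P(Y = k) under the ZINB mixture."""
    r = 1.0 / alpha                  # NB "size" parameter
    p = r / (r + mu)                 # success probability in scipy's parametrisation
    nb = nbinom.pmf(k, r, p)
    return np.where(k == 0, pi + (1 - pi) * nb, (1 - pi) * nb)

k = np.arange(0, 10)
print(zinb_pmf(k, mu=2.5, alpha=0.8, pi=0.4).round(3))  # heavy excess of zeros, as in dmfs counts
```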
We will report the results of the objectives as peerreviewed scientific publications following the STROBE guidelines [31] for reporting observational research. Results will be disseminated to appropriate health agencies in Australia and internationally through publications, conferences and workshops.
Discussion
This prospective cohort study will examine a wide range of determinants influencing child oral health and a number of related general conditions. It will lead to the evaluation of the inter-relationships among groups of main influences and their relative effect on oral health in a representative sample of children. The study findings will provide high level evidence of pathways through which socio-environmental factors impact the oral health of young children. The study will also provide opportunity to examine the relationship between oral health and childhood obesity and their shared pathways. We believe that the evidence will enable a more effective common risk factor approach [32] to tackle these two common childhood problems. The common risk factor approach is capable of reducing social inequalities by focusing on improving health conditions in general for the whole population and for groups at high risk and of integrating oral health into general health through a Social Determinants framework [33]. From there, appropriate interventions will be identified to tackle socioeconomic inequality in child oral health and, hence, to improve the overall oral health and general health of the child population. | 2017-10-01T02:56:02.964Z | 2014-05-06T00:00:00.000 | {
"year": 2014,
"sha1": "09370830120277ba16e0d22fa9d6c4eaba588651",
"oa_license": "CCBY",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/1471-2458-14-429",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "174965aaf24164584205ff7ff5b3b0b245e2c812",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
6264599 | pes2o/s2orc | v3-fos-license | Noncommutative Symmetric Functions and Lagrange Inversion
We compute the noncommutative Frobenius characteristic of the natural action of the 0-Hecke algebra on parking functions, and obtain as corollaries various forms of the noncommutative Lagrange inversion formula.
Introduction
There are some advantages to interpreting the classical Lagrange inversion formula for the reversion of formal power series in terms of symmetric functions (see, e.g., [17], Ex. 24 p. 35, Ex. 25 p. 132, [15] Section 2.4 and [16]). Recall that one possible formulation of the problem is as follows. Given
(1) $\varphi(x) = \sum_{n\ge 0} \varphi_n x^n$ ($\varphi_0 \neq 0$),
find the coefficients $c_n$ of the unique power series
(2) $f(x) = \sum_{n\ge 0} c_n x^{n+1}$
such that
(3) $f(x) = x\,\varphi(f(x))$.
We can assume without loss of generality that $\varphi_0 = 1$ and that
(4) $\varphi(u) = \sigma_u(X) = \sum_{n\ge 0} h_n(X)\, u^n$
is the generating series of the homogeneous symmetric functions of an infinite set of variables $X$. Indeed, the $h_n(X)$ are algebraically independent, so that $\sigma_u(X)$ is a generic power series. Now, symmetric functions encode various mathematical objects, and the solution can be interpreted in many ways, for example in terms of characters of the symmetric group. Indeed, in the λ-ring notation, the solution reads
(5) $c_n = \frac{1}{n+1}\, h_n((n+1)X)$
(recall that $\sigma_t(nX) = \sigma_t(X)^n$, see, e.g., [17] p. 25). On this expression, it is obvious that $c_n$ is Schur positive, in fact, even a positive sum of homogeneous products $h_\mu$, so that it is the Frobenius characteristic of a permutation representation of $S_n$. This representation is well-known [6]: it is based on the set $\mathrm{PF}_n$ of parking functions of length $n$ (see below for the definition). The first terms are
(6) $c_0 = 1,\ c_1 = h_1,\ c_2 = h_2 + h_{11},\ c_3 = h_3 + 3h_{21} + h_{111},\ c_4 = h_4 + 4h_{31} + 2h_{22} + 6h_{211} + h_{1111}$.
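As an illustration of the commutative statement above, the following sketch (not part of the paper) checks formula (5) for small n with sympy, treating the h_i as independent symbols so that φ is a generic series.

```python
# Check c_n = (1/(n+1)) [t^n] phi(t)^{n+1} against the reversion f = t*phi(f),
# for a generic truncated series phi(t) = 1 + h1 t + h2 t^2 + h3 t^3 + h4 t^4.
import sympy as sp

t = sp.symbols('t')
h1, h2, h3, h4 = sp.symbols('h1 h2 h3 h4')
phi = 1 + h1*t + h2*t**2 + h3*t**3 + h4*t**4

def trunc(expr, order):
    """Keep only the terms of degree < order in t."""
    expr = sp.expand(expr)
    return sum(expr.coeff(t, k) * t**k for k in range(order))

# Solve f = t*phi(f) by fixed-point iteration, truncated at order t^6.
f = sp.Integer(0)
for _ in range(6):
    f = trunc(t * phi.subs(t, f), 6)

for n in range(1, 4):
    lhs = sp.expand(f).coeff(t, n + 1)                                   # c_n from the reversion
    rhs = sp.expand(sp.expand(phi**(n + 1)).coeff(t, n) / (n + 1))       # h_n((n+1)X)/(n+1)
    print(n, sp.simplify(lhs - rhs) == 0)
# One finds c_2 = h2 + h1**2 and c_3 = h3 + 3*h2*h1 + h1**3,
# matching h_2 + h_11 and h_3 + 3h_21 + h_111 above.
```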
Now, we have at our disposal noncommutative analogs of the Lagrange inversion formula [4,19,1], and a theory of noncommutative symmetric functions [3,11], known to be related to 0-Hecke algebras in the same way as ordinary symmetric functions are related to symmetric groups [12]. The aim of this note is to clarify the relations between these different topics. We shall first analyze the natural representation of the 0-Hecke algebra on parking functions. This is a projective module, whose qcharacteristic noncommutative symmetric function turns out to be the term of degree n in the noncommutative q-Lagrange inversion formula. This allows us to give simple and unified proof of all versions of the noncommutative Lagrange formula [4,19,1]. Interpreting the terms as ordered trees leads to closed expressions for the expansion of the solution in various bases. These calculations suggest the introduction of noncommutative analogs of Abel's polynomials, and of an infinite family of combinatorial triangles, which includes classical refinements of the Motzkin, Catalan and Schröder numbers as the first three cases. The action of the 0-Hecke algebra on (k, l)-parking functions is also described.
Acknowledgements.-This project has been partially supported by CNRS and by EC's IHRP Programme, grant HPRN-CT-2001-00272, "Algebraic Combinatorics in Europe". The authors would also like to thank the contributors of the MuPAD project, and especially of the combinat part, for providing the development environment for their research (see [10] for an introduction to MuPAD-Combinat).
Notations
Our notations for noncommutative symmetric functions will be as in [3,11]. We recall that the Hopf algebra of noncommutative symmetric functions is denoted by Sym, or by Sym(A) if we consider the realization in terms of an auxiliary alphabet. Bases of $\mathrm{Sym}_n$ are labelled by compositions $I$ of $n$. The noncommutative complete and elementary functions are denoted by $S_n$ and $\Lambda_n$, and the notation $S^I$ means $S_{i_1}\cdots S_{i_r}$. The ribbon basis is denoted by $R_I$. The notation $I \vDash n$ means that $I$ is a composition of $n$. The conjugate composition is denoted by $I^\sim$.
The graded dual of Sym is QSym (quasi-symmetric functions). The dual basis of $(S^I)$ is $(M_I)$ (monomial functions), and that of $(R_I)$ is $(F_I)$.
The evaluation $\mathrm{Ev}(w)$ of a word $w$ over a totally ordered alphabet $A$ is the sequence $(|w|_a)_{a \in A}$, where $|w|_a$ is the number of occurrences of $a$ in $w$. The packed evaluation $I = \mathrm{pEv}(w)$ is the composition obtained by removing the zeros in $\mathrm{Ev}(w)$.
The Hecke algebra $H_n(q)$ ($q \in \mathbb{C}$) is the $\mathbb{C}$-algebra generated by $n-1$ elements $T_1, \ldots, T_{n-1}$ satisfying the braid relations and $(T_i - 1)(T_i + q) = 0$. We are interested in the case $q = 0$, whose representation theory can be described in terms of quasi-symmetric functions and noncommutative symmetric functions [12,2].
The Hopf structures on Sym and QSym allow one to mimic, up to a certain extent, the λ-ring notation which is so useful for dealing with ordinary symmetric functions (see [15] for the commutative version and [11] for the noncommutative extension). If $A$ is a totally ordered alphabet, the noncommutative symmetric functions of $nA$ ($n \in \mathbb{Z}$) and of $[n]_q A$ (where $[n]_q = \{1 < q < \cdots < q^{n-1}\}$) are defined by the corresponding transformations of alphabets. More generally, noncommutative symmetric functions can be evaluated on any element $x$ of a λ-ring, $S_n(x)$ being the $n$-th symmetric power. The scalar $x = 1$ is the only element having both properties. We usually consider that our auxiliary variable $t$ is of rank one, so that $\sigma_t(A) = \sigma_1(tA)$. For each of the noncommutative formulas obtained from representations of the 0-Hecke algebras, we shall give the commutative specializations to the alphabet $A = 1$ ($S_n(1) = 1$ for all $n$) and to the virtual alphabet $A = E$, defined by $\sigma_t(E) = e^t$. This will produce a number of (generally known) combinatorial identities, which can now be traced back to a common source.
3. Permutational 0-Hecke modules
3.1. Let $[N] = \{1, \ldots, N\}$, regarded as an ordered alphabet. There is a right action of $H_n(q)$ on $\mathbb{C}[N]^n$ corresponding to the standard right action of $S_n$ (see [12]). If $w = a_1 a_2 \cdots a_n$, one sets $w \cdot \sigma_i = a_1 \cdots a_{i+1} a_i \cdots a_n$, and the generators $T_i$ act by the corresponding deformation of this place exchange. For $q = 0$, this action simplifies, so that the image of a word $w$ by an element of $H_n(0)$ is either (up to a sign) a rearrangement of $w$ or $0$. In particular, starting from a nondecreasing word $v$, one obtains all rearrangements of $v$. These form the basis of a projective $H_n(0)$-module $M$ whose noncommutative characteristic is $\mathrm{ch}(M) = S^I \in \mathrm{Sym} = \mathrm{Sym}(A)$, where $I$ is the packed evaluation of $v$ [12,2]. The characteristic of the permutation representation $W_n(N) = \mathbb{C}[N]^n$ is easily seen to be
(11) $\mathrm{ch}(W_n(N)) = S_n(NA)$.
One can do better, and keep track of the sum of the letters, a statistic obviously preserved by the action of $S_n$ or $H_n(0)$. Normalizing this statistic appropriately, one can write down a q-characteristic of $W_n(N)$.
One can decompose the set of words $w \in [n+r]^n$ according to the length of their maximal parking subword $p(w)$ (which may be empty, and is clearly unique). If $p(w)$ is of length $k$, the complementary subword can only involve letters greater than $k+1$, and can in fact be any word of $[k+2, n+r]^{n-k}$. Hence [13], taking the q-characteristic of the underlying permutational 0-Hecke modules, and remembering that shuffling over disjoint alphabets amounts to inducing representations, we obtain a factorization which allows us to extract the generating series of $G_n(q; A) := \mathrm{ch}_q(\mathrm{PF}_n)$, and we arrive at
Theorem 3.1. The generating series of $G_n(q; A) := \mathrm{ch}_q(\mathrm{PF}_n)$ is given by Formula (20). In particular, this expression is independent of $r$, a fact which is not easily derived by mere algebraic manipulations.
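For readers who wish to experiment, the following sketch enumerates parking functions by brute force. Since the paper's definition paragraph is not reproduced above, the code spells out the standard characterization (the nondecreasing rearrangement b of the word satisfies b_i ≤ i) and checks the familiar count (n+1)^(n-1).

```python
# Brute-force enumeration of parking functions of length n.
from itertools import product

def is_parking(word):
    """Standard criterion: the sorted word b satisfies b_i <= i (1-indexed)."""
    return all(b <= i + 1 for i, b in enumerate(sorted(word)))

def parking_functions(n):
    return [w for w in product(range(1, n + 1), repeat=n) if is_parking(w)]

for n in range(1, 6):
    print(n, len(parking_functions(n)), (n + 1) ** (n - 1))
# counts 1, 3, 16, 125, 1296 agree with (n+1)^(n-1)
```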
4. The functional equation
We shall now see that $G(x, q; A)$ solves a functional equation, and recover the noncommutative q-Lagrange formula in this way. For later convenience, let us first change $q$ into $1/q$ and consider the corresponding series $H(x)$; the powers of $q$ can be absorbed in the products if we set $K(x) = xqH(x)$. One then obtains a functional equation for the resulting series, in particular with $S_0 = 1$, together with its first terms. Note that $g_i$ is obtained by setting $q = 1$ in (15), that (30) is (29) for $q = x = 1$, and that one recovers (6) by assuming that the $S_i$ commute. The solution of [4,19] is obtained by taking $r = 1$ in Formula (20). The commutative image gives various forms of the Garsia-Gessel q-Lagrange formula.
5. The general noncommutative Lagrange inversion formula
5.1. Nondecreasing parking functions. The versions of [4] and [19] of the noncommutative inversion formula deal with the slightly more general functional equation (32), where $S_0$ is another indeterminate which does not necessarily commute with the other ones. The solution can be expressed in the form (33) as a sum over $\mathrm{NDPF}_n$, the set of nondecreasing parking functions on $[n]$. For example, the nondecreasing parking functions giving $f_3$ are (in this order) 123, 122, 113, 112, 111.
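A quick brute-force check of the example above (illustrative only, and using the standard parking-function criterion as in the previous sketch): enumerating nondecreasing parking functions for n = 3 recovers the five words listed, here in increasing rather than decreasing order, and their number is the Catalan number in general.

```python
# Nondecreasing parking functions NDPF_n by brute force.
from itertools import combinations_with_replacement

def ndpf(n):
    words = combinations_with_replacement(range(1, n + 1), n)
    return [w for w in words if all(b <= i + 1 for i, b in enumerate(w))]

print(ndpf(3))                                # (1,1,1), (1,1,2), (1,1,3), (1,2,2), (1,2,3)
print([len(ndpf(n)) for n in range(1, 7)])    # Catalan numbers 1, 2, 5, 14, 42, 132
```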
5.2. Dyck words.
Here is an amusing way to prove Formula (33), inspired by one of the examples of [19]. Denote by $D$ the sum of all Dyck words ($1$ being the empty word),
(36) $D(a, b) = 1 + ab + aabb + abab + aaabbb + aabbab + aababb + abaabb + ababab + \cdots$,
which can be defined by the functional equation
(37) $D = 1 + aDbD$.
Then the series $f = Db$ satisfies (32), with
(38) $S_n = a^n b$.
Indeed, iterating (36), we obtain the expansion of $f$, so that we know the solution of (32) in this particular case. But the particular case is generic: $S = \{a^n b \mid n \ge 0\}$ is a prefix code, and the $S_n$ defined by (38) are algebraically independent. The general solution (33) is then obtained by decomposing the words of $Db$ over the code $S$.
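The decomposition over the prefix code S can be made concrete with the following sketch (an illustration, not the paper's construction): Dyck words are generated from the first-return recursion, and each word of Db is cut at its letters b, so that a factor a^k b is read as S_k.

```python
# Generate Dyck words and factor the words of Db over the code S = { a^k b }.
from functools import lru_cache

@lru_cache(maxsize=None)
def dyck(n):
    """Dyck words with n letters a, via the first-return decomposition D = 1 + aDbD."""
    if n == 0:
        return ("",)
    return tuple("a" + u + "b" + v
                 for k in range(n)
                 for u in dyck(k)
                 for v in dyck(n - 1 - k))

def factor_over_code(word):
    """Split a word of Db into factors a^k b, returning the list of exponents k."""
    factors, count = [], 0
    for letter in word:
        if letter == "a":
            count += 1
        else:               # a letter b closes the current factor a^count b
            factors.append(count)
            count = 0
    return factors

for w in dyck(3):
    print(w + "b", "->", factor_over_code(w + "b"))
# e.g. aabbabb -> [2, 0, 1, 0], read as the product S_2 S_0 S_1 S_0
```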
This expression being granted, the other version of the solution (as a quotient of series) is obtained directly from (16) as above.
In [19], the specialization $S_n = \frac{1}{n!}\, a^n b$ is also considered, leading to what the authors have called noncommutative inversion polynomials.
5.3. Trees. Alternatively, Formula (32) can be interpreted as a sum over ordered trees. Let us set $c = S_0$, $d_n = S_n$, and interpret $d_n$ as the symbol of an $n$-ary operation in Polish notation, so that, for example, the five terms of $f_3$ are the Polish notations of the five ordered trees of Figure 1. This implies an expression of the coefficients $\delta_I$ defined by
(42) $g_n = \mathrm{ch}(\mathrm{PF}_n) = \sum_{I \vDash n} \delta_I S^I$,
since $g_n$ is obtained from $f_n$ by setting $c = 1$ in (30). Indeed, given a tree $T$, define its skeleton as the tree obtained by removing the leaves $c$ and labeling the internal vertices with their arity. Given the skeleton $S$ of a tree $T$, define its 0-composition $I_0(S)$ as the sequence formed by the values of the labels of the vertices of $S$ read in prefix order. For example, Figure 2 shows a tree and its skeleton; the corresponding 0-composition is (3,2,4,2).
The number of trees with a given skeleton $S$ is obviously
(43) $\prod_{k=1}^{p} \binom{i_k}{a_k}$,
where $I_0(S) = (i_1, \ldots, i_p)$ and $a_k$ is the arity of the $k$-th vertex of the tree $S$, numbered in prefix order.
Figure 2. A tree and its skeleton.
Let $I = (i_1, \ldots, i_p)$ be a composition of $n$. We are now in a position to compute $\delta_I$. Indeed, the coefficient of $S^I$ in $g_n$ is equal to the number of ordered trees on $n+1$ vertices whose sequence of non-zero arities in prefix order is $I$. The skeletons of these trees are the ordered trees on $p$ vertices labeled by the elements of $I$ in prefix order. The sequences of arities of the skeletons in prefix order are all sequences $(a_1, \ldots, a_p)$ such that $a_1 + \cdots + a_j \ge j$ for $j < p$, with equality to $p-1$ for $j = p$. Thus $a_p = 0$ and $a_1 + \cdots + a_{p-1} = p - 1$, so that
$\delta_I = \sum \prod_{k=1}^{p-1} \binom{i_k}{a_k}$,
where the sum is taken over the set of sequences $(a_1, \ldots, a_{p-1})$ such that $a_1 + \cdots + a_j \ge j$ for all $j$ and $a_1 + \cdots + a_{p-1} = p - 1$.
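The coefficients δ_I can be checked by brute force for small n. The following sketch (illustrative only) enumerates ordered trees on n+1 vertices and tallies the compositions of non-zero arities in prefix order, recovering for n = 3 the multiplicities behind c_3 = h_3 + 3h_21 + h_111 from the introduction.

```python
# Compute delta_I for small n by enumerating plane (ordered) trees.
from collections import Counter
from functools import lru_cache

@lru_cache(maxsize=None)
def plane_trees(n):
    """Ordered trees with n vertices, encoded as nested tuples of subtrees."""
    if n == 1:
        return ((),)
    trees = []
    for k in range(1, n):                      # size of the leftmost subtree
        for first in plane_trees(k):
            for rest in plane_trees(n - k):    # the tree with that subtree removed
                trees.append((first,) + rest)
    return tuple(trees)

def arity_composition(tree):
    """Non-zero arities of the vertices, read in prefix order."""
    out, stack = [], [tree]
    while stack:
        node = stack.pop()
        if node:                               # internal vertex: record its arity
            out.append(len(node))
            stack.extend(reversed(node))       # children, leftmost on top
    return tuple(out)

n = 3
delta = Counter(arity_composition(t) for t in plane_trees(n + 1))
print(delta)
# delta_(3)=1, delta_(21)=2, delta_(12)=1, delta_(111)=1: commutatively h_3 + 3 h_21 + h_111
```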
6. Noncommutative formal diffeomorphisms
6.1. Another form of the noncommutative Lagrange inversion has been obtained by Brouder-Frabetti-Krattenthaler [1]. It is stated in the form of an explicit formula for the antipode of the Hopf algebra $H^{\mathrm{dif}}$ of "formal diffeomorphisms". As an associative algebra, $H^{\mathrm{dif}}$ can be identified with Sym by means of the correspondence $a_n = S_n = S_n(A)$, and the coproduct can then be expressed in this notation. Computing the antipode amounts to finding a series $h(A)$, which satisfies a functional equation (48) differing from that of Gessel and Pak-Postnikov-Retakh, which reads (49). However, the difference is not that big, since we have
Theorem 6.1. The noncommutative symmetric series $h(A)$ and $g(A)$, respectively defined by (48) and (49), are related by an explicit formula.
Proof. This is a good illustration of the power of the "noncommutative λ-ring notation": using the expression of $g(A)$ obtained by putting $q = 1$ in (20), the relation follows by a direct computation.
Remark 6.2. This calculation works as well for the q-analog, and allows one to compute the antipode of the q-deformed coproduct. For $q = 0$, this is the usual coproduct of Sym. We have therefore an interpolation between the two structures, the combinatorics being governed by the q-Lagrange formula, hence by parking functions.
6.2. Trees. One can give for h(A) a combinatorial interpretation analogous to (33). Starting from the generalized inversion problem, we recast it in a form where each d_i can be interpreted as the symbol of an (i + 1)-ary operation in Polish notation. Then, f_n is the sum of Polish codes of ordered trees with no vertex of arity 1 on 2n + 1 vertices (or Schröder bracketings of the words c^{n+1}), as one can check on Figure 4, which displays the terms f_0, f_1, f_2 as sums of ordered trees. From this, we can easily recover Formula (2.21) of [1]. This amounts to setting c = 1 and solving the resulting equation. We proceed as in the previous section. Given the skeleton S of a tree T, define its 1-composition I_1(S) as the sequence of values of the labels of the vertices of S minus 1 in prefix order. Then, thanks to Equation (43), the number of trees with skeleton S is $\prod_{k=1}^{p} \binom{i_k+1}{a_k}$, where I_1(S) = (i_1, . . . , i_p) and a_k is the arity of the k-th vertex of S, numbered in prefix order. For example, there are 34 trees whose skeletons have (1, 3, 1, 1) as associated 1-composition, as one can check on Figure 5. Let I = (i_1, . . . , i_p) be a composition of n. The coefficient of S^I in h_n is equal to the number of ordered trees on 2n + 1 vertices whose sequence of non-zero arities minus one in the prefix reading is I. As before, the skeletons of these trees are the ordered trees on p vertices labeled by one plus the elements of I in prefix order. The sequences of arities of the skeletons are the same as before, so that the coefficient of S^I in h_n is $\sum \prod_{k=1}^{p-1} \binom{i_k+1}{a_k}$, where the sum is taken over the set of a_k such that a_1 + · · · + a_j ≥ j for all j and a_1 + · · · + a_{p−1} = p − 1. This is Formula (2.23) of [1]. In this presentation, it is clear that the sum is over a Catalan set, here the ordered trees.
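The count of 34 trees mentioned above can be checked directly with a few lines of Python (a small sketch; the names are ours), by summing the products of binomial coefficients over the admissible arity sequences of the skeletons.

```python
from math import comb
from itertools import product

I = (1, 3, 1, 1)                 # 1-composition: internal arities are (2, 4, 2, 2)
p = len(I)
total = 0
for a in product(range(p), repeat=p - 1):
    if sum(a) != p - 1 or any(sum(a[:j + 1]) < j + 1 for j in range(p - 1)):
        continue
    prod = 1
    for ik, ak in zip(I, a + (0,)):      # a_p = 0
        prod *= comb(ik + 1, ak)
    total += prod
print(total)                      # 34
```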
Explicit expressions in various bases
7.1. We shall now compute the coefficients of the expansions of g (or h as well) in the bases (R I ) and (Λ I ) of Sym.
The expansion on ribbons can be given for the q-analogs. Let QRP(I) be the set of parking quasi-ribbons of shape I (see [18]), and let Then, since two words with the same evaluation are hypoplactically equivalent iff the inverses of their standardized have the same descents, For example, For q = 1, this expansion presents a remarkable symmetry. The expansion on elementary functions is given by the same formula as the expansion on ribbons, up to sign and conjugation of the compositions: 7.
To simplify the presentation, we shall identify a nondecreasing parking function with its evaluation. More precisely, define a generalized composition as a composition where zeros are allowed. The composition obtained by removing all zeros is called the corresponding composition. A generalized composition I of n is of parking type iff it is of length n + 1 and i_1 + . . . + i_k ≥ k for all k in [1, n]. In other words, the set of generalized compositions of parking type is the set of evaluations of parking functions with an appropriate number of trailing zeros. Before describing the involution on generalized compositions of parking type, we need some more structure on the set of elements having the same packed evaluation. For each composition I of n, build a directed graph Γ_I with vertex set given by generalized compositions of parking type with corresponding composition I, and an arrow J −→ J′ iff J′ is obtained from J by exchanging two consecutive parts of J, j_i and j_{i+1}, such that j_i or j_{i+1} is 0, an operation reminiscent of Hivert's quasi-symmetrizing action [7]. For example, Γ_331 and Γ_21211 are given on Figure 6 (0s are represented by dots). Γ_I can be seen as an initial interval of a permutohedron: consider the word K = 0^{n+1−l(I)} and the shuffle S = I ⧢ K. To these elements corresponds naturally an element of the shuffle S′ = (1 2 · · · l(I)) ⧢ (l(I)+1 · · · n+1). Then if one restricts to the elements of S that are the evaluation of a nondecreasing parking function, for any such element s, all the permutations smaller than the corresponding element s′ in S′ correspond to evaluations of nondecreasing parking functions: indeed, this means that if J is of parking type, all generalized compositions obtained from J by moving zeros to the right also are of parking type, which is obvious. Now there is only one minimal element, the concatenation of I and K, and only one maximal element, the evaluation where any non-zero entry i, except for the last one, is followed by exactly i − 1 zeros before the next non-zero entry: no successor of this element is the evaluation of a parking function and all other elements have at least one successor of this type.
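As a small illustration (a Python sketch with names of our choosing), one can enumerate the vertex set of Γ_331 directly from the definitions, and check that the bottom element and the element described as maximal (each non-zero entry i, except the last, followed by i − 1 zeros) indeed belong to it.

```python
from itertools import combinations

I = (3, 3, 1)
n = sum(I)                              # 7, so generalized compositions have length 8

def is_parking_type(J):
    s = 0
    for k in range(n):                  # condition for k in [1, n]
        s += J[k]
        if s < k + 1:
            return False
    return True

vertices = set()
for zero_positions in combinations(range(n + 1), n + 1 - len(I)):
    parts = iter(I)
    J = tuple(0 if pos in zero_positions else next(parts) for pos in range(n + 1))
    if is_parking_type(J):
        vertices.add(J)

print(len(vertices), "vertices in Gamma_331")
assert (3, 3, 1, 0, 0, 0, 0, 0) in vertices     # bottom element: I followed by zeros
assert (3, 0, 0, 3, 0, 0, 1, 0) in vertices     # element described as maximal
```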
We are now in a position to describe the involution on generalized compositions of parking type which induces the conjugation on the underlying compositions.
Algorithm 7.2. Let J be a generalized composition of parking type.
• Let J ′ be the tuple obtained by reading J from right to left.
• Compute the conjugate C of the corresponding composition of J,
• fill the zero slots of J′ by the parts of C,
• replace by 0 the nonzero parts of J′.
Proof -The algorithm is an involution since the conjugation of compositions is one, so we only have to prove that the output is of parking type if the input is. By construction of Γ I , it is sufficient to prove that the image of the bottom element of Γ I is of parking type. Thanks to its characterization, it is obvious that this bottom element is sent to the bottom element of Γ I ∼ by our involution.
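The algorithm is easy to implement; the following Python sketch (function names are ours, and the conjugate of a composition is computed through descent sets, a convention under which (3,3,1)∼ = (2,1,2,1,1) as on Figure 6) checks on all vertices of Γ_331 that the map is an involution sending parking type to parking type.

```python
from itertools import combinations

def conjugate(I):
    n = sum(I)
    descents, s = set(), 0
    for part in I[:-1]:
        s += part
        descents.add(s)
    conj_descents = sorted(n - d for d in range(1, n) if d not in descents)
    parts, prev = [], 0
    for d in conj_descents + [n]:
        parts.append(d - prev)
        prev = d
    return tuple(parts)

def involution(J):
    I = tuple(x for x in J if x != 0)          # corresponding composition
    C = iter(conjugate(I))
    # read J from right to left, fill zero slots with the parts of the conjugate
    return tuple(next(C) if x == 0 else 0 for x in reversed(J))

def is_parking_type(J):
    n, s = sum(J), 0
    for k in range(n):
        s += J[k]
        if s < k + 1:
            return False
    return True

I = (3, 3, 1)
n = sum(I)
vertices = []
for zeros in combinations(range(n + 1), n + 1 - len(I)):
    parts = iter(I)
    J = tuple(0 if pos in zeros else next(parts) for pos in range(n + 1))
    if is_parking_type(J):
        vertices.append(J)

for J in vertices:
    K = involution(J)
    assert is_parking_type(K)
    assert tuple(x for x in K if x != 0) == conjugate(I)
    assert involution(K) == J
print(len(vertices), "vertices, all checks passed")
```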
Theorem 7.4. The graphs Γ I and Γ I ∼ associated to mutually conjugate compositions of n are isomorphic. Moreover, if one labels the edges by i when one exchanges the letters in positions i and i + 1, then the labels of the edges are exchanged by the involution i ↔ n + 1 − i.
Proof -The graph Γ I corresponds to a part of the shuffle I 0 n+1−l(I) whereas the graph Γ I ∼ corresponds to a part of the shuffle I ∼ 0 n+1−l(I ∼ ) . It is known that l(I) + l(I ∼ ) = n + 1 so that both graphs correspond to parts of a shuffle of an element of length l(I) with an element of length l(I ∼ ). Moreover, given the definition of the edges of both graphs, an edge labelled i between P and P ′ proves that there is an edge labelled n + 1 − i between ι(P ) and ι(P ′ ). So both graphs are isomorphic.
For example, the two graphs on Figure 6 corresponding to 331 and 331 ∼ are indeed isomorphic.
A representation theoretical interpretation.
In fact, nondecreasing parking functions form a sub-semigroup of the semigroup of all endofunctions of [n]. Its representation theory has been investigated by Hivert and Thiéry [9], and it follows from their work that the graphs Γ I (now seen on nondecreasing parking functions instead of generalized compositions of parking type) encode the indecomposable projective modules P I of the semigroup algebra C n = C[NDPF n ]. Indeed, these modules are parametrized by compositions of n, and each P I has a basis (b π ) π∈NDPF I , such that if one denotes by e i the generator mapping i + 1 to i and leaving invariant all other j, e i • b π = b π ′ iff π i −→ π ′ and e i • b π = 0 otherwise. Thus, on the one hand, the coefficients δ I of the expansion (72) g n = ch(PF n ) = I n δ I S I are the dimensions δ I = dim P I of the indecomposable projective modules of C n . On the other hand, the noncommutative symmetric functions S I are the characteristics of the permutational modules of H n (0), which are projective, but decomposable for I = (n). As also shown in [9], these permutational modules are in fact the indecomposable projective modules for a larger algebra, the Hecke-symmetric algebra HS n . One can check that the right action of HS n on C PF n and the left action of C n (by composition π • a) commute with each other, so that the expression (72) of ch(PF n ) reflects the decomposition of C PF n as a (C n , HS n )-bimodule. The coefficients (λ I ) of the ribbon expansion can be similarly interpreted as the dimensions of the projective modules of the commutant of H n (0) in C PF n , an algebra D n having as dimension the Schröder number s n and containing C n . 8. Noncommutative Abel identities 8.1. Abel's generalization of the binomial identity can be stated as that is, the Abel polynomials form a sequence of binomial type. Let E be the exponential alphabet. If we set g n = ch(PF n ), we have g n (tE) = (n + 1) n−1 t n /n!, and it follows from (16) that ch(PF (r) n ) =: g (r) where (76) g (r) n (tE) = t n n! |PF (r) n | = t n n! r(r + n) n−1 and PF (r) n is the set of words a whose nondecreasing rearrangement satisfies a ↑ i ≤ i + r − 1. This is because of the self-evident generalization of (16) for all j in [1, r − 1], which implies in particular Hence, the tE-specialization of g x is the exponential generating function of Abel's polynomials, and Abel's identity amounts to the obvious equality g x g y = g x+y .
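The enumeration r(r + n)^{n−1} is easy to confirm by brute force for small values; here is a short Python sketch (names are ours) working directly from the definition of PF_n^{(r)}.

```python
from itertools import product

def count_pf(n, r):
    # words a of length n whose nondecreasing rearrangement satisfies a_i <= i + r - 1
    count = 0
    for a in product(range(1, n + r), repeat=n):
        s = sorted(a)
        if all(s[i] <= i + r for i in range(n)):   # s[i] is the (i+1)-th smallest letter
            count += 1
    return count

for n in range(1, 5):
    for r in range(1, 4):
        assert count_pf(n, r) == r * (r + n) ** (n - 1)
print("ok")
```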
We can therefore define the noncommutative Abel polynomial P n (x; A) as the term of degree n in g(A) x . It can be computed directly using the binomial expansion of g(A) x = (1 + U) x = n x n U n . For example, P 1 (x; A) =xS 1 , In particular, one has (80) ch(PF (r) n ) = P n (r; A) . But this characteristic can also be computed directly. Indeed, since PF (r) n is a permutational module, we have where α I is the number of nondecreasing words a ∈ PF (r) n with packed evaluation I. These elements can be classified according to their parkized b = Park(a) (see [18]), which is an ordinary nondecreasing parking function. The cardinality α b of such a class is a binomial coefficient. To see this, let be the maximal factorization of b into connected nondecreasing parking functions (• denoting shifted concatenation, see [18]). The nondecreasing a ∈ PF (r) n such that Park(a) = b are obtained by shifting each factor b i of an amount k i , such that k 1 + · · · + k m ≤ r. Thus, Set c(b) = m. Formula (83) being valid for all positive integers r, we have in general Similarly, the coefficient of S 31 in P 3 is corresponding to 1112, 1113 and 1114 = 111 • 1.
8.2. By construction, the specialization A = E gives back the Abel polynomials. As usual, the specialization A = 1 is also interesting. Let a(n, m) be the Catalan triangle [20, A009766]. That is, x(x + 1) · · · (x + n − 1) n! , that is, S n (x) is the coefficient of t n in (1 − t) −x (which can be interpreted as σ t (x) for x a binomial element, whence the choice of notation). We can now state: Theorem 8.1. The specialization A = 1 of the noncommutative Abel polynomials P n (x; A) is given by Moreover, their generating series is Proof -Equation (94) is clear if one rewrites the quadratic equation for C(z) as Equation (93) follows from (84), since (91) shows that c(n, k) is the number of nondecreasing parking functions of length n such that c(b) = k. It can also be proved analytically. The generating series of the right-hand sides of (93) can be written as a contour integral, over a circle γ = {|w| = ε < 1} where a = zC(z) and f (w) = (1 − w) −x . For |z| small enough, a is inside γ, and by Cauchy's theorem, the right-hand side is The coefficients of the P n (x; 1) build up the triangle [20, A038455]. In fact, is Lambert's generalized binomial series (see [5], (5.68) p. 200). According to [5], (5.70), we have finally the closed expression (99) P n (x; 1) = x + 2n n x x + 2n .
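The closed expression (99) can also be checked numerically against the coefficients of C(z)^x, assuming, as suggested by the proof of Theorem 8.1, that the generating series (94) specializes to a power of the Catalan series C(z); the following Python sketch (names are ours) compares the two for small integer x.

```python
from fractions import Fraction
from math import comb

N = 8
catalan = [Fraction(comb(2 * k, k), k + 1) for k in range(N + 1)]   # C(z) truncated

def mul(f, g):   # truncated product of power series
    return [sum(f[i] * g[k - i] for i in range(k + 1)) for k in range(N + 1)]

for x in range(1, 5):
    series = [Fraction(1)] + [Fraction(0)] * N   # C(z)^0
    for _ in range(x):
        series = mul(series, catalan)            # C(z)^x
    for n in range(N + 1):
        closed = Fraction(x, x + 2 * n) * comb(x + 2 * n, n)   # formula (99)
        assert series[n] == closed
print("ok")
```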
(k, l)-Parking functions
There is a general notion of parking functions associated with a sequence u = (u n ) n≥1 of positive integers: these are the words a such that (a ↑ ) i ≤ u i . In general, their enumeration can be obtained only in terms of Goncarov polynomials [14]. In the particular case where u is an arithmetic progression, it is possible to obtain closed formulas, of which we shall now give the noncommutative analogs. Let k} . Stanley and Pitman [21] have shown that (101) |PF (k,l) n | = l(l + kn) n−1 . As above, this can be extended to the calculation of the 0-Hecke characteristic. The argument used for (16) proves as well (cf. [14]) the last equality following from Abel's identity (the middle term is g(tkE) l/k ). Note that this can also be expressed in terms of the generalized exponential series of [5] (106) E α (z) = n≥0 (nα + 1) n−1 z n n! , that is, If we set we see that Hence, for any α since this is true for α rational. This equality implies most of the interesting properties of E α (z). Indeed, let us write down explicitly the functional equation for g(zE) (111) g(zE) = n≥0 S n (zE)g(zE) n = n≥0 z n n! g(zE) n = e zg(zE) .
The general solution f n is given by the Polish codes of ordered trees with (b+1)n+1 leaves and no vertices of arity between 1 and b.
We are interested in the special case that is, we want to compute the coefficients δ Given the skeleton of a tree such that each internal vertex is of arity at least b + 1, one can define its b-composition I b (S) as the sequence of values of the labels of the vertices of S minus b, in prefix order. Thanks to Equation (43), the number of trees with skeleton S is where I b (S) = (i 1 , . . . , i p ) and a k is the arity of the k-th vertex of S in prefix order, so that, as in Equations (44) where the sum is again taken over sequences (a 1 , . . . , a p−1 ) such that a 1 + · · · + a j ≥ j for all j and a 1 + · · · + a p−1 = p − 1. This amounts to enumerate the trees by arity of the root. The triangles (γ (b) p,n ) include some classical triangles of the combinatorial literature: for n = 0, one recovers the Catalan triangle (sequence A033184 of [20]), for n = 1, one recovers the Schröder triangle (sequence A091370 of [20]). Their first terms are given on Figure 7. The triangles for b = 2 and b = 3 are given on Figure 8. Note that although they are not (yet) referenced in [20], the row sums of the case b = 2 yields Sequence A108447, with a quite different interpretation. and one recovers up to sign the Motzkin triangle (sequence A091836 of [20], see Figure 9) splitting up the Motzkin numbers (sequence A001006 of [20]) when putting c = 1 and considering d i as a (i−1)-ary operation. Recall that Motzkin paths are the paths from (0, 0) to (n, 0), with three kinds of steps (1, 0), (1, 1), and (1, −1), that never go below the horizontal axis. The bijection between trees and Motzkin paths is as follows: let P be a Motzkin path. Let 0 = i 1 < · · · < i k = n be the sequence of abscissas of integer points (i, 0) belonging to P (also called the returns to zero of P ). Denote by P j the part of P between (i j , 0) and (i j+1 , 0). Note that those elements have no non-trivial returns to zero.
Then the tree corresponding to P is built in the following recursive way: put k at the root of the tree (meaning d k ). If P j = (1, 0) then put c as the j-th son of the root. Else, P j is of the form P j = (1, 1)Q j (1, −1). Then insert Q j recursively as the j-th son of the root. Figure 10 presents an example of the bijection. | 2014-10-01T00:00:00.000Z | 2005-12-26T00:00:00.000 | {
"year": 2005,
"sha1": "1895a9e9afe2cb4e1ae3a70e6a61185bf98ea49c",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "https://doi.org/10.1016/j.aam.2007.05.005",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "8df9286e6d143cc3d1efad97237da1306ecb6405",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
88515171 | pes2o/s2orc | v3-fos-license | Online Natural Gradient as a Kalman Filter
We cast Amari's natural gradient in statistical learning as a specific case of Kalman filtering. Namely, applying an extended Kalman filter to estimate a fixed unknown parameter of a probabilistic model from a series of observations, is rigorously equivalent to estimating this parameter via an online stochastic natural gradient descent on the log-likelihood of the observations. In the i.i.d. case, this relation is a consequence of the"information filter"phrasing of the extended Kalman filter. In the recurrent (state space, non-i.i.d.) case, we prove that the joint Kalman filter over states and parameters is a natural gradient on top of real-time recurrent learning (RTRL), a classical algorithm to train recurrent models. This exact algebraic correspondence provides relevant interpretations for natural gradient hyperparameters such as learning rates or initialization and regularization of the Fisher information matrix.
In statistical learning, stochastic gradient descent is a widely used tool to estimate the parameters of a model from empirical data, especially when the parameter dimension and the amount of data are large [BL03] (such as is typically the case with neural networks, for instance). The natural gradient [Ama98] is a tool from information geometry, which aims at correcting several shortcomings of the widely used ordinary stochastic gradient descent, such as its sensitivity to rescalings or simple changes of variables in parameter space [Oll15]. The natural gradient modifies the ordinary gradient by using the information geometry of the statistical model, via the Fisher information matrix (see formal definition in Section 1.2; see also [Mar14]). The natural gradient comes with a theoretical guarantee of asymptotic optimality [Ama98] that the ordinary gradient lacks, and with the theoretical knowledge and various connections from information geometry, e.g., [AN00,OAAH17]. In large dimension, its computational complexity makes approximations necessary, e.g., [LMB07,Oll15,MCO16,GS15,MG15]; this has limited its adoption despite many desirable theoretical properties.
The extended Kalman filter (see e.g., the textbooks [Sim06, Sä13, Jaz70]) is a generic and effective tool to estimate in real time the state of a nonlinear dynamical system, from noisy measurements of some part or some function of the system. (The ordinary Kalman filter deals with linear systems.) Its use in navigation systems (GPS, vehicle control, spacecraft...), time series analysis, econometrics, etc. [Sä13], is extensive to the point it can be described as "one of the great discoveries of mathematical engineering" [GA15].
The goal of this text is to show that the natural gradient, when applied online, is a particular case of the extended Kalman filter. Indeed, the extended Kalman filter can be used to estimate the parameters of a statistical model (probability distribution), by viewing the parameters as the hidden state of a "static" dynamical system, and viewing i.i.d. samples as noisy observations depending on the parameters 1 . We show that doing so is exactly equivalent to performing an online stochastic natural gradient descent (Theorem 2).
This results in a rigorous dictionary between the natural gradient objects from statistical learning, and the objects appearing in Kalman filtering; for instance, a larger learning rate for the natural gradient descent exactly corresponds to a fading memory in the Kalman filter (Proposition 3). Table 1 lists a few correspondences between objects from the Kalman filter side and from the natural gradient side, as results from the theorems and propositions below. Note that the correspondence is one-sided: the online natural gradient is exactly an extended Kalman filter, but only corresponds to a particular use of the Kalman filter for parameter estimation problems (i.e., with static dynamics on the parameter part of the system).
Beyond the static case, we also consider the learning of the parameters of a general dynamical system, where subsequent observations exhibit temporal patterns instead of being i.i.d.; in statistical learning this is called a recurrent model, for instance, a recurrent neural network. We refer to [Jae02] for an introduction to recurrent models in statistical learning (recurrent neural networks) and the afferent techniques (including Kalman filters), and to [Hay01] for a clear, in-depth treatment of Kalman filtering for recurrent models. We prove (Theorem 12) that the extended Kalman filter applied jointly to the state and parameter, amounts to a natural gradient on top of real-time recurrent learning (RTRL), a classical (and costly) online algorithm for recurrent network training [Jae02].
Thus, we provide a bridge between techniques from large-scale statistical learning (natural gradient, RTRL) and a central object from mathematical engineering, signal processing, and estimation theory. Casting the natural gradient as a specific case of the extended Kalman filter is an instance of the provocative statement from [LS83] that "there is only one recursive identification method" that is optimal on quadratic functions. Indeed, the online natural gradient descent fits into the framework of [LS83,§3.4.5]. Arguably, 1 For this we slightly extend the definition of the Kalman filter to include discrete observations, by defining (Def. 5) the measurement error as T (y) −ŷ instead of y −ŷ, where T is the sufficient statistics of an exponential family model for output noise with meanŷ. This reduces to the standard filter for Gaussian output noise, and naturally covers categorical outputs as often used in statistical learning (withŷ the class probabilities in a softmax classifier and T a "one-hot" encoding of y).
Table 1. Some correspondences between extended Kalman filter objects (left) and online natural gradient objects (right).

i.i.d. (static, non-recurrent) model ŷ_t = h(θ, u_t):
• Extended Kalman filter on static parameter θ ↔ Online natural gradient on θ with learning rate η_t = 1/(t + 1)
• Covariance matrix P_t ↔ Fisher information matrix J_t = η_t P_t^{−1}
• Bayesian prior P_0 ↔ Fisher matrix initialization J_0 = P_0^{−1}
• Fading memory ↔ Larger or constant learning rate
• Fading memory + constant prior ↔ Fisher matrix regularization

Recurrent (state space) model ŷ_t = Φ(ŷ_{t−1}, θ, u_t):
• Extended Kalman filter on (θ, ŷ) ↔ RTRL + natural gradient + state correction
• Covariance of θ alone, P^θ ↔ Fisher matrix J_t = η_t (P^θ)^{−1}
• Correlation between θ and ŷ_t ↔ RTRL gradient estimate ∂ŷ_t/∂θ

[Ber96] identifies the extended Kalman filter with a Gauss-Newton gradient descent for the specific case of nonlinear regression. [dFNG00] interprets process noise in the static Kalman filter as an adaptive, per-parameter learning rate, thus akin to a preconditioning matrix. [ŠKT01] uses the Fisher information matrix to study the variance of parameter estimation in Kalman-like filters, without using a natural gradient; [BL03] comment on the similarity between Kalman filtering and a version of Amari's natural gradient for the specific case of least squares regression; [Mar14] and [Oll15] mention the relationship between natural gradient and the Gauss-Newton Hessian approximation; [Pat16] exploits the relationship between second-order gradient descent and Kalman filtering in specific cases including linear regression; [LCL+17] use a natural gradient descent over Gaussian distributions for an auxiliary problem arising in Kalman-like Bayesian filtering, a problem independent from the one treated here. For the recurrent (non-i.i.d.) case, our result is that joint Kalman filtering is essentially a natural gradient on top of the classical RTRL algorithm for recurrent models [Jae02]. [Wil92] already observed that starting with the Kalman filter and introducing drastic simplifications (doing away with the covariance matrix) results in RTRL, while [Hay01, §5] contains statements that can be interpreted as relating Kalman filtering and preconditioned RTRL-like gradient descent for recurrent models (Section 3.2).
Perspectives.
In this text our goal is to derive the precise correspondence between natural gradient and Kalman filtering for parameter estimation (Thm. 2, Prop. 3, Prop. 4, Thm. 12), and to work out an exact dictionary between the mathematical objects on both sides. This correspondence suggests several possible venues for research, which nevertheless are not explored here.
First, the correspondence with the Kalman filter brings new interpretations and suggestions for several natural gradient hyperparameters, such as Fisher matrix initialization, equality between Fisher matrix decay rate and learning rate, or amount of regularization to the Fisher matrix (Section 2.2). The natural gradient can be quite sensitive to these hyperparameters. A first step would be to test the matrix decay rate and regularization values suggested by the Bayesian interpretation (Prop. 4) and see if they help with the natural gradient, or if these suggestions are overriden by the various approximations needed to apply the natural gradient in practice. These empirical tests are beyond the scope of the present study.
Next, since statistical learning deals with either continuous or categorical data, we had to extend the usual Kalman filter to such a setting. Traditionally, non-Gaussian output models have been treated by applying a nonlinearity to a standard Gaussian noise (Section 2.3). Instead, modeling the measurement noise as an exponential family (Appendix and Def. 5) allows for a unified treatment of the standard case (Gaussian output noise with known variance), of discrete categorical observations, or other exponential noise models (e.g., Gaussian noise with unknown variance). We did not test the empirical consequences of this choice, but it certainly makes the mathematical treatment flow smoothly, in particular the view of the Kalman filter as preconditioned gradient descent (Prop. 6).
Neither the natural gradient nor the extended Kalman filter scale well to large-dimensional models as currently used in machine learning, so that approximations are required. The correspondence raises the possibility that various methods developed for Kalman filtering (e.g., particle or unscented filters) or for natural gradient approximations (e.g., matrix factorizations such as the Kronecker product [MG15] or quasi-diagonal reductions [Oll15,MCO16]) could be transferred from one viewpoint to the other.
In statistical learning, other means have been developed to attain the same asymptotic efficiency as the natural gradient, notably trajectory averaging (e.g. [PJ92], or [Mar14] for the relationship to natural gradient) at little algorithmic cost. One may wonder if these can be generalized to filtering problems.
Proof techniques could be transferred as well: for instance, Amari [Ama98] gave a strong but sometimes informal argument that the natural gradient is Fisher-efficient, i.e., the resulting parameter estimate is asymptotically optimal for the Cramér-Rao bound; alternate proofs could be obtained by transferring related statements for the extended Kalman filter, e.g., combining techniques from [ŠKT01,BRD97,LS83].
Organization of the text. In Section 1 we set the notation, recall the definition of the natural gradient (Def. 1), and explain how Kalman filtering can be used for parameter estimation in statistical learning (Section 1.3); the definition of the Kalman filter is included in Def. 5. Section 2 gives the main statements for viewing the natural gradient as an instance of an extended Kalman filter for i.i.d. observations (static systems), first intuitively via a heuristic asymptotic argument (Section 2.1), then rigorously (Thm. 2, Prop. 3, Prop. 4). The proof of these results appears in Section 2.3 and sheds some light on the geometry of Kalman filtering. Finally, the case of non-i.i.d. observations (recurrent or state space model) is treated in Section 3.
Acknowledgments. Many thanks to Silvère Bonnabel, Gaétan Marceau-Caron, and the anonymous reviewers for their careful reading of the manuscript, corrections, and suggestions for the presentation and organization of the text. I would also like to thank Shun-ichi Amari, Frédéric Barbaresco, and Nando de Freitas for additional comments and for pointing out relevant references.
1 Problem setting, natural gradient, Kalman filter
Problem setting
In statistical learning, we have a series of observation pairs (u_1, y_1), . . . , (u_t, y_t), . . . and want to predict y_t from u_t using a probabilistic model p_θ. Assume for now that y_t is real-valued (regression problem) and that the model for y_t is a Gaussian centered on a predicted value ŷ_t, with known covariance matrix R_t, namely $y_t \sim \mathcal{N}(\hat y_t, R_t)$ with $\hat y_t = h(\theta, u_t)$. The function h may represent any computation, for instance, a feedforward neural network with input u, parameters θ, and output ŷ. The goal is to find the parameters θ such that the prediction ŷ_t = h(θ, u_t) is as close as possible to y_t: the loss function is $\ell_t := -\ln p(y_t|\hat y_t) = \frac{1}{2}(y_t-\hat y_t)^{\top} R_t^{-1}(y_t-\hat y_t)$ up to an additive constant. For non-Gaussian outputs, we assume that the noise model on y_t given ŷ_t belongs to an exponential family, namely, that ŷ_t is the mean parameter of an exponential family of distributions 2 over y_t; we again define the loss function as ℓ_t := − ln p(y_t|ŷ_t), and the output noise R_t can be defined as the covariance matrix of the sufficient statistics of y_t given this mean (Def. 5). For a Gaussian output noise this works as expected. For instance, for a classification problem, the output is categorical, y_t ∈ {1, . . . , K}, and ŷ_t will be the set of probabilities ŷ_t = (p_1, . . . , p_{K−1}) to have y_t = 1, . . . , K − 1.
(The last probability p_K is determined by the others via $\sum_k p_k = 1$ and has to be excluded to obtain a non-degenerate parameterization and an invertible covariance matrix R_t.) This convention allows us to extend the definition of the Kalman filter to such a setting (Def. 5) in a natural way, just by replacing the measurement error y_t − ŷ_t with T(y_t) − ŷ_t, with T the sufficient statistics for the exponential family. (For Gaussian noise this is the same, as T(y) is y.) In neural network terms, this means that the output layer of the network is fed to a loss function that is the log-loss of an exponential family, but places no restriction on the rest of the model.
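As a tiny illustration of this convention (a sketch with made-up numbers), for a 3-class output the prediction ŷ holds the first two class probabilities, T(y) is the corresponding one-hot encoding, and R is the covariance matrix of T given ŷ:

```python
import numpy as np

y_hat = np.array([0.5, 0.3])            # p_1, p_2 (p_3 = 0.2 is implied)
y = 2                                    # observed class, in {1, 2, 3}
T = np.array([1.0 if y == k else 0.0 for k in (1, 2)])   # sufficient statistics
E = T - y_hat                            # measurement error: [-0.5, 0.7]
R = np.diag(y_hat) - np.outer(y_hat, y_hat)   # covariance of T given y_hat
print(E, R, sep="\n")
```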
General notation.
In statistical learning, the external inputs or regressor variables are often denoted x. In Kalman filtering, x often denotes the state of the system, while the external inputs are often u. Thus we will avoid x altogether and denote by u the inputs and by s the state of the system.
The variable to be predicted at time t will be y_t, and ŷ_t is the corresponding prediction. In general ŷ_t and y_t may be different objects in that ŷ_t encodes a full probabilistic prediction for y_t. For Gaussians with known variance, ŷ_t is just the predicted mean of y_t, so in this case y_t and ŷ_t are the same type of object. For Gaussians with unknown variance, ŷ encodes both the mean and second moment of y. For discrete categorical data, ŷ encodes the probability of each possible outcome y.
Thus, the formal setting for this text is as follows: we are given a sequence of finite-dimensional observations (y t ) with each y t ∈ R dim(y) , a sequence of inputs (u t ) with each u t ∈ R dim(u) , a parametric model where Z(β) is a normalizing constant, and λ(dy) is any reference measure on y. For instance, if y ∈ R K , T k (y) = y k and λ(dy) is a Gaussian measure centered on 0, by varying β one gets all Gaussian measures with the same covariance matrix and another mean. y may be discrete, e.g., Bernoulli distributions correspond to λ the uniform measure on y ∈ {0, 1} and a single sufficient statistic T (0) = 0, T (1) = 1. Often, the mean parameter T := Ey∼p β T (y) is a more convenient parameterization than β. Exponential families maximize entropy (minimize information divergence from λ) for a given mean of T . ŷ = h(θ, u t ) with parameter θ ∈ R dim(θ) and h some fixed smooth function from R dim(θ) × R dim(u) to R dim(ŷ) . We are given an exponential family (output noise model) p(y|ŷ) on y with mean parameterŷ and sufficient statistics T (y) (see the Appendix), and we define the loss function ℓ t := − ln p(y t |ŷ t ).
The natural gradient descent on parameter θ t will use the Fisher matrix J t . The Kalman filter will have posterior covariance matrix P t .
For multidimensional quantities x and y = f(x), we denote by ∂y/∂x the Jacobian matrix of y w.r.t. x, whose (i, j) entry is ∂f_i(x)/∂x_j. This satisfies the chain rule (∂z/∂y)(∂y/∂x) = ∂z/∂x. With this convention, gradients of real-valued functions are row vectors, so that a gradient descent takes the form x ← x − η (∂f/∂x)^⊤. For a column vector u, u^{⊗2} is synonymous with uu^⊤, and with u^⊤u for a row vector.
Natural gradient descent
A standard approach to optimize the parameter θ of a probabilistic model, given a sequence of observations (y t ), is an online gradient descent with learning rate η t . This simple gradient descent is particularly suitable for large datasets and large-dimensional models [BL03], but has several practical and theoretical shortcomings. For instance, it uses the same non-adaptive learning rate for all parameter components. Moreover, simple changes in parameter encoding or in data presentation (e.g., encoding black and white in images by 0/1 or 1/0) can result in different learning performance. This motivated the introduction of the natural gradient [Ama98]. It is built to achieve invariance with respect to parameter re-encoding; in particular, learning become insensitive to the characteristic scale of each parameter direction, so that different directions naturally get suitable learning rates. The natural gradient is the only general way to achieve such invariance [AN00, §2.4].
The natural gradient preconditions the gradient descent with J(θ)^{−1} where J is the Fisher information matrix [Kul97] with respect to the parameter θ. For a smooth probabilistic model p(y|θ) over a random variable y with parameter θ, the latter is defined as $J(\theta) = \mathbb{E}_{y\sim p(y|\theta)}\left[\left(\frac{\partial \ln p(y|\theta)}{\partial\theta}\right)^{\top}\left(\frac{\partial \ln p(y|\theta)}{\partial\theta}\right)\right]$. Definition 1 below formally introduces the online natural gradient. If the model for y involves an input u, then an expectation or empirical average over the input is introduced in the definition of J [AN00, §8.2]. However, this comes at a large computational cost for large-dimensional models: just storing the Fisher matrix already costs O((dim θ)^2). Various strategies are available to approximate the natural gradient for complex models such as neural networks, using diagonal or block-diagonal approximation schemes for the Fisher matrix, e.g., [LMB07, Oll15, MCO16, GS15, MG15].
Definition 1 (Online natural gradient). Consider a statistical model with parameter θ that predicts an output y given an input u. Suppose that the prediction takes the form y ∼ p(y|ŷ) where ŷ = h(θ, u) depends on the input via a model h with parameter θ. Given observation pairs (u_t, y_t), the goal is to minimize, online, the loss function $\ell_t(\theta) := -\ln p(y_t|\hat y_t)$ as a function of θ.
The online natural gradient maintains a current estimate θ_t of the parameter θ, and a current approximation J_t of the Fisher matrix. The parameter is estimated by a gradient descent with preconditioning matrix J_t^{−1}, namely
$J_t = (1-\gamma_t)\,J_{t-1} + \gamma_t\,\mathbb{E}_{y\sim p(y|\hat y_t)}\left[\left(\frac{\partial \ell_t(y)}{\partial\theta}\right)^{\top}\left(\frac{\partial \ell_t(y)}{\partial\theta}\right)\right]$ (1.7)
$\theta_t = \theta_{t-1} - \eta_t\,J_t^{-1}\left(\frac{\partial \ell_t(y_t)}{\partial\theta}\right)^{\top}$ (1.8)
with ŷ_t = h(θ_{t−1}, u_t), the derivatives taken at θ = θ_{t−1}, learning rate η_t, and Fisher matrix decay rate γ_t.
In the Fisher matrix update, the expectation over all possible values y ∼ p(y|ŷ) can often be computed algebraically, but this is sometimes computationally bothersome (for instance, in neural networks, it requires dim(ŷ t ) distinct backpropagation steps [Oll15]). A common solution [APF00, LMB07, Oll15, PB13] is to just use the value y = y t (outer product approximation) instead of the expectation over y. Another is to use a Monte Carlo approximation with a single sample of y ∼ p(y|ŷ t ) [Oll15,MCO16], namely, using the gradient of a synthetic sample instead of the actual observation y t in the Fisher matrix. These latter two solutions are often confused; only the latter provides an unbiased estimate, see discussion in [Oll15,PB13].
The online "smoothed" update of the Fisher matrix in (1.7) mixes past and present estimates (this or similar updates are used in [LMB07,MCO16]). The reason is at least twofold. First, the "genuine" Fisher matrix involves an expectation over the inputs u t [AN00, §8.2]: this can be approximated online only via a moving average over inputs (e.g., γ t = 1/t realizes an equal-weight average over all inputs seen so far). Second, the expectation over y ∼ p(y|ŷ t ) in (1.7) is often replaced with a Monte Carlo estimation with only one value of y, and averaging over time compensates for this Monte Carlo sampling.
As a consequence, since θ t changes over time, this means that the estimate J t mixes values obtained at different values of θ, and converges to the Fisher matrix only if θ t changes slowly, i.e., if η t → 0. The correspondence below with Kalman filtering suggests using γ t = η t .
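A minimal sketch of the resulting procedure, using the single-sample Monte Carlo variant of (1.7) discussed above (the generic interface and all function names are ours, not from the text):

```python
import numpy as np

def online_natural_gradient(theta, data, predict, grad_loss, sample_output,
                            eta=lambda t: 1.0 / (t + 1), gamma=None, J0=None):
    """predict(theta, u) -> y_hat; grad_loss(theta, u, y) -> d loss / d theta (1-D);
    sample_output(y_hat) -> a sample y ~ p(y | y_hat)."""
    gamma = gamma or eta                       # the Kalman correspondence suggests gamma_t = eta_t
    J = np.eye(len(theta)) if J0 is None else J0.copy()
    for t, (u, y) in enumerate(data, start=1):
        y_hat = predict(theta, u)
        g_mc = grad_loss(theta, u, sample_output(y_hat))         # Monte Carlo Fisher term
        J = (1 - gamma(t)) * J + gamma(t) * np.outer(g_mc, g_mc)  # (1.7), metric updated first
        g = grad_loss(theta, u, y)
        theta = theta - eta(t) * np.linalg.solve(J, g)            # (1.8)
    return theta, J
```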
Kalman filtering for parameter estimation
One possible definition of the extended Kalman filter is as follows [Sim06, §15.1]. We are trying to estimate the current state of a dynamical system s_t whose evolution equation is known but whose precise value is unknown; at each time step, we have access to a noisy measurement y_t of a quantity ŷ_t = h(s_t) which depends on this state. The Kalman filter maintains an approximation of a Bayesian posterior on s_t given the observations y_1, . . . , y_t. The posterior distribution after t observations is approximated by a Gaussian with mean s_t and covariance matrix P_t. (Indeed, Bayesian posteriors always tend to Gaussians asymptotically under mild conditions, by the Bernstein-von Mises theorem [vdV00].) The Kalman filter prescribes a way to update s_t and P_t when new observations become available.
The Kalman filter update is summarized in Definition 5 below. It is built to provide the exact value of the Bayesian posterior in the case of linear dynamical systems with Gaussian measurements and a Gaussian prior. In that sense, it is exact at first order.
The Kalman filtering viewpoint on a statistical learning problem is that we are facing a system with hidden variable θ, with an unknown value that does not evolve in time, and that the observations y t bring more and more information on θ. Thus, a statistical learning problem can be tackled by applying the extended Kalman filter to the unknown variable s t = θ, whose underlying dynamics from time t to time t + 1 is just to remain unchanged (f = Id and noise on s is 0 in Definition 5). In such a setting, the posterior covariance matrix P t will generally tend to 0 as observations accumulate and the parameter is identified better 3 (this occurs at rate 1/t for the basic filter, which estimates from all t past observations at time t, or at other rates if fading memory is included, see below). The initialization θ 0 and its covariance P 0 can be interpreted as Bayesian priors on θ [SW88,LS83].
We will refer to this as a static Kalman filter. In the static case and without fading memory, the posterior covariance P t after t observations will decrease like O(1/t), so that the parameter gets updated by O(1/t) after each new observation. Introducing fading memory for past observations (equivalent to adding noise on θ at each step, Q t ∝ P t|t−1 in Def. 5) leads to a larger covariance and faster updates.
An example: Feedforward neural networks. The Kalman approach above can be applied to any parametric statistical model. For instance [SW88] treat the case of a feedforward neural network. In our setting this is described as follows. Let u be the input of the model and y the true (desired) output. A feedforward neural network can be described as a functionŷ = h(θ, u) where θ is the set of all parameters of the network, where h represents all computations performed by the network on input u, andŷ encodes the network prediction for the value of the output y on input u. For categorical observations y,ŷ is usually a set of predicted probabilities for all possible classes; while for regression problems,ŷ is directly the predicted value. In both cases, the error function to be minimized can be defined as ℓ(y) := − ln p(y|ŷ): in the regression case,ŷ is interpreted as a mean of a Gaussian model on y, so that − ln p(y|ŷ) is the square error up to a constant.
Training the neural network amounts to estimating the network parameter θ from the observations. Applying a static Kalman filter for this problem [SW88] amounts to using Def. 5 with s = θ, f = Id and Q = 0. At first glance this looks quite different from the common gradient descent (backpropagation) approach for neural networks. The backpropagation operation is represented in the Kalman filter by the computation of H = ∂h(s,u) ∂s (2.17) where s is the parameter. We show that the additional operations of the Kalman filter correspond to using a natural gradient instead of a vanilla gradient.
Unfortunately, for models with high-dimensional parameters such as neural networks, the Kalman filter is computationally costly and requires blockdiagonal approximations for P t (which is a square matrix of size dim θ); moreover, computing H t = ∂ŷ t /∂θ is needed in the filter, and requires doing one separate backpropagation for each component of the outputŷ t .
Natural gradient as a Kalman filter: the static (i.i.d.) case
We now write the explicit correspondence between an online natural gradient to estimate the parameter of a statistical model from i.i.d. observations, and a static extended Kalman filter. We first give a heuristic argument that outlines the main ideas from the proof (Section 2.1). Then we state the formal correspondences. First, the static Kalman filter corresponds to an online natural gradient with learning rate 1/t (Thm. 2). The rate 1/t arises because such a filter takes into account all previous evidence without decay factors (and with process noise Q = 0 in the Kalman filter), thus the posterior covariance matrix decreases like O(1/t). Asymptotically, this is the optimal rate in statistical learning [Ama98]. (Note, however, that the online natural gradient and extended Kalman filter are identical at every time step, not only asymptotically.) The 1/t rate is often too slow in practical applications, especially when starting far away from an optimal parameter value. The natural gradient/Kalman filter correspondence is not specific to the O(1/t) rate. Larger learning rates in the natural gradient correspond to a fading memory Kalman filter (adding process noise Q proportional to the posterior covariance at each step, corresponding to a decay factor for the weight of previous observations); this is Proposition 3. In such a setting, the posterior covariance matrix in the Kalman filter does not decrease like O(1/t); for instance, a fixed decay factor for the fading memory corresponds to a constant learning rate.
Finally, a fading memory in the Kalman filter may erase prior Bayesian information (θ 0 , P 0 ) too fast; maintaining the weight of the prior in a fading memory Kalman filter is treated in Proposition 4 and corresponds, on the natural gradient side, to a so-called weight decay [Bis06] towards θ 0 together with a regularization of the Fisher matrix, at specific rates.
Natural gradient as a Kalman filter: heuristics
As a first ingredient in the correspondence, we interpret Kalman filters as gradient descents: the extended Kalman filter actually performs a gradient descent on the log-likelihood of each new observation, with preconditioning matrix equal to the posterior covariance matrix. This is Proposition 6 below. This relies on having an exponential family as the output noise model.
Meanwhile, the natural gradient uses the Fisher matrix as a preconditioning matrix. The Fisher matrix is the average Hessian of log-likelihood, thanks to the classical double definition of the Fisher matrix as square gradient or Hessian, for any probabilistic model p(y|θ) [Kul97]. Assume that the probability of the data given the parameter θ is approximately Gaussian, p(y 1 , . . . , y t |θ) ∝ exp(−(θ − θ * ) ⊤ Σ −1 (θ − θ * )) with covariance Σ. This often holds asymptotically thanks to the Bernstein-von Mises theorem; moreover, the posterior covariance Σ typically decreases like 1/t. Then the Hessian (w.r.t. θ) of the total log-likelihood of (y 1 , . . . , y t ) is Σ −1 , the inverse covariance of θ. So the average Hessian per data point, the Fisher matrix J, is approximately J ≈ Σ −1 /t. Since a Kalman filter to estimate θ is essentially a gradient descent preconditioned with Σ, it will be the same as using a natural gradient with learning rate 1/t. Using a fading memory Kalman filter will estimate Σ from fewer past observations and provide larger learning rates.
Another way to understand the link between natural gradient and Kalman filter is as a second-order Taylor expansion of data log-likelihood. Assume that the total data negative log-likelihood at time t, $L_t(\theta) := -\sum_{s=1}^{t} \ln p(y_s|\theta)$, is approximately quadratic as a function of θ, with a minimum at θ*_t and a Hessian h_t. Then when new data points become available, this quadratic approximation would be updated as follows (online Newton method): $h_t = h_{t-1} + \frac{\partial^2 \ell_t(\theta^*_{t-1})}{\partial\theta^2}$ (2.1) and $\theta^*_t = \theta^*_{t-1} - h_t^{-1}\left(\frac{\partial \ell_t(\theta^*_{t-1})}{\partial\theta}\right)^{\top}$ (2.2), with ℓ_t(θ) := −ln p(y_t|θ); and indeed these are equalities for a quadratic log-likelihood. Namely, the update of θ*_t is a gradient ascent on log-likelihood, preconditioned by the inverse Hessian (Newton method). Note that h_t grows like t (each data point adds its own contribution). Thus, h_t is t times the empirical average of the Hessian, i.e., approximately t times the Fisher matrix of the model (h_t ≈ tJ). So this update is approximately a natural gradient descent with learning rate 1/t.
Meanwhile, the Bayesian posterior on θ (with uniform prior) after observations y 1 , . . . , y t is proportional to e −Lt by definition of L t .
For a quadratic L_t, this is a Gaussian distribution centered at θ*_t with covariance matrix h_t^{−1}. The Kalman filter is built to maintain an approximation P_t of this covariance matrix h_t^{−1}, and then performs a gradient step preconditioned on P_t similar to (2.2).
The simplest situation corresponds to an asymptotic rate O(1/t), i.e., estimating the parameter based on all past evidence; the update (2.1) of the Hessian is additive, so that h t grows like t and h −1 t in (2.2) produces an effective learning rate O(1/t). Introducing a decay factor for older observations, multiplying the term h t−1 in (2.1), produces a fading memory effect and results in larger learning rates.
These heuristics justify the statement from [LS83] that "there is only one recursive identification method". Close to an optimum (so that the Hessian is positive), all second-order algorithms are essentially an online Newton step (2.1)-(2.2) approximated in various ways.
But even though this heuristic argument appears to be approximate or asymptotic, the correspondence between online natural gradient and Kalman filter presented below is exact at every time step.
Statement of the correspondence, static (i.i.d.) case
For the statement of the correspondence, we assume that the output noise on y givenŷ is modelled by an exponential family with mean parameterŷ. This covers the traditional Gaussian case y = N (ŷ, Σ) with fixed Σ often used in Kalman filters. The Appendix contains necessary background on exponential families.
Theorem 2 (Natural gradient as a static Kalman filter). These two algorithms are identical under the correspondence (θ t , J t ) ↔ (s t , P −1 t /(t + 1)): 1. The online natural gradient (Def. 1) with learning rates η t = γ t = 1/(t + 1), applied to learn the parameter θ of a model that predicts observations (y t ) with inputs (u t ), using a probabilistic model y ∼ p(y|ŷ) withŷ = h(θ, u), where h is any model and p(y|ŷ) is an exponential family with mean parameterŷ.
2. The extended Kalman filter (Def. 5) to estimate the state s from observations (y t ) and inputs (u t ), using a probabilistic model y ∼ p(y|ŷ) withŷ = h(s, u) and p(y|ŷ) an exponential family with mean parameterŷ, with static dynamics and no added noise on s (f (s, u) = s and Q = 0 in Def. 5).
The correspondence is exact only if the Fisher metric is updated before the parameter in the natural gradient descent (as in Definition 1).
The correspondence with a Kalman filter provides an interpretation for various hyper-parameters of online natural gradient descent. In particular, J_0 = P_0^{−1} can be interpreted as the inverse covariance of a Bayesian prior on θ [SW88]. This relates the initialization J_0 of the Fisher matrix to the initialization of θ: for instance, in neural networks it is recommended to initialize the weights according to a Gaussian of covariance diag(1/fan-in) (number of incoming weights) for each neuron; interpreting this as a Bayesian prior on weights, one may recommend to initialize the Fisher matrix to the inverse of this covariance, namely, J_0 = diag(fan-in). Indeed this seemed to perform quite well in small-scale experiments.
Learning rates, fading memory, and metric decay rate. Theorem 2 exhibits a 1/(t + 1) learning rate for the online natural gradient. This is because the static Kalman filter for i.i.d. observations approximates the maximum a posteriori (MAP) of the parameter θ based on all past observations; MAP and maximum likelihood estimators change by O(1/t) when a new data point is observed. However, for nonlinear systems, optimality of the 1/t rate only occurs asymptotically, close enough to the optimum. In general, a 1/(t+1) learning rate is far from optimal if optimization does not start close to the optimum or if one is not using the exact Fisher matrix J t or covariance matrix P t .
Larger effective learning rates are achieved thanks to so-called "fading memory" variants of the Kalman filter, which put less weight on older observations. For instance, one may multiply the log-likelihood of previous points by a forgetting factor (1−λ t ) before each new observation. This is equivalent to an additional step P t−1 ← P t−1 /(1 − λ t ) in the Kalman filter, or to the addition of an artificial process noise Q t proportional to P t−1 in the model. Such strategies are reported to often improve performance, especially when the data do not truly follow the model [Sim06, §5.5, §7.4], [Hay01, §5.2.2]. See for instance [Ber96] for the relationship between Kalman fading memory and gradient descent learning rates (in a particular case).
Proposition 3 (Natural gradient rates and fading memory). Under the same model and assumptions as in Theorem 2, the following two algorithms are identical via the correspondence (θ_t, J_t) ↔ (s_t, η_t P_t^{−1}):
• An online natural gradient step with learning rate η_t and metric decay rate γ_t;
• A fading memory Kalman filter with an additional step P_{t−1} ← P_{t−1}/(1 − λ_t) before the transition step; such a filter iteratively optimizes a weighted log-likelihood function L_t of recent observations, with decay (1 − λ_t) at each step, namely $L_t(\theta) = \ln p(y_t|\hat y_t(\theta)) + (1-\lambda_t)\,L_{t-1}(\theta)$;
provided the following relations are satisfied: $\gamma_t = \eta_t$, $1 - \lambda_t = \eta_{t-1}\,(1/\eta_t - 1)$, and $P_0 = \eta_0 J_0^{-1}$ (2.6). For example, taking η_t = 1/(t + cst) corresponds to λ_t = 0, no decay for older observations, and an initial covariance P_0 = J_0^{−1}/cst. Taking a constant learning rate η_t = η_0 corresponds to a constant decay factor λ = η_0.
The proposition above computes the fading memory decay factors 1 − λ t from the natural gradient learning rates η t via (2.6). In the other direction, one can start with the decay factors λ t and obtain the learning rates η t via the cumulated sum of weights S t : S 0 := 1/η 0 then S t := (1 − λ t )S t−1 + 1, then η t := 1/S t . This clarifies how λ t = 0 corresponds to η t = 1/(t + cst) where the constant is S 0 .
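In code, this recursion is a one-liner (a small sketch):

```python
def rates_from_decay(lambdas, eta0):
    S = 1.0 / eta0                       # cumulated sum of weights S_0 = 1/eta_0
    etas = []
    for lam in lambdas:
        S = (1 - lam) * S + 1
        etas.append(1.0 / S)
    return etas

print(rates_from_decay([0.0] * 5, eta0=1.0))   # 1/2, 1/3, ... : eta_t = 1/(t + S_0)
print(rates_from_decay([0.1] * 5, eta0=0.1))   # stays at 0.1: constant rate <-> lambda = eta
```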
The learning rates also control the weight given to the Bayesian prior and to the starting point θ 0 . For instance, with η t = 1/(t + t 0 ) and large t 0 , the gradient descent will move away slowly from θ 0 ; in the Kalman interpretation this corresponds to λ t = 0 and a small initial covariance P 0 = J −1 0 /t 0 around θ 0 , so that the prior weighs as much as t 0 observations. This result suggests to set γ t = η t in the online natural gradient descent of Definition 1. The intuitive explanation for this setting is as follows: Both the Kalman filter and the natural gradient build a second-order approximation of the log-likelihood of past observations as a function of the parameter θ, as explained in Section 2.1. Using a fading memory corresponds to putting smaller weights on past observations; these weights affect the first-order and the second-order parts of the approximation in the same way. In the gradient viewpoint, the learning rate η t corresponds to the first-order term (comparing (1.8) and (2.2)) while the Fisher matrix decay rate corresponds to the rate at which the second-order information is updated. Thus, the setting η t = γ t in the natural gradient corresponds to using the same decay weights for the first-order and second-order expansion of the log-likelihood of past observations.
Still, one should keep in mind that the extended Kalman filter is itself only an approximation for nonlinear systems. Moreover, from a statistical point of view, the second-order object J t is higher-dimensional than the firstorder information, so that estimating J t based on more past observations may be more stable. Finally, for large-dimensional problems the Fisher matrix is always approximated, which affects optimality of the learning rates. So in practice, considering γ t and η t as hyperparameters to be tuned independently may still be beneficial, though γ t = η t seems a good place to start.
Regularization of the Fisher matrix and Bayesian priors.
A potential downside of fading memory in the Kalman filter is that the Bayesian interpretation is partially lost, because the Bayesian prior is forgotten too quickly. For instance, with a constant learning rate, the weight of the Bayesian prior decreases exponentially; likewise, with η t = O(1/ √ t), the filter essentially works with the O( √ t) most recent observations, while the weight of the prior decreases like ≈ e − √ t (as does the weight of the earliest observations; this is the product (1 − λ t )). But precisely, when working with fewer data points one may wish the prior to play a greater role.
The Bayesian interpretation can be restored by explicitly optimizing a combination of the log-likelihood of recent points, and the log-likelihood of the prior. This is implemented in Proposition 4.
From the natural gradient viewpoint, this translates both as a regularization of the Fisher matrix (often useful in practice to numerically stabilize its inversion) and of the gradient step. With a Gaussian prior N(θ_prior, Id), this manifests as an additional step towards θ_prior and adding ε·Id to the Fisher matrix, known respectively as weight decay and Tikhonov regularization [Bis06, §3.3, §5.5] in statistical learning.
Proposition 4 (Bayesian regularization of the Fisher matrix). Let π = N(θ_prior, Σ_0) be a Gaussian prior on θ. Under the same model and assumptions as in Theorem 2, the following two algorithms are equivalent:
• A modified fading memory Kalman filter that iteratively optimizes L_t(θ) + n_prior ln π(θ) where L_t is a weighted log-likelihood function of recent observations with decay (1 − λ_t), initialized with $P_0 = \frac{\eta_1}{1+n_{\mathrm{prior}}\eta_1}\Sigma_0$.
• A regularized online natural gradient step with learning rate η_t and metric decay rate γ_t, initialized with J_0 = Σ_0^{−1}, provided the following relations are satisfied.
Thus, the regularization terms are fully determined by choosing the learning rates η_t, a prior such as N(0, 1/fan-in) (for neural networks), and a value of n_prior such as n_prior = 1 (the prior weighs as much as n_prior data points). This holds both for regularization of the Fisher matrix J_t + η_t n_prior Σ_0^{−1}, and for regularization of the parameter via the extra gradient step λ_t n_prior Σ_0^{−1} (θ − θ_prior). The relative strength of regularization in the Fisher matrix decreases like η_t. In particular, a constant learning rate results in a constant regularization.
The added gradient step λ t n prior Σ −1 0 (θ − θ prior ) is modulated by λ t which depends on η t ; this extra term pulls towards the prior θ prior . The Bayesian viewpoint guarantees that this extra term will not ultimately prevent convergence of the gradient descent (as the influence of the prior vanishes when the number of observations increases).
It is not clear how much these recommendations for natural gradient descent coming from its Bayesian interpretation are sensitive to using only an approximation of the Fisher matrix.
Proofs for the static case
The proof of Theorem 2 starts with the interpretation of the Kalman filter as a gradient descent (Proposition 6).
We first recall the exact definition and the notation we use for the extended Kalman filter.
Definition 5 (Extended Kalman filter). Consider a dynamical system with state s_t, inputs u_t and outputs y_t, with dynamics $s_t = f(s_{t-1}, u_t) + \mathcal{N}(0, Q_t)$ and observation model $y_t \sim p(\cdot|\hat y_t)$, $\hat y_t = h(s_t, u_t)$, where p(·|ŷ) denotes an exponential family with mean parameter ŷ (e.g., y = N(ŷ, R) with fixed covariance matrix R).
The extended Kalman filter for this dynamical system estimates the current state s t given observations y 1 , . . . , y t in a Bayesian fashion. At each time, the Bayesian posterior distribution of the state given y 1 , . . . , y t is approximated by a Gaussian N (s t , P t ) so that s t is the approximate maximum a posteriori, and P t is the approximate posterior covariance matrix. (The prior is N (s 0 , P 0 ) at time 0.) Each time a new observation y t is available, these estimates are updated as follows.
The transition step (before observing y_t) is
$s_{t|t-1} = f(s_{t-1}, u_t)$, $P_{t|t-1} = \frac{\partial f}{\partial s}\,P_{t-1}\,\frac{\partial f}{\partial s}^{\top} + Q_t$,
and the observation step after observing y_t is
$\hat y_t = h(s_{t|t-1}, u_t)$, $E_t = T(y_t) - \hat y_t$, $R_t = \mathrm{Cov}\,(T(y)\,|\,\hat y_t)$
(these are just the error E_t = y_t − ŷ_t and the covariance matrix R_t = R for a Gaussian model y = N(ŷ, R) with known R),
$H_t = \frac{\partial h(s, u_t)}{\partial s}\Big|_{s=s_{t|t-1}}$ (2.17), $K_t = P_{t|t-1} H_t^{\top}\,(H_t P_{t|t-1} H_t^{\top} + R_t)^{-1}$ (2.18), $P_t = (\mathrm{Id} - K_t H_t)\,P_{t|t-1}$ (2.19), $s_t = s_{t|t-1} + K_t E_t$ (2.20).
For non-Gaussian output noise, the definition of E_t and R_t above via the mean parameter ŷ of an exponential family differs from the practice of modelling non-Gaussian noise via a nonlinear function applied to Gaussian noise. This allows for a straightforward treatment of various output models, such as discrete outputs or Gaussians with unknown variance. In the Gaussian case with known variance our definition is fully standard. 4 The proof starts with the interpretation of the Kalman filter as a gradient descent preconditioned by P_t. Compare this result and Lemma 9 to [Hay01, (5.68)-(5.73)].
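A compact sketch of one filter step in the form just described (standard extended Kalman filter equations with the measurement error written as T(y) − ŷ; all function names are ours):

```python
import numpy as np

def ekf_step(s, P, u, y, f, F_jac, Q, h, H_jac, T, R_of):
    # transition step
    s_pred = f(s, u)
    F = F_jac(s, u)
    P_pred = F @ P @ F.T + Q
    # observation step
    y_hat = h(s_pred, u)
    H = H_jac(s_pred, u)
    R = R_of(y_hat)                  # covariance of the sufficient statistics given y_hat
    E = T(y) - y_hat                 # measurement error
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    s_new = s_pred + K @ E
    P_new = (np.eye(len(s)) - K @ H) @ P_pred
    return s_new, P_new
```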
Proposition 6 (Kalman filter as preconditioned gradient descent). The update of the state s in a Kalman filter can be seen as an online gradient descent on data log-likelihood, with preconditioning matrix P_t. More precisely, denoting ℓ_t(y) := − ln p(y|ŷ_t), the update (2.20) is equivalent to s_t = s_{t|t−1} − P_t (∂ℓ_t(y_t)/∂s)^⊤, where in the derivative, ℓ_t depends on s_{t|t−1} via ŷ_t = h(s_{t|t−1}, u_t).
Lemma 7 (Errors and gradients). When the output model is an exponential family with mean parameter ŷ_t, the error E_t is related to the gradient of the log-likelihood of the observation y_t with respect to the prediction ŷ_t by E_t = −R_t (∂ℓ_t(y_t)/∂ŷ_t)^⊤. For a Gaussian y_t = N(ŷ_t, R), this is just a direct computation. For a general exponential family, consider the natural parameter β of the exponential family which defines the law of y, namely, p(y|β) = exp(Σ_i β_i T_i(y))/Z(β) with sufficient statistics T_i and normalizing constant Z. An elementary computation (Appendix, (A.3)) shows that ∂ ln p(y|β)/∂β = T(y) − ŷ, by definition of the mean parameter ŷ. Thus ∂ℓ_t(y_t)/∂β = ŷ_t − T(y_t), where the derivative is with respect to the natural parameter β. To express the derivative with respect to ŷ, we apply the chain rule and use the fact that, for exponential families, the Jacobian matrix of the mean parameter ∂ŷ/∂β is equal to the covariance matrix R_t of the sufficient statistics (Appendix, (A.11) and (A.6)).
Lemma 8. The extended Kalman filter satisfies K_t = P_t H_t^⊤ R_t^{-1}.
Proof of the lemma. This relation is known, e.g., [Sim06, (6.34)]. Indeed, it follows by expanding the definition of K_t in the update P_t = (Id − K_t H_t) P_{t|t−1}. Proof of Proposition 6. By definition of the Kalman filter we have s_t = s_{t|t−1} + K_t E_t. By Lemma 7, K_t E_t = −K_t R_t (∂ℓ_t(y_t)/∂ŷ_t)^⊤ = −P_t H_t^⊤ (∂ℓ_t(y_t)/∂ŷ_t)^⊤ by Lemma 8. But by the definition of H_t, ∂ℓ_t/∂s = (∂ℓ_t/∂ŷ_t) H_t, so that K_t E_t = −P_t (∂ℓ_t(y_t)/∂s)^⊤. The first part of the next lemma is known as the information filter in the Kalman filter literature, and states that the observation step for P is additive when considered on P^{-1} [Sim06, §6.2]: after each observation, the Fisher information matrix of the latest observation is added to P^{-1}.
Lemma 9 (Information filter). The update (2.18)-(2.19) of P_t in the extended Kalman filter is equivalent to P_t^{-1} = P_{t|t−1}^{-1} + H_t^⊤ R_t^{-1} H_t (assuming P_{t|t−1} and R_t are invertible). In particular, for static dynamical systems (f(s, u) = s and Q_t = 0), the whole extended Kalman filter (2.12)-(2.20) is equivalent to P_t^{-1} = P_{t−1}^{-1} + H_t^⊤ R_t^{-1} H_t, s_t = s_{t−1} − P_t (∂ℓ_t(y_t)/∂s)^⊤. Proof.
The first statement is well-known for Kalman filters [Sim06, (6.33)]. Indeed, it is obtained by expanding the definition of K_t in the update (2.19) of P_t. The second statement follows from Proposition 6 and the fact that for f(s, u) = s, the transition step of the Kalman filter is just s_{t|t−1} = s_{t−1} and P_{t|t−1} = P_{t−1}.
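To make the information-filter identity concrete, here is a small numerical sketch (NumPy, with arbitrary toy dimensions and matrices chosen purely for illustration) checking that the covariance form of the observation step and the additive update on P^{-1} give the same posterior covariance.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 4, 2                                   # state and observation dimensions (toy values)
P = np.eye(d)                                 # prior covariance P_{t|t-1}
H = rng.normal(size=(k, d))                   # observation Jacobian H_t
R = np.eye(k)                                 # output covariance R_t

# covariance form of the observation step
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
P_cov = (np.eye(d) - K @ H) @ P

# information form: add the Fisher information H^T R^{-1} H to P^{-1}
P_info = np.linalg.inv(np.linalg.inv(P) + H.T @ np.linalg.inv(R) @ H)

print(np.allclose(P_cov, P_info))             # True
```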
Lemma 10. For exponential families p(y|ŷ), the term H ⊤ t R −1 t H t appearing in Lemma 9 is equal to the Fisher information matrix of y with respect to the state s, where ℓ t (y) = − ln p(y|ŷ t ) depends on s viaŷ = h(s, u).
Proof. Let us omit time indices for brevity. We have ∂ℓ(y)/∂s = (∂ℓ(y)/∂ŷ) H by the chain rule. Consequently, E_y (∂ℓ(y)/∂s)^⊗2 = H^⊤ E_y (∂ℓ(y)/∂ŷ)^⊗2 H, where E_y (∂ℓ(y)/∂ŷ)^⊗2 is the Fisher matrix of the random variable y with respect to ŷ. Now, for an exponential family y ∼ p(y|ŷ) in mean parameterization ŷ, the Fisher matrix with respect to ŷ is equal to the inverse covariance matrix of the sufficient statistics of y (Appendix, (A.16)), that is, R_t^{-1}.
Proof of Theorem 2. By induction on t. By the combination of Lemmas 9 and 10, the update of the Kalman filter with static dynamics (s_{t|t−1} = s_{t−1}) is P_t^{-1} = P_{t−1}^{-1} + E_{y∼p(y|ŷ_t)} (∂ℓ_t(y)/∂s)^⊗2, s_t = s_{t−1} − P_t (∂ℓ_t(y_t)/∂s)^⊤. Defining J_t = P_t^{-1}/(t + 1), this update is equivalent to J_t = (1 − 1/(t+1)) J_{t−1} + (1/(t+1)) E_{y∼p(y|ŷ_t)} (∂ℓ_t(y)/∂s)^⊗2, s_t = s_{t−1} − (1/(t+1)) J_t^{-1} (∂ℓ_t(y_t)/∂s)^⊤. Under the identification s_{t−1} ↔ θ_{t−1}, this is the online natural gradient update with learning rate η_t = 1/(t + 1) and metric update rate γ_t = 1/(t + 1).
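The equivalence can also be checked numerically on a toy linear-Gaussian model. The sketch below (illustrative only; the model, dimensions and synthetic data are arbitrary choices, while the rate convention η_t = γ_t = 1/(t+1) follows the statement above) runs the static extended Kalman filter and the online natural gradient side by side and verifies that their iterates coincide.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
R = 0.5 * np.eye(1)                           # known output covariance, y ~ N(u·theta, R)
theta_true = rng.normal(size=d)

s, P = np.zeros(d), np.eye(d)                 # Kalman state and covariance
theta, J = np.zeros(d), np.eye(d)             # natural-gradient parameter and metric, J_0 = P_0^{-1}

for t in range(1, 200):
    u = rng.normal(size=(1, d))
    y = u @ theta_true + rng.normal(scale=np.sqrt(R[0, 0]), size=1)

    # extended Kalman filter with static dynamics (s_{t|t-1} = s_{t-1}, Q = 0)
    H = u
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    s = s + K @ (y - H @ s)
    P = (np.eye(d) - K @ H) @ P

    # online natural gradient with eta_t = gamma_t = 1/(t+1)
    eta = gamma = 1.0 / (t + 1)
    fisher = H.T @ np.linalg.inv(R) @ H                  # exact Fisher of this observation
    J = (1 - gamma) * J + gamma * fisher
    grad = H.T @ np.linalg.inv(R) @ (y - H @ theta)      # = -dl/dtheta for the Gaussian loss
    theta = theta + eta * np.linalg.solve(J, grad)

print(np.allclose(s, theta))                  # True: the two trajectories coincide
```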
The proof of Proposition 3 is similar, with additional factors (1 − λ_t). Proposition 4 is proved by applying a fading memory Kalman filter to a modified log-likelihood L_0 := n_prior ln π(θ), L_t := ln p_θ(y_t) + (1 − λ_t) L_{t−1} + λ_t n_prior ln π(θ), so that the prior is kept constant in L_t.
3 Natural gradient as a Kalman filter: the state space (recurrent) case
Recurrent models, RTRL
Let us now consider non-memoryless models, i.e., models defined by a recurrent or state space equation ŷ_t = Φ(ŷ_{t−1}, θ, u_t) (3.1), with u_t the observations at time t. To save notation, here we dump into ŷ_t the whole state of the model, including both the part that contains the prediction about y_t and all state or internal variables (e.g., all internal and output layers of a recurrent neural network, not only the output layer). The state ŷ_t, or a part thereof, defines a loss function ℓ_t(y_t) := − ln p(y_t|ŷ_t) for each observation y_t. The current state ŷ_t can be seen as a function which depends on θ via the whole trajectory. The derivative of the current state with respect to θ can be computed inductively just by differentiating the recurrent equation (3.1) defining ŷ_t: ∂ŷ_t/∂θ = (∂Φ/∂ŷ)(ŷ_{t−1}, θ, u_t) · ∂ŷ_{t−1}/∂θ + (∂Φ/∂θ)(ŷ_{t−1}, θ, u_t). Real-time recurrent learning [Jae02] uses this equation to keep an estimate G_t of ∂ŷ_t/∂θ. RTRL then uses G_t to estimate the gradient of the loss function ℓ_t with respect to θ via the chain rule, ∂ℓ_t/∂θ = (∂ℓ_t/∂ŷ_t)(∂ŷ_t/∂θ) = (∂ℓ_t/∂ŷ_t) G_t.
Definition 11 (Real-time recurrent learning). Given a recurrent model ŷ_t = Φ(ŷ_{t−1}, θ_{t−1}, u_t), real-time recurrent learning (RTRL) learns the parameter θ via G_t = (∂Φ/∂ŷ)(ŷ_{t−1}, θ_{t−1}, u_t) G_{t−1} + (∂Φ/∂θ)(ŷ_{t−1}, θ_{t−1}, u_t), θ_t = θ_{t−1} − η_t ((∂ℓ_t(y_t)/∂ŷ_t) G_t)^⊤. Since θ changes at each step, the actual estimate G_t in RTRL is only an approximation of the gradient ∂ŷ_t/∂θ at θ = θ_t, valid in the limit of small learning rates η_t.
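To illustrate the recursion above, here is a minimal RTRL sketch for a small tanh recurrent network (the architecture, loss, learning rate and target sequence are arbitrary toy choices made for the example, not taken from the text; the parameter update is a plain SGD step rather than a natural gradient step).

```python
import numpy as np

rng = np.random.default_rng(1)
dim_y, dim_u = 4, 2                                     # toy sizes
theta = 0.1 * rng.normal(size=dim_y * (dim_y + dim_u))  # flattened weight matrix W
dim_theta = theta.size

def Phi(y_prev, theta, u):
    """One step of a tanh RNN: y_t = tanh(W [y_{t-1}; u_t])."""
    W = theta.reshape(dim_y, dim_y + dim_u)
    return np.tanh(W @ np.concatenate([y_prev, u]))

def jacobians(y_prev, theta, u):
    """Analytic dPhi/dy_prev and dPhi/dtheta for the tanh RNN above."""
    W = theta.reshape(dim_y, dim_y + dim_u)
    z = np.concatenate([y_prev, u])
    D = np.diag(1.0 - np.tanh(W @ z) ** 2)              # tanh' at the pre-activation
    dPhi_dy = D @ W[:, :dim_y]
    dPhi_dth = D @ np.kron(np.eye(dim_y), z)            # matches the row-major flattening of W
    return dPhi_dy, dPhi_dth

G = np.zeros((dim_y, dim_theta))                        # running estimate of dy_t/dtheta
y_state = np.zeros(dim_y)
eta = 0.01

for t in range(1, 201):
    u = rng.normal(size=dim_u)
    target = np.full(dim_y, np.sin(0.1 * t))            # arbitrary target sequence
    dPhi_dy, dPhi_dth = jacobians(y_state, theta, u)
    y_state = Phi(y_state, theta, u)
    G = dPhi_dy @ G + dPhi_dth                          # the RTRL update of G_t
    dl_dy = y_state - target                            # gradient of 0.5*||y - target||^2
    theta = theta - eta * G.T @ dl_dy                   # chain rule: dl/dtheta = dl/dy · G
```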
In practice, RTRL has a high computational cost due to the necessary storage of G t , a matrix of size dim θ × dimŷ. For large-dimensional models, backpropagation through time is usually preferred, truncated to a certain length in the past [Jae02]; [OTC15,TO17] introduce a low-rank, unbiased approximation of G t .
Statement of the correspondence, recurrent case
There are several ways in which a Kalman filter can be used to estimate θ for such recurrent models.
1. A first possibility is to view each ŷ_t as a function of θ via the whole trajectory, and to apply a Kalman filter on θ. This would require, in principle, recomputing the whole trajectory from time 0 to time t using the new value of θ at each step, and using RTRL to compute ∂ŷ_t/∂θ, which is needed in the filter. In practice, the past trajectory is not updated, and truncated backpropagation through time is used to approximate the derivative ∂ŷ_t/∂θ [Jae02, Hay01]. 2. A second possibility is to apply a joint extended Kalman filter to the pair (θ, ŷ_t), treating the parameter as part of the state. Intuitively, the joint Kalman filter maintains a covariance matrix on (θ, ŷ_t), whose off-diagonal term is the covariance between ŷ_t and θ. This term captures how the current state would change if another value of the parameter had been used. The decomposition (3.13) in the theorem makes this intuition precise in relation to RTRL: the Kalman covariance between ŷ_t and θ is directly given by the RTRL gradient G_t.
Theorem 12 (Kalman filter on (θ, ŷ) as RTRL + natural gradient + state correction). Consider a recurrent model ŷ_t = Φ(ŷ_{t−1}, θ_{t−1}, u_t). Assume that the observations y_t are predicted with a probabilistic model p(y|ŷ_t) that is an exponential family with mean parameter a subset of ŷ_t.
Given an estimate G_t of ∂ŷ_t/∂θ, and an observation y, denote g_t(y) := (∂ℓ_t(y)/∂ŷ_t) G_t the corresponding estimate of ∂ℓ_t(y)/∂θ. Then these two algorithms are equivalent: • The extended Kalman filter on the pair (θ, ŷ) with transition function (θ, ŷ) ↦ (θ, Φ(ŷ, θ, u_t)). • A natural gradient RTRL algorithm with learning rate η_t = 1/(t + 1), defined as follows. The state, RTRL gradient and Fisher matrix have a transition step ŷ_t = Φ(ŷ_{t−1}, θ_{t−1}, u_t), G_t = (∂Φ/∂ŷ) G_{t−1} + ∂Φ/∂θ, J_t = (1 − 1/(t+1)) J_{t−1} + (1/(t+1)) E_{y∼p(y|ŷ_t)} g_t(y)^⊗2 (3.9), and after observing y_t, the state and parameter are updated as θ_t = θ_{t−1} − η_t J_t^{-1} g_t(y_t)^⊤, ŷ_t ← ŷ_t + G_t (θ_t − θ_{t−1}) (3.12). Moreover, at each time t, the covariance matrix of the extended Kalman filter over (θ, ŷ) is related to G_t and J_t via P_t = [[P_t^θ, (G_t P_t^θ)^⊤], [G_t P_t^θ, G_t P_t^θ G_t^⊤]] with P_t^θ = η_t J_t^{-1}. (3.13) This result may explain an observation from [Wil92, §4.2] that RTRL can be obtained by introducing some drastic simplifications in the Kalman filter equations (changing the formula of the Kalman optimal gain and neglecting the covariance matrix update).
Again, the expectation for the Fisher matrix in (3.9) may be estimated by a Monte Carlo sample y ∼ p(y|ŷ t ), or by just using the current observation y = y t , as discussed after Definition 1.
As before, learning rates η t different from 1/(t + 1) can be obtained by introducing a fading memory (i.e., process noise Q proportional to P ) in the joint Kalman filter. We omit the statement for simplicity, but it is analogous to Propositions 3 and 4.
The algorithm above features a state update (3.12) together with the parameter update; this is not commonly used in online recurrent neural network algorithms. In small-scale experiments, we have not found any clear effect of this; besides, such state updates must be applied cautiously if the range of possible values for the stateŷ is somehow constrained.
In the result above, the Kalman filter is initialized with a covariance matrix in which every uncertainty comes from uncertainty on θ rather than the initial stateŷ 0 . This has the advantage of making the correspondence algebraically simple, but is not a fundamental restriction. If modelling an initial uncertainty onŷ 0 is important, one can always apply the theorem by incorporating the initial condition as an additional component of the parameter θ, with its own variance; in this case, G 0 must be initialized to Id on the corresponding component of θ, namely θ +init := (θ,ŷ 0 ) ⊤ , G 0 := ∂ŷ 0 ∂θ +init = (0, Id) (3.14) and then Theorem 12 can be applied to θ +init . Actually this operation is often not needed at all: indeed, if the dynamical system is such that the initial condition is forgotten reasonably quickly, then the initial covariance ofŷ 0 decreases (terms W in the proof below) and the Kalman covariance tends to the type (3.13) above exponentially fast, even without using θ +init . This is the case, for instance, for any stable linear dynamical system, as a consequence of Lemmas 13-14, and more generally for any system with geometric memory in the sense that ∂ŷt ∂ŷ t−1 is contracting for a fixed parameter and a given input.
The filtering literature contains updates similar to the above for G_t, but more complex [LS83, Hay01]; this is, first, because they are expressed over the variable Cov(ŷ_t, θ) = G_t J_t^{-1} instead of G_t alone, second, because we have initialized the uncertainty on ŷ_0 to 0, and, third, because in dual rather than joint filter approaches, higher-order terms depending on second derivatives appear. In terms of computational cost, for recurrent neural networks (RNNs), RTRL alone is already as costly as the joint Kalman filter [Wil92]. Indeed, RTRL requires (dim θ) forward tangent propagations at each step, each of which costs O(dim θ) for a standard RNN model [Jae02], thus for a total cost of O((dim θ)^2) per time step. The Fisher matrix is of size (dim θ)^2; if a single Monte Carlo sample y ∼ p(y|ŷ_t) is used, then the Fisher matrix update is rank-one and costs O((dim θ)^2); the update of the inverse Fisher matrix can be maintained at the same cost thanks to the Woodbury matrix identity (as done, e.g., in [LMB07]). Thus, if RTRL is computationally affordable, there is little point in not using the Fisher matrix on top.
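The rank-one maintenance of the inverse Fisher matrix mentioned here can be written in a few lines; the sketch below (illustrative NumPy with arbitrary toy values) applies the Sherman–Morrison form of the Woodbury identity to the decayed rank-one update J ← (1 − γ) J + γ g g^⊤ and checks it against a direct inversion.

```python
import numpy as np

def woodbury_rank_one_update(J_inv, g, gamma):
    """Return the inverse of (1-gamma)*J + gamma*g g^T given J_inv = J^{-1},
    via Sherman-Morrison, at O(d^2) cost instead of O(d^3)."""
    A_inv = J_inv / (1.0 - gamma)                 # inverse of (1-gamma) J
    Ag = A_inv @ g
    return A_inv - gamma * np.outer(Ag, Ag) / (1.0 + gamma * g @ Ag)

# quick consistency check against a direct inversion
rng = np.random.default_rng(2)
d = 5
J = np.eye(d) + 0.1 * rng.normal(size=(d, d))
J = J @ J.T                                       # make it symmetric positive definite
g = rng.normal(size=d)
gamma = 0.05
direct = np.linalg.inv((1 - gamma) * J + gamma * np.outer(g, g))
fast = woodbury_rank_one_update(np.linalg.inv(J), g, gamma)
print(np.allclose(direct, fast))                  # True
```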
Proofs for the recurrent case
We now turn to the proof for the recurrent case, involving a joint Kalman filter on (θ,ŷ). The key is to decompose the Kalman covariance matrix of the pair (θ,ŷ) into three variables (3.17): the covariance of θ, the correlation between θ andŷ, and the part of the covariance ofŷ that does not come from its correlation with θ (its so-called Schur complement). This provides a nice expression for the transition step of the Kalman filter (Lemma 13).
Then we find that the correlation between θ andŷ is exactly the gradient G = ∂ŷ ∂θ maintained by RTRL (Corollary 15); meanwhile, we find θ and its covariance essentially follow a standalone Kalman filter related to the observations via G, which is a natural gradient for the same reasons as in the static case.
In the recurrent case, we are applying an extended Kalman filter to the state s = (θ, ŷ)^⊤ with transition function f = (θ, Φ(ŷ, θ, u))^⊤. Let us decompose the covariance matrix P_t of this system into blocks corresponding to θ and ŷ, P_t = [[P_θ, P_θŷ^⊤], [P_θŷ, P_ŷ]]. From now on, for simplicity we omit the time indices when no ambiguity is present.
By the theory of Schur complements for positive-semidefinite matrices [BV04, Appendix A.5.5], letting P^+ be any generalized inverse of P_θ, we know that P_ŷ − P_θŷ P^+ P_θŷ^⊤ is positive-semidefinite and that P_θŷ(Id − P^+ P_θ) = 0. The latter rewrites as P_θŷ = P_θŷ P^+ P_θ. Let us set W := P_ŷ − P_θŷ P^+ P_θŷ^⊤, G := P_θŷ P^+ (3.16). Then P_θŷ = G P_θ and W = P_ŷ − G P_θ G^⊤. Thus, at each time t we can decompose P_t as P_t = [[P_θ, (G P_θ)^⊤], [G P_θ, G P_θ G^⊤ + W]] (3.17) without loss of generality, where W is positive-semidefinite. This decomposition tells us which part of the covariance of the current state ŷ comes from the covariance of the parameter θ via the dynamics of the system. First, we will show that if W_0 = 0, then W_t = 0 for all t, and that in this case G_t satisfies the RTRL equation.
Lemma 13. Consider the extended Kalman filter on the pair s = (θ, ŷ)^⊤ with transition function f = (θ, Φ(ŷ, θ, u))^⊤ and no added noise (Q_t = 0). Then the Kalman transition step (2.13) on P, expressed in the decomposition (3.17), is equivalent to P_θ ← P_θ, G ← (∂Φ/∂ŷ) G + ∂Φ/∂θ, W ← (∂Φ/∂ŷ) W (∂Φ/∂ŷ)^⊤. This equation for G is the RTRL update.
Proof of the lemma. This is a direct computation using the Kalman transition step (2.13) for P . Indeed, the decomposition (3.17) of P rewrites as in that order, where R is given by (2.16). Moreover, if P θ or W are invertible then their respective updates are equivalent to and Thus, the updates for W , (3.19) and (3.23), are just the updates of an extended Kalman filter onŷ alone, with covariance matrix W and noise measurement R. The update for P θ is identical to an extended Kalman filter on θ where measurements are made onŷ, withŷ seen as a function of θ with derivative ∂ŷ/∂θ = G, and where the measurement noise onŷ is R + W (the measurement noise on y plus the covariance ofŷ). Thus, these two lemmas relate the joint Kalman filter on (θ,ŷ) to the dual Kalman filter that filters separately θ givenŷ andŷ given θ, together with an estimate of ∂ŷ/∂θ. As far as we could check, this decomposition is specific to a situation in which one component (the parameter) is supposed to have static underlying dynamics, θ t+1 = θ t .
Proof of the lemma. In our case, the function h of the extended Kalman filter is the function that sends (θ,ŷ) toŷ. In particular, H t = (0, Id).
First, if P θ and W are invertible, then the updates (3.22), (3.23) for P θ and W follow from the updates (3.25), (3.26) on their inverses, thanks to the Woodbury matrix identity. Since working on the inverses is simpler, we shall prove only the latter. Since (3.22), (3.23) are continuous in P θ and W , the non-invertible case follows by continuity.
Starting again with the decomposition of P t as a product (3.21), the inverse of P t is From Lemma 9, the Kalman observation udpate for P t amounts to adding To interpret this as an update on P θ , W and G, we have to introduce new variablesW ,G, andP θ such that (3.29) takes the original form (3.28) in these new variables.
Since we have definedW andG by identifying (3.29) with the original form (3.28), we haveWG = W G by construction. Thus Putting the last two lemmas side by side in the case W = 0, we obtain a much simpler update.
Corollary 15. Consider the extended Kalman filter on the pair s = (θ,ŷ) ⊤ with transition function f (s) = (θ, Φ(ŷ, θ, u)) ⊤ and no added noise (Q t = 0). Decompose the covariance P of the state s as in (3.17) using P θ , G, W . If W = 0 and P θ is invertible then performing the Kalman transition update followed by the observation update is equivalent to in that order.
From this, the end of the proof of Theorem 12 essentially proceeds as in the non-recurrent case. Since we initialize W to 0 in Theorem 12, we have W = 0 at all times. As before, for exponential families R^{-1} is equal to the Fisher matrix with respect to ŷ_t, namely, R^{-1} = E_{y∼p(y|ŷ)} (∂ℓ_t(y)/∂ŷ_t)^⊗2 (Appendix). Now, the term E_{y∼p(y|ŷ)} g_t(y)^⊗2 in the Fisher matrix update (3.9) uses g_t(y) = (∂ℓ_t(y)/∂ŷ_t) G_t (3.6) to estimate the derivative of the loss ℓ_t(y) with respect to θ. So the term G^⊤ R^{-1} G in (3.38) coincides with the Fisher matrix update term E_{y∼p(y|ŷ)} g_t(y)^⊗2 in (3.9). (Compare Lemma 10.) So if we just define J_t := η_t (P_t^θ)^{-1} with η_t = 1/(t + 1), the additive update (3.38) on (P^θ)^{-1} translates as the online Fisher matrix update (3.9) on J_t.
Moreover, since the Kalman gradient is an ordinary gradient preconditioned with the covariance matrix P_t (Proposition 6), the update of the pair (θ, ŷ) is (θ, ŷ) ← (θ, ŷ) − P_t (0, ∂ℓ_t(y_t)/∂ŷ_t)^⊤ (3.40) (indeed ℓ_t does not depend explicitly on θ in recurrent models, only via the current state ŷ_t). Given the decomposition P_t = [[P^θ, (G P^θ)^⊤], [G P^θ, G P^θ G^⊤]], this translates as θ_t = θ_{t−1} − P^θ g_t(y_t)^⊤ and ŷ_t ← ŷ_t + G_t (θ_t − θ_{t−1}), which is the update in Theorem 12.
A Appendix: reminder on exponential families
An exponential family of probability distributions on a variable x (discrete or continuous), with sufficient statistics T_1(x), . . . , T_K(x), is the following family of distributions, parameterized by β ∈ R^K: p_β(dx) = exp(Σ_k β_k T_k(x)) λ(dx) / Z(β), where Z(β) is a normalizing constant, and λ(dx) is any reference measure on x, such as the Lebesgue measure or any discrete measure. The family is obtained by varying the parameter β ∈ R^K, called the natural or canonical parameter. We will assume that the T_k are linearly independent as functions of x (and linearly independent from the constant function); this ensures that different values of β yield distinct distributions. For instance, Bernoulli distributions are obtained with λ the uniform measure on x ∈ {0, 1} and with a single sufficient statistic T(0) = 0, T(1) = 1. Gaussian distributions with a fixed variance are obtained with λ(dx) the Gaussian distribution centered on 0, and T(x) = x.
Another, often convenient parameterization of the same family is the following: each value of β gives rise to an average value T̄ of the sufficient statistics, T̄_k := E_{x∼p_β} T_k(x). For instance, for Gaussian distributions with fixed variance, this is the mean, and for a Bernoulli variable this is the probability to sample 1. Exponential families satisfy the identities ∂ ln Z(β)/∂β_k = T̄_k and ∂ ln p_β(x)/∂β_k = T_k(x) − T̄_k, by a simple computation [AN00, (2.33)]. These identities are useful to compute the Fisher matrix J_β with respect to the variable β, as follows [AN00, (3.59)]: J_β = E_{x∼p_β} [(T(x) − T̄)(T(x) − T̄)^⊤], or more synthetically J_β = Cov(T) (A.7), where the covariance is under the law p_β. That is, for exponential families the Fisher matrix is the covariance matrix of the sufficient statistics. In particular it can be estimated empirically, and is sometimes known algebraically.
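As a quick numerical illustration of J_β = Cov(T), the following sketch (toy values, not from the text) checks the identity for the Bernoulli family with natural parameter β (the log-odds), and also evaluates the Fisher matrix with respect to the mean parameter, which is the inverse covariance.

```python
import numpy as np

# Bernoulli as an exponential family: p(x|beta) ∝ exp(beta * x), x in {0, 1}
beta = 0.7
p = 1.0 / (1.0 + np.exp(-beta))          # mean parameter T̄ = E[T(x)] with T(x) = x

# Fisher matrix wrt beta via its definition E[(d ln p / d beta)^2],
# using d ln p(x|beta)/d beta = T(x) - T̄ = x - p
fisher_beta = sum(prob * (x - p) ** 2 for x, prob in [(0, 1 - p), (1, p)])
cov_T = p * (1 - p)                      # covariance of the sufficient statistic
print(np.isclose(fisher_beta, cov_T))    # True: J_beta = Cov(T)

# Fisher wrt the mean parameter is the inverse covariance: 1/(p(1-p)) for a Bernoulli
fisher_mean = 1.0 / cov_T
print(fisher_mean)
```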
In this work we need the Fisher matrix with respect to the mean parameterT , which in particular, can be estimated empirically. | 2018-08-27T18:45:10.000Z | 2017-03-01T00:00:00.000 | {
"year": 2017,
"sha1": "6e1b81051e4467b20ffaf45b465c555aa7ff0cda",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1214/18-ejs1468",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "6e1b81051e4467b20ffaf45b465c555aa7ff0cda",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
119399070 | pes2o/s2orc | v3-fos-license | $\Lambda$ Production at High Rapidity in d+Au Collisions at $\sqrt{s_{NN}} = 200$ GeV
We present first preliminary studies of $\Lambda$ and $\bar{\Lambda}$ production in the pseudorapidity region $2.5<|\eta|<4$, covered by the forward radial-drift TPCs (FTPCs) in STAR. The FTPCs provide momentum and charge determination but no particle identification, making the use of combinatorial methods and background subtraction necessary for $\Lambda$ identification. The $\bar{\Lambda}/\Lambda$ ratio measured at high rapidity is compared to the ratio obtained at mid-rapidity with the STAR TPC. Differences in the ratio at positive and negative rapidity point to an asymmetry in particle and antiparticle production in d+Au collisions.
I. INTRODUCTION
In relativistic heavy ion collisions at the highest available RHIC energies, the ratio of yields of antiparticles to particles reaches the highest values yet observed. Even the baryon ratios p̄/p and Λ̄/Λ reach values near unity, indicating an almost net baryon free environment at midrapidity. The antiparticle to particle ratios are observed to be flat as a function of rapidity (y) close to midrapidity [1,2].
Away from mid-rapidity, the baryon content of the beam nuclei comes into play, and, in addition to particleantiparticle pair production, other processes contribute significantly to the particle production. Measurements by the BRAHMS collaboration [1] show a significant drop of p/p in Au+Au collisions starting at y ∼ 1. In asymmetric collision systems, measurements at lower energies by NA49 with p+Pb collisions show different contributions of baryon number transfer for projectile and target rapidity regions due to multiple collisions suffered by the projectile nucleon, but not by the target [3].
In the present paper, first preliminary measurements of Λ̄/Λ at |y| ∼ 2.7 in d+Au collisions are presented.
II. DETECTOR AND ANALYSIS TECHNIQUE
The two radial-drift forward time projection chambers (FTPCs) [4] of the STAR [5] experiment permit the study of charged hadrons at forward rapidity in heavy ion collisions. This extends the acceptance of the spectrometer towards the fragmentation region and gives access to phenomena away from mid-rapidity. However, the FTPCs measure a maximum of 10 hits per track, which makes particle identification via energy loss measurements impossible with the current state of detector calibrations.
The two FTPCs west (positive rapidity) and east (negative rapidity) of the interaction point in STAR are well located to study asymmetries in d+Au collisions. In the 2003 d+Au beam time, the deuterons were entering STAR from the east, gold from the west. Hence the particle multiplicity in FTPC east is higher than in west, since particles produced from the gold nucleons preferentially continue towards the east.
Since the measurable decay mode of the Λ is Λ → p π (with a branching ratio of 64%), the lack of particle identification makes it necessary to use all positive particles as p candidates for the Λ case and all negative particles as p̄ candidates for the Λ̄ case. This introduces a considerable background to the measurement. Strict geometric cuts on the assumed daughter tracks and the resulting Λ candidate help to reduce the background. The most important cuts were on the distance of closest approach (dca) to the primary vertex of the daughters, which should be relatively small for the p candidate and large for the π candidate, and cuts on the dca and the decay length of the resulting Λ candidate.
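For concreteness, the invariant mass assigned to each (p, π) candidate pair follows from the measured momenta under the assumed mass hypotheses; the following minimal sketch (illustrative Python with standard PDG mass values, not the experiment's actual analysis code) shows the computation underlying such a combinatorial approach.

```python
import numpy as np

M_P, M_PI = 0.93827, 0.13957            # proton and charged-pion masses (GeV/c^2)

def inv_mass(p_proton, p_pion):
    """Invariant mass (GeV/c^2) of a Lambda candidate built from a proton and a pion
    three-momentum (GeV/c), under the respective mass hypotheses."""
    p_proton, p_pion = np.asarray(p_proton, float), np.asarray(p_pion, float)
    e_p = np.sqrt(M_P ** 2 + np.sum(p_proton ** 2))
    e_pi = np.sqrt(M_PI ** 2 + np.sum(p_pion ** 2))
    p_tot = p_proton + p_pion
    return np.sqrt((e_p + e_pi) ** 2 - np.sum(p_tot ** 2))

# example usage with an arbitrary forward-going pair (GeV/c)
print(inv_mass([0.9, 0.1, 4.0], [0.2, 0.0, 1.0]))
```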
The major source of background remaining after these cuts is estimated to be from K⁰_s → π⁺ π⁻, where one of the two daughter pions is assumed to be a proton. For the current analysis, a full GEANT detector simulation with a HIJING [6] generated K⁰_s distribution in p_t and y was used to predict this background produced by making the wrong mass assumption for one of the two daughters.
Although the analysis presented here does not correct the yields for acceptance and efficiency, simulation studies show that the corrections for Λ and Λ are equal to first order. This permits the calculation of antiparticle to particle ratios without the knowledge of the absolute yield. The resulting ratios are not corrected for absorption or annihilation of the Λ or its daughter particles in the detector material. Due to the high momentum of the particles in forward direction, absorption effects are small. Simulations show them to be less than 2%, which is of the same order as the statistical error of the study.
III. ANALYSIS RESULTS
For the current analysis, a sample of 10.6 million d+Au minimum bias events with a reconstructed primary ver-tex within 50 cm of the nominal interaction point are used.
With the assumption that all candidates that pass the cuts are actually Λs, their rapidity can be calculated. This still includes all background that remains after the geometrical cuts. Figure 1 shows the total rapidity acceptance of the analysis for both detectors and the used range within ± 0.25 of the mean rapidity y .
The rapidity slice used for the analysis is limited to 0.5 units because HIJING simulations predict a strong rapidity dependence of Λ/Λ in the region of interest, which would affect the measurement if a wide range of rapidities is included in one bin. In addition, the p t acceptance of the FTPCs is constant as a function of the pseudorapidity η, not as a function of y. In a wide analysis window in y the covered p t changes considerably over the selected y range, and a possible p t dependence of the ratio would make the interpretation of the results more difficult, especially if this dependence is different for the deuteron and the gold side of the collision. This background is subtracted bin by bin and leads to the Λ and Λ invariant mass distributions shown for both detectors in figure 2 b) and 2 c). The widths of the peaks are in agreement with simulations and due to the limited momentum resolution of the FTPCs, especially for tracks not originating from the primary vertex. From these invariant mass distributions the uncorrected yields and the statistical errors for Λ and Λ at forward and backward rapidity are determined by summing all bins 1.08 GeV/c 2 < M inv < 1.18 GeV/c 2 .
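A bare-bones version of this yield extraction could look like the sketch below (illustrative Python; the simple Poisson error treatment and the histogram inputs are assumptions made for the example, not the collaboration's actual procedure).

```python
import numpy as np

def yield_in_window(bin_centers, candidate_counts, background_counts, lo=1.08, hi=1.18):
    """Uncorrected yield and a naive statistical error from a background-subtracted
    invariant-mass histogram, summing bins with lo < M_inv < hi (GeV/c^2).
    Assumes independent Poisson fluctuations in both histograms."""
    bin_centers = np.asarray(bin_centers, float)
    cand = np.asarray(candidate_counts, float)
    bkg = np.asarray(background_counts, float)
    mask = (bin_centers > lo) & (bin_centers < hi)
    net_yield = (cand[mask] - bkg[mask]).sum()
    stat_err = np.sqrt(cand[mask].sum() + bkg[mask].sum())
    return net_yield, stat_err
```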
The systematic errors have not been investigated in detail yet. From a variation of cuts and background estimations they are predicted to be 0.12 for this preliminary analysis. Since possible remaining background contributions tend to be equal for Λ and Λ̄, an asymmetry of the systematic errors towards lower values of the ratio appears likely, but has not yet been thoroughly investigated. The derived antiparticle to particle ratios are compared with the ratios given by the HIJING event generator.
The mean rapidity of Λ and Λ̄ candidates on the deuteron side (FTPC W) is ⟨y⟩ = 2.69. The analysis yields a ratio of Λ̄/Λ = 0.58 ± 0.02(stat) ± 0.12(syst). On the gold side (FTPC E) ⟨y⟩ = -2.72, with a ratio Λ̄/Λ = 0.71 ± 0.02(stat) ± 0.12(syst). Figure 3 shows Λ̄/Λ from HIJING without taking detector effects into account. It has been shown by measurements in Au+Au collisions at √s_NN = 130 GeV [7,8] that the pure HIJING model tends to underestimate the influence of baryon number transport, and thus overestimates the antibaryon/baryon ratio. Overlaid in red are the STAR points, the mid-rapidity value being 0.84, determined from a preliminary analysis of main TPC data. The data show good agreement with HIJING on the deuteron side, while on the gold side, HIJING appears to overestimate the ratio. Λ̄/Λ shows a significant drop at high rapidity, probably due to baryon number conservation and fragmentation contributions to the Λ production. The asymmetry of the ratio in the highly asymmetric collision system may be caused by different contributions of pair production and baryon number transport in the projectile and the target region. While the participating nucleons in the gold nucleus typically suffer only a single collision each, the nucleons from the deuteron participate in multiple collisions as they pass through the gold nucleus. This can lead to increased baryon number transport from the deuteron region towards mid-rapidity and thus results in a decrease of Λ̄/Λ in the studied rapidity range.
IV. CONCLUSION
First preliminary measurements of Λ/Λ at high rapidity in d+Au collisions at √ s N N = 200 GeV using the forward TPCs of the STAR experiment have been presented. An asymmetry between the deuteron and the gold side of the collision is seen, indicating different contributions of antiparticle and particle production mechanisms and baryon number transport in the forward region. In general, the ratios at high rapidity are lower than at mid-rapidity.
Future studies will compare to a wider range of theoretical models and use a variety of background models and cut sets. A binning of the analysis in collision centrality, rapidity or p t might also be feasible. | 2019-04-14T03:06:45.252Z | 2004-03-12T00:00:00.000 | {
"year": 2004,
"sha1": "a5ab05ed976a3b2fb38b728e64366d8b5e3a5847",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a5ab05ed976a3b2fb38b728e64366d8b5e3a5847",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
53874834 | pes2o/s2orc | v3-fos-license | Impacts of prescribed burning on Sphagnum mosses in a long-term peatland field experiment
Understanding fire impacts on peatland vegetation can inform management to support function and prevent degradation of these important ecosystems. However, time since burn, interval between burns and number of past burns all have the potential to modify impacts. Grazing regime may also affect vegetation directly or via an interaction with burning. We used new, comprehensive survey data from a hillslope-scale field experiment initiated in 1954 to investigate the effects of burning and grazing treatments on Sphagnum. Historical data were consulted to aid interpretation of the results. The unburned reference and the most frequently burned (10-year rotation) treatments had greater Sphagnum abundance and hummock height than intermediate treatments (20-year rotation and no-burn since 1954). Abundance of the most common individual species (S. capillifolium, S. subnitens and S. papillosum) followed similar patterns. Light grazing had no impact on Sphagnum-related variables, nor did it interact with the burning treatments.These results suggest that in some cases fire has a negative impact on Sphagnum, and this can persist for several decades. However, fire return interval and other factors such as atmospheric pollution may alter effects, and in some cases Sphagnum abundance may recover. Fire severity and site specific conditions may also influence effects, so we advise consideration of these factors, and caution when using fire as a management tool on peatlands where Sphagnum is considered desirable.
Introduction
Peatlands, which cover around 4.23 million km 2 globally [1], are important landscapes for biodiversity, carbon storage and hydrological functions [2]. On many peatlands, particularly at high latitudes, Sphagnum mosses are central to ecosystem function, influencing hydrology [3], chemistry [4], temperature [5] and microtopography [6], as well as sequestering carbon [7]. Fire is common on peatlands worldwide and includes both wildfire and prescribed burning PLOS ONE | https://doi.org/10.1371/journal.pone.0206320 November 1, 2018 1 / 17 a1111111111 a1111111111 a1111111111 a1111111111 a1111111111 for purposes including wildfire prevention, land clearance, agricultural grazing and game management [8][9][10]. Knowledge of the impacts of fire on Sphagnum is therefore vital to inform fire-impact predictions and nature conservation management decisions. Prescribed burning occurs in many peatland ecosystems worldwide including areas of North America [11] and Europe [12,13]. Fires are often controlled to burn vegetation without igniting the underlying peat, so results from studies of wildfire, where moss and surface peat layers can be consumed, may not be directly applicable. In the UK, prescribed burning is commonly carried out on patches of up to c.4000 m 2 (0.4ha) in rotations of around 8-25 years. The canopy layer, which on UK peatlands is usually dominated by dwarf shrubs (including Calluna vulgaris) and sedges (commonly Eriophorum vaginatum and E. angustifolium) is burned to create a range of vegetation ages suitable for nesting and foraging of the game bird red grouse (Lagopus lagopus scotica). Official guidance advocates a strong presumption against burning on deep peat [14], but there is evidence that burning has increased on UK peatlands in recent decades [9,15,16].
While national-scale work has shown that there is less Sphagnum cover on peatlands subject to prescribed burning in England [17], results from local and regional scale studies suggest that effects can vary depending on fire severity and return interval [18,19]. Burning may influence Sphagnum by heat damage or combustion with varying recovery prospects [18,20,21], and changes in substrate properties can also have an effect. For example, higher near-surface peat bulk densities and lower soil water availability on recently burned sites [22] can limit Sphagnum growth [23], and more extreme peat surface temperatures in the years after burning [24] may also have a negative impact [25]. Ash deposition from burning may cause short term cation enrichment [26] with potentially positive effects for some Sphagnum species [23], but this may also increase competition, and some cations may be depleted in the longer term (2 + years after burning) [27]. Previous field studies have reported Sphagnum abundance [17,19], but hummock height, which may also be affected by burning representing a change in biomass, is seldom reported. Understanding burning effects on both abundance and hummock height would contribute to a more complete knowledge of impacts on carbon sequestration and other ecosystem services.
The Hard Hill vegetation burning and grazing experiment at Moor House National Nature Reserve in the North Pennines, UK was established in 1954. The main experiment includes three burning treatments with plots burned on either short (10-year) or long (20-year) rotations, or burned once in 1954/55 and left unburned since (S, L and N plots; Table 1). Reference (R) plots were established adjacent to the main experiment plots, outside of the 1954 burn area. Studies of the vegetation of the main experiment plots were published in the 1970s [28] and 1980s [29]. More recent work has shown greater Sphagnum abundance [19] and lower Sphagnum propagule availability [30] on the 10-year (S) rotation plots compared to 20-year (L) and no-burn since 1954 (N) plots. However, a direct comparison of Sphagnum abundance between the experiment and reference (R) plots has not previously been carried out, so it is not known how the experimental treatments compare to surrounding vegetation. Furthermore, previous surveys may not have captured rarer or less evenly distributed Sphagnum species [31].
Here we present the results of a comprehensive survey of the main experiment and reference plots, including hummock height data and mapping of all Sphagnum patches in every plot at species level, with the aim of investigating the effect of burning treatments. This represents the most complete survey of Sphagnum in a burning experiment to date and the first time the Hard Hill experimental treatments have been compared to a reference. Changes over time are also considered with reference to data from past surveys. The results are discussed in the context of the potential processes responsible for burning impacts on Sphagnum and implications for future burn management and policy.
Experimental design
The Hard Hill experiment consists of four 90m x 60m blocks, each made up of six 30m x 30m plots. At the start of the experiment in 1954, half of each block (three plots) was fenced to exclude grazing, and within each half three burning treatments (S, L and N) were allocated at random. All of the main experiment plots were burned at the start of the experiment and the S and L plots have been burned on approximately 10-and 20-year rotations respectively since then (subject to suitable weather conditions). Unfenced reference plots which had remained unburned for at least 30 years prior to 1954 [28] were established alongside each block outside of the initial burn areas (Fig 1). The burning and survey schedule of the experiment is described in Lee et al. [19], which also provides information on the overall vegetation composition. Briefly, the plots comprise Calluna vulgaris-Eriophorum vaginatum blanket mire which is characteristic of much of the blanket bog in the English Pennines which has been modified to a greater or lesser extent by grazing and burning management and other impacts.
Sphagnum surveys
The 24 main experiment plots and four reference plots were surveyed between August 2015 and April 2016. Within each plot 10 transects were laid out at evenly spaced intervals. Transects were located at least 1.5 m away from the plot edges to avoid heavily trampled areas and edge effects and hence were between 22.5 and 27 m long. Survey data were recorded at 10 evenly spaced pin points along each transect (100 points total per plot). Sphagnum presence or absence was recorded at each pin point, and where Sphagnum was present the species was identified. S. capillifolium ssp. capillifolium (Ehrh.) Hedw. and S. capillifolium ssp. rubellum (Wilson) M.O.Hill were initially recorded separately, but the two subspecies could not always be differentiated with certainty so analysis was undertaken at species level. Sphagnum patch length and width were measured to the nearest cm at the widest points parallel and perpendicular to the transect, and patch area was calculated using the formula area = π((length + width)/4)². Sphagnum patch height was measured to the nearest cm by inserting a cane vertically into the patch at the pin point until it met resistance from the underlying peat. For the 24 main experiment plots the height data were recorded approximately 6 months later than the frequency, species identity and length/width data, so it was not possible to replicate precisely the original pin points. Alongside the transect survey, a mapping survey was conducted to record the location, species and approximate area of every Sphagnum patch in all 24 plots. This was carried out by walking along the nine 2.5-3 m wide strips between transects in each plot and drawing each Sphagnum patch encountered as a polygon on a corresponding map. Transects were marked with tape measures to provide a reference for patch position. To calculate Sphagnum frequency from the resulting maps, a transparent overlay with 1296 regular grid squares for each plot was used. The number of squares partly or entirely occupied was counted for each Sphagnum species, and Sphagnum as a genus.
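To show how these two derived quantities come out of the field measurements, here is a short sketch (Python; the circle-of-mean-diameter reading of the area formula and all variable names are assumptions made for illustration, not code from the study).

```python
import numpy as np

def patch_area_m2(length_cm, width_cm):
    """Patch area (m^2) from the two measured diameters (cm), treating the patch
    as a circle whose diameter is the mean of length and width."""
    diameter_m = (length_cm + width_cm) / 2.0 / 100.0
    return np.pi * (diameter_m / 2.0) ** 2

def map_frequency(occupied_squares, total_squares=1296):
    """Proportion of overlay grid squares partly or entirely occupied by a species."""
    return occupied_squares / total_squares

# example: a 30 cm x 20 cm patch, and a species found in 215 of the 1296 squares
print(patch_area_m2(30, 20), map_frequency(215))
```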
Past surveys
Vegetation surveys which recorded Sphagnum and other species in some or all of the Hard Hill plots were carried out in 1961, 1965, 1972/3, 1982, 1991, 2001 and 2011 using various recording methods ( Table 2). Analysis of the 1972-2001 data from the S, L and N plots and investigation of change between 1965 and 2011 in the R plots was carried out by Lee et al. [19]. Rawes and Hobbs [28] presented results from the 1961 survey, but differences in Sphagnum between treatments were not discussed. No comparison of the N and R plot data from the 1965 data has been published to date. Therefore, to support the interpretation of the 2015/16 data, we analysed Sphagnum abundance from the plots surveyed in 1961 (all main experiment plots) and 1965 (grazed N and R plots).
Data analysis
All statistical analyses were carried out using R 3.1.0 [32]. First, data from the main experiment plots were analysed using split plot ANOVA with split plot nested within block as the error term and burning, grazing and their interaction as factors. Second, data from the grazed main experiment plots and reference plots were compared using ANOVA with burn status and block as factors. The two types of analysis both represent balanced experimental designs, and were carried out separately to account for the absence of a fenced reference treatment. Dependent variables in the 2015-2016 data included transect hits, patch size, hummock height, and proportion of map squares occupied for Sphagnum as a genus. The proportion of transect hits and map squares occupied were calculated for individual species and where species were present in more than one percent of samples these variables were analysed in the same way.
Analysis of the 1961 and 1965 historical data used similar methods, with split plot ANOVA for the 1961 main experiment data and ANOVA with burn status and block as factors for the 1965 N and R plot data. Domin scores were transformed using the Domin 2.4 transformation (Currall 1987) to give an approximation of percentage cover. To account for potential effects of unequal sampling effort between treatments only the first five quadrats from each grazed N plot in 1965 were used in the analysis.
For each ANOVA model homogeneity of variances and normality of residuals were inspected graphically, and where appropriate data were transformed to reduce skew and/or heteroscedasticity. Tukey's Honest Significant Difference (HSD) pairwise test was used to investigate differences between treatment combinations.
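For readers who wish to reproduce the pairwise comparison step in another environment, the following sketch uses Python with statsmodels rather than the R software used in the study; the per-plot values are invented placeholders, not the survey data.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# hypothetical per-plot Sphagnum map frequencies (proportion of grid squares occupied)
freq = np.array([0.30, 0.28, 0.35, 0.33,    # S (10-year rotation) plots
                 0.18, 0.20, 0.15, 0.17,    # L (20-year rotation) plots
                 0.08, 0.10, 0.07, 0.09])   # N (no-burn since 1954) plots
burn = ['S'] * 4 + ['L'] * 4 + ['N'] * 4

# Tukey HSD pairwise comparison of burn treatments
print(pairwise_tukeyhsd(endog=freq, groups=burn, alpha=0.05).summary())
```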
Comparison of main experiment plots in 2015-16
For Sphagnum spp. (hereafter Sphagnum), both transect hits and map frequency were significantly associated with burn treatment (Table 3), with more frequent occurrence in S plots than in L and N plots and more frequently in L plots than in N plots (Fig 2). However, neither median patch area nor hummock height were significantly associated with burn status ( Table 3). The only species present at more than 1% of transect pin points was S. capillifolium, which was also significantly associated with burn treatment (Table 3), occurring more frequently in S plots than in L and N plots. S. capillifolium, S. subnitens and S. papillosum all occurred in more than 1% of the map squares and all three were significantly associated with burn treatment (Table 3). S. capillifolium occurred more frequently in S plots than in L and N plots and more frequently in L plots than in N plots, while S. subnitens and S. papillosum occurred more frequently in S plots than in N plots. Neither grazing treatment nor its interaction with burn treatment were significantly associated with any of the Sphagnum-related variables tested.
Comparison of reference and grazed main experiment plots in 2015-16
Sphagnum was significantly associated with burn status (Table 4), occurring more frequently in R and S plots than in L and N plots according to both the transect and map data (Fig 3). Patch area was not significantly associated with burning status, but hummock height was (Table 4), with higher values in R plots than in L or N plots (Fig 3). In both the transect and map data, S. capillifolium occurred more frequently in R and S plots than in L and N plots (Table 4). S. subnitens and S. papillosum were both significantly associated with burning status in the map data (Table 4) and were more frequent in S than in N plots, with S. papillosum also more frequent in S than in R plots.
Past surveys
Analysis of the 1961 data from the main experiment plots showed no significant difference in cover of Sphagnum according to burn treatment, grazing or their interaction ( Table 5, Fig 4) seven years after the initial burn. Analysis of the data from N and R plots in 1965 found that the reference plots had significantly greater Sphagnum cover ( Table 6, Fig 5).
Recording methods and Sphagnum occurrence
The mapping survey provided the most comprehensive measure of Sphagnum frequency, with a greater number of species recorded than in the transect survey. Sphagnum frequency recorded in the transect survey was systematically lower than in the map survey, but the pattern of results was similar (Table A in S1 Supporting Information, Figs 2 and 3). This indicates that on the scale of this experiment, transect sampling is an acceptable way to evaluate treatment effects on more common species, but mapping surveys may be a more appropriate method to record less frequent species. Sphagnum as a genus, and the individual species which were common enough to analyse separately, appeared to respond to burning treatments similarly. However, it is possible that some of the species occurring in less than one percent of plots, which were not analysed separately, responded differently. For example, S. angustifolium did not occur in the main experiment plots but was the second most common species in the reference plots, and conversely S. russowii occurred only in the main experiment plots (Table A in S1 Supporting Information), but the relative rarity of these species within the Hard Hill plots means that it is difficult to confidently attribute these differences to burning effects. A greater number of species occurred in the 24 main experiment plots compared to the four reference plots (Table A in S1 Supporting Information), which was expected due to the greater area covered.
Fig 2. Values for i) transect frequency (0-1), ii) map frequency (0-1), iii) median patch size (m²) and iv) median hummock height (cm) of Sphagnum for all plots (grazed and fenced) within the main experiment in 2015-16. Burn treatments are short rotation (S), long rotation (L) and no-burn (N). Treatments which do not share an x, y, z letter coding are significantly different (p < 0.05) according to Tukey HSD tests. https://doi.org/10.1371/journal.pone.0206320.g002
Table 4. Results of ANOVAs for Sphagnum-related variables recorded in the 2015-16 survey in the grazed main experiment (S, L and N) and reference (R) plots. Transformations of the dependent variables are noted in brackets.
Sphagnum frequency in the main experiment plots
The results of the 2015-16 survey indicate that Sphagnum is most frequent in S plots (10-year rotation), followed by L plots (20-year rotation), and least frequent in the N plots (unburned since 1954). Previous work by Lee et al. [19] using data from point quadrat surveys in 1972/3, 1982, 1991, and 2001 showed that Sphagnum abundance was greatest in S plots, but did not report any significant difference between N and L plots. We found no significant differences in Sphagnum patch area or hummock height between treatments, which suggests that the difference in frequency could be due to more numerous patches in the more frequently burned treatments. The cause of this difference could be a more open canopy or increased bare ground after burning providing a release from competition and an opportunity for Sphagnum to establish [19]. Alternatively, ash produced by fire can release limiting nutrients such as phosphorus, which can promote moss spore germination [33] and growth of some Sphagnum species [23]. As the S plots have been burned most frequently (five times since 1954 compared to twice in L plots), there have been more potential establishment opportunities in this treatment. Furthermore, the shorter rotation in S compared to L plots means less biomass accumulation between burns [34] and therefore less fuel, potentially resulting in lower fire temperatures [35]. This may have reduced the chance of heat-related damage to existing Sphagnum [20], contributing to the greater abundance than in the L treatment.
Fig 3. Values for i) transect frequency (0-1), ii) map frequency (0-1), iii) median patch size (m²) and iv) median hummock height (cm) of Sphagnum for grazed plots within the main experiment, and reference plots in 2015-16. Burn treatments are short rotation (S), long rotation (L), no-burn (N) and reference (R). Treatments which do not share an x, y, z letter coding are significantly different (p < 0.05) according to Tukey HSD tests. https://doi.org/10.1371/journal.pone.0206320.g003
It is also possible that atmospheric pollution at the time of burning is relevant. After the UK clean air act of 1956, levels of sulphur pollutants peaked around 1960 and subsequently declined [36][37][38]. High atmospheric pollution levels in the 1950s may therefore have inhibited Sphagnum regeneration after burning, resulting in vegetation dominated by other species, as observed in N plots which were last burned in 1954. However, in S and L plots, subsequent burns which occurred under reduced atmospheric pollution levels may have facilitated Sphagnum growth by reducing competition as discussed above. Though different sampling methods were used, the data from the 2015 transect survey and the point quadrats used by Lee et al. [19] both provide an estimate of percentage cover and comparison suggests that this may have increased recently, e.g., from 7% in the S grazed treatment in 2001 [19] to 33% in 2015. Sphagnum is known to be affected by atmospheric pollutants [39,40] and Noble et al. [17] observed that an interaction between burning and atmospheric pollution was associated with Sphagnum abundance. Further study could help to clarify the processes behind such interactions. Grazing treatment had no impact on any of the Sphagnum-related variables. Similarly Lee et al. [19] found no effect of grazing on Sphagnum or overall vegetation composition, which they suggested may be due to the low density, summer only grazing regime at Hard Hill. Past work has found some evidence of higher density grazing impacting Sphagnum [28], and Noble et al. [17] found that plots with livestock droppings had less Sphagnum cover, suggesting that there could be a negative effect at some stocking levels.
Sphagnum frequency in the grazed main experiment and reference plots
The results of the comparison between the grazed main experiment and reference plots, in particular the three times greater Sphagnum map frequency in R plots compared to N plots, suggest that the 1954 burns had a negative impact on Sphagnum which has persisted for over 60 years. This indicates the importance of considering and critically evaluating 'control' treatments when interpreting results from long term experiments. Although the R plots at Hard Hill may have been burned historically prior to the experiment, they are likely to provide a more representative baseline than the N plots. The severity of the 1954 burns is unknown (and could have varied between blocks), and therefore the difference in Sphagnum abundance between N and R plots could be a result of combustion, temperature related damage [20], or indirect effects via changes to peat properties [41][42][43] after the 1954 fires.
The R plots also had more Sphagnum than L plots, but a similar amount to S plots, suggesting that the 10-year burning rotation in these plots has mitigated the impact of the initial 1954 burn. The greater hummock height in R plots than in L and N plots (Fig 3) could indicate that hummocks in the R plots are generally older, or have grown at a faster rate over the course of the experiment, whilst the hummock height in S plots (which had no significant difference with any other group) could be due to an intermediate growth rate. NMDS analysis (Fig A in S2 Supplementary Information) showed that R plots were distinct from, and occupied a smaller area of the ordination space than the grazed experimental plots, suggesting that they were more consistent in terms of Sphagnum species composition than the S, L and N plots. Apart from burning treatments, the only consistent difference between the main experiment and reference plots is likely to be greater trampling by humans, as the main experiment plots have been surveyed more frequently [44].
Sphagnum frequency in past surveys
Analysis of the data from the 1961 survey showed no significant difference in Sphagnum abundance between the main experiment treatments at this time (Fig 4). This was expected, as in 1961 all of the main experiment plots had been subject to the same treatment (burned once in 1954). Comparison of the N and R plots in 1965 showed that there was significantly more Sphagnum in R plots at this point. This shows that the negative effect of the 1954 burn on Sphagnum, observed in the 2015-16 survey, was apparent 11 years after burning.
Although the 1961 and 1965 surveys used the same Domin abundance survey methodology, the Sphagnum abundance recorded in the N plots in 1965 was lower than in 1961 (Figs 4 and 5). This could be an artefact of the variation in sample sizes or differing interpretations of the nonlinear Domin scale by surveyors on the two occasions. Alternatively a decrease in Sphagnum abundance in N plots between 1961 and 1965 could have been caused by the unusually cold winter of 1962-63 [45], the relatively high levels of atmospheric pollutants such as SO 2 at this time [36,37], or an interaction between one of these factors and burning. As R plots were not surveyed in 1961 and S and L plots were not surveyed in 1965, it is not possible to determine whether this difference was specific to the N plots. The difference highlights some of the potential problems with comparing data collected by different surveyors and at different times and makes it difficult to interpret how Sphagnum abundance in R plots compared to S and L plots in the early years of the experiment.
Caveats
The Hard Hill experiment has provided a significant amount of published knowledge on prescribed burning impacts [19, 28-30, 34, 46-50]. However, caution is required when extrapolating results to peatland or moorland in general. For example, it has been suggested that local conditions at Moor House including high altitude and high annual rainfall may cause a delay in regeneration of C. vulgaris compared to other sites [51], which may give other plants, including Sphagnum, more opportunity to establish and grow after burning.
The burning carried out for the Hard Hill experiment is likely to have been carefully controlled, and does not represent the full range of burning methods and severities which occur on peatlands. The experiment also represents a relatively small area of a hillslope otherwise dominated by vegetation that has remained unburned for over 90 years, which may influence the hydrology of the plots and provide a source of Sphagnum propagules. On sites managed for grouse shooting, a much larger proportion of the site may be burned and fire effects on water availability [22,43] and the Sphagnum propagule bank [30] may be compounded.
Conclusions
Our results suggest that, as in the case of the 1954 burn, a single fire event can lead to reduced Sphagnum over 60 years later. This provides evidence against burning previously unburned (or long-unburned) areas of blanket peatland where Sphagnum is present. Thus the recent trend in some parts of the UK for burning encroachment onto areas of peat that have not been burned for at least several decades [9,15,16] could reduce Sphagnum cover with potentially deleterious impacts on ecosystem function [43,52]. Shorter rotations may lead to greater Sphagnum abundance compared to longer rotations in some cases, though environmental conditions including atmospheric pollution may influence this effect. We would caution against burning on a shorter rotation as a method of encouraging Sphagnum because of the potential for other negative effects on peatland function [31,42,53]. The longevity and scale of the Hard Hill experiment make it a valuable source of information, but care should be taken when generalising results from any single site, and in particular the specific management history and climate of Moor House should be considered. Furthermore, interpreting historical data can be challenging, particularly where surveyors, methods or sample sizes are inconsistent, and apparent changes over time should be treated with caution. | 2018-11-15T16:40:34.998Z | 2018-11-01T00:00:00.000 | {
"year": 2018,
"sha1": "0b00f064bfd4fea7b3b7539b5a8a1c1c6888a547",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1371/journal.pone.0206320",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0b00f064bfd4fea7b3b7539b5a8a1c1c6888a547",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science",
"Medicine"
]
} |
234066613 | pes2o/s2orc | v3-fos-license | Exploring Risks in the Adoption of Business Intelligence in SMEs Using the TOE Framework
: The business success of small-and medium-sized enterprises (SMEs) increasingly relies on the adoption of various technological innovations. For today’s unpredictable business operations, business intelligence systems (BISs) represent one of the most prominent tools with a significant impact on business performance. However, different internal and external risks may influence BIS adoption. The goal of this paper is to investigate the risks that impact BIS adoption in SMEs, using the Technology, Organization, and Environment (TOE) framework. For that purpose, we develop the logistic regression model, using data collected by a questionnaire survey using a sample of 100 Croatian SMEs. The results indicate the applicability of the TOE theoretical framework for examining BIS adoption in SMEs. Given the results obtained, the sampled SMEs should take into account the internal risks related to the organizational dimension and external risks related to the environmental dimension. Our research did not reveal the significant impact of technological risks that encompass characteristics of considered technological innovation related to the technology dimension.
Introduction
Currently, business success is mainly correlated with adopting innovations, especially innovations in the context of information technology (Haseeb et al. 2019; Basole et al. 2013; Wu et al. 2015). According to Grauer (2001) and Hughes (2016), each of these emerging technological innovations improves and upgrades enterprises' overall business activities (Kusuma et al. 2020). Enterprises are encouraged to change their business models into electronic-based business models. Consequently, the growing number of business processes, such as business transactions, has created the problem of accumulating large amounts of business data (Kusuma et al. 2020). This challenge has driven the development of innovative data management technologies in the form of numerous solutions designed for storing and analyzing accumulated business data. Consequently, for enterprises that generate large amounts of structured and unstructured data through the different technologies used in their daily operations (Xu et al. 2020), it is crucial to establish a reliable solution for storing, analyzing, and distributing those data (Mola et al. 2020).
Enterprises adapt their products and services to customer needs using external and internal data sources. One of the best tools for exploiting the various forms of internal and external data sources, and harmonizing them into a solution with relevant business value, is the business intelligence system (BIS), which allows enterprises to recognize and wisely use data to stabilize or improve their position on the market (Khan et al. 2020). Consequently, strategic decisions based on high-quality data analysis lead to higher financial gains from the enterprise's operations. In particular, the increasing use of technological innovations such as the BIS in conducting financial activities, the so-called FinTech, stands out (Muhammad et al. 2014; Marrara et al. 2019). FinTech represents a bridge between the achievements of technological innovation and product and service development in the financial sector (Marrara et al. 2019). In this field, BI technology such as data mining supports the enterprise in predicting future financial opportunities and risks, creating products and services tailored to client needs, and, finally, achieving financial prosperity (Marrara et al. 2019).
However, the implementation of complex information systems is often prone to risks related to various sources, such as lack of management support or lack of sufficient resources in terms of finance and workforce. Therefore, it is strongly advised for enterprises to consider all the internal and external factors that could impact the success of adopting the BIS.
The Technology-Organization-Environment (TOE) framework, designed by Tornatzky and Fleischer (1990), has been widely used to investigate the adoption of various technologies. Its name reflects the three dimensions in which it explores the success factors of adopting technological innovations in enterprises: the technology, organization, and environment contexts of business. The technology dimension includes the internal and external technology infrastructure available to the enterprise. Characteristics of the market, such as competitive forces, are included within the environmental context. The organizational context, covering aspects such as management support, comprises the relationships and mechanisms that exist within the enterprise and that are relevant for technology adoption (Tornatzky and Fleischer 1990).
Taking into account the need to assess the relevant factors that would support the implementation of BIS, this study aimed to examine the determinants that could affect BIS adoption in small- and medium-sized enterprises (SMEs) by identifying the determining factors within the selected Technology, Organization, and Environment (TOE) framework. To this end, we conducted a survey on a sample of 100 Croatian SMEs operating in the tertiary (service) sector and developed a logistic regression model. Our results reveal that organizational and environmental factors are relevant for the implementation of BIS in SMEs, while the technological factors did not have a significant impact. Practical and theoretical implications of our work emerge from these results.
The paper is structured as follows. After the introduction, a theoretical background is presented, emphasizing the key ideas for examining the potential risk of innovative technology adoption, as well as the BIS adoption process in SMEs. The third part of the paper describes the methodology, and the fourth part presents the research results. In the fifth part of this paper, a brief discussion based on obtained results is given. In the concluding part of the paper, final thoughts, as well as limitations of this research, and recommendations for further research are given.
SMEs and Technology
Technological innovation can represent an upgrade of existing products or services according to market expectations or lead to the introduction of new products or services in enterprises (Hisrich et al. 2017). According to different authors (i.e., Pavitt 1987; OECD 2005; Diaconu 2011), technological innovations can be defined as processes or products. Such innovations impact daily life, business, government, and the environment. The expansion, adoption, and application of technological innovations can lead to better productivity, purchasing power, increased employment rates, and reduced environmental pollution, but can also change the behavior of existing society and reduce social disparities among people (Kusuma et al. 2020; Viswanathan and Sreekumar 2019). According to various authors, innovative technologies rapidly change the service sector, as well as the existing seller-buyer relationship (Jeon et al. 2020; Iqbal et al. 2018).
Currently, SMEs are becoming crucial for the economic progress of many countries worldwide, as well as of the Republic of Croatia (Zhen 2013; Casanova et al. 2018; da Silva et al. 2020; CEPOR 2019). Although SMEs represent the driving force of many countries, they still face many risks that challenge their sustainability in today's turbulent market. SMEs often encounter a lack of financial and human resources, as well as insufficient technology infrastructure, competition on the market, and other internal and external barriers that limit their full business accomplishment (da Silva et al. 2020). According to Ploh (2017), it is crucial to support Croatian SMEs in development and networking by preparing a more suitable legal framework for their establishment, supporting their participation in various types of education, and using different IT innovations. Accordingly, SMEs need to consider investing in different technological innovations which could enhance their efficiency in terms of process performance, preserve sustainability, and encourage the development of their competitive advantage (da Silva et al. 2020).
Technology Adoption Models
Many theoretical models, frameworks, and theories for investigating successful technology adoption paths in businesses have been developed over time. Different authors have examined the adoption of various technological innovations within enterprises using the Theory of Reasoned Action (TRA) (i.e., Tran et al. 2014), the Technology Acceptance Model (TAM) (i.e., Najib and Fahma 2020), the Unified Theory of Acceptance and Use of Technology (UTAUT) (i.e., Abu et al. 2015), the Theory of Planned Behavior (TPB) (i.e., Seol et al. 2016), the Diffusion of Innovation (DOI) theory (i.e., Tehrani and Shirazi 2014), and the TOE framework, to name the most important.
According to Kim and Eunil (2020), user resistance, privacy concerns, trust, the perception of technology usage and benefits, accessibility, and the perception of costs are recognized as potential challenges in the adoption process. Similarly, technical complexity, the perceived consequences of consumer technology usage, and financial risks are also recognized as potential risks for successful technological innovation adoption (Jeon et al. 2020). Inadequate data management is also classified as a perceived risk for the adoption process (Khan et al. 2020). Najib and Fahma (2020) accentuate the difficulty of setting up a technology and the costs of the project as some of the main risks that could make the adoption of an innovative technology unsuccessful and the later usage of the planned technology ineffective. Igwe et al. (2020) emphasize organizational capabilities, such as insufficient technical, human, time, and management resources and a deficiency of top-management support, as potential challenges to the successful conduction of a technological innovation adoption project within the enterprise. Moreover, competitive and supplier pressure is stressed as a potential risk that could slow down or stop the adoption of a new technology (Igwe et al. 2020). The TOE framework includes the above-stated factors as determinants of successful technology adoption. The TOE framework is the most recognized and used framework in the scientific field for examining the potential factors that could affect the technological innovation adoption process in enterprises and therefore for preventing the potential risks that could cause the failure of the adoption project. Moreover, the determinants within this framework can be simultaneously observed as constraining factors and influencing factors of the adoption of a particular technological innovation within the enterprise (Bijker and Hart 2013).
BIS Adoption
BIS has become crucial for modern business, as it provides companies with information and new knowledge that, if used on time and for the right problem, can bring many positive effects on business results. BIS refers to a set of different tools, methods, and applications used to store, analyze, and visualize business data relevant for successful decision-making (Agiu et al. 2014). However, BIS is one of the most demanding technological innovations to adopt in an enterprise, especially since it requires employees' training and additional time to acquire knowledge of how to use the obtained information, that is, how to analyze the business data (Nelke 2012).
The examination of the BIS adoption process using the TOE framework within different businesses worldwide has been conducted in numerous papers. In their research, different authors have focused on diverse stages of BIS adoption in business. For example, Hung et al. (2016) investigated the BIS implementation success factors, using TOE determinants, within Taiwan's enterprises that participated in the BIS implementation process.
In their paper, Hung et al. (2016) observed two main effects of BIS implementation on (1) user satisfaction and (2) overall system effectiveness. On the other hand, Lautenbach et al. (2017) researched the business intelligence and analytics (BI&A) usage level among 72 enterprises in South Africa that already used BI&A in their businesses. Lautenbach et al. (2017) investigated the impact of six TOE determinants on the BIS usage, half of which proved to be significant, i.e., determinants as data-related infrastructure capabilities, top management support, and external market impact. Owusu et al. (2017) investigated the TOE determinants' impact on the overall BIS adoption process among 120 private universities in Malaysia. Owusu et al. (2017) focused on TOE determinants by observing five BIS adoption stages: (1) basic information system (IS) usage, (2) data warehouse (DW) usage, (3) DW and analytics usage, (4) DW, analytics and strategic tools usage, and (5) real-time use of all previously mentioned BIS tools and methods. Puklavec et al. (2018) distinguished 11 potentially influencing TOE determinants on the overall BIS adoption process by conducting an extensive literature review and qualitative research based on ten in-depth interviews among BIS and IT experts employed within diverse European SMEs.
Even though the BIS adoption topic within various businesses is getting more attention within scientific circles, investigation of potential BIS adoption risks within SMEs is sparse. Prior studies mostly focus on the TOE determinants that encouraged BIS implementation or usage in enterprises of different sizes. While discussing their results and explaining the importance of the obtained results for each determinant, some researchers have also hinted at the opposite, the negative impact of the observed determinants on BIS adoption (i.e., Hung et al. 2016; Lautenbach et al. 2017). Therefore, it would be advisable to observe these determinants from the BIS adoption risk perspective. Moreover, few studies focus only on BIS adoption within individual business sectors.
Due to the emphasized economic-driving importance of SMEs for the Republic of Croatia, the SMEs' need to adopt technological innovations, such as BIS, that contribute to better decision-making, higher profits, and meeting growing customer needs must not be neglected. In the Republic of Croatia, BI usage within enterprises of all sizes has been getting more attention over time. According to Bilandžić et al. (2012), 57% of enterprises of all sizes in the Republic of Croatia were doing some part of BI in their business, while only 19% had separate BI departments. Moreover, Bilandžić et al. (2012) argued that Croatian enterprises had not reached the BI usage level of other countries. Dubravac and Bevanda (2015) concluded that Croatian SMEs lag behind other countries in the process of BI adoption due to insufficient financial resources, lack of technological knowledge, and fear of security vulnerabilities and data loss.
Since, to our knowledge, no similar research has examined the BIS adoption risks in Croatian tertiary-sector SMEs, the authors of this paper decided to observe the potential BIS adoption determinants from the risk perspective.
Technological Context
According to Tornatzky and Fleischer (1990), the technology dimension includes technologies and tools already in use or available for the enterprise. Therefore, based on previous research and the importance of each determinant within the technology dimension for technological innovation adoption, the perception of the comparative advantage of BIS (Pan and Pan 2019), the perception of the BIS's complexity (Gupta and Bhatia 2019), BIS compatibility with the enterprise information system (Pan and Pan 2019), and the key personnel ability to assess the BIS benefits (Acheampong and Moyaid 2016) are observed in this study as potential risks for the successful conduction of a BIS adoption project. The perception of the comparative advantage of BIS is the level of the user perception of how BIS usage is better for achieving higher business performance than some other technological innovation (Pan and Pan 2019). Wang and Wang (2016) found that relative advantage significantly influences the adoption of knowledge management systems, while Boonsiritomachai et al. (2016) also noted that the perception of relative advantage has a significant impact on BI adoption. Therefore, this study hypothesized that Croatian SMEs within the tertiary sector would be more likely to adopt BIS in their businesses if there were no barriers in their perception of the comparative advantage associated with BIS. Accordingly, the first hypothesis is proposed as follows: Hypothesis 1. Perception of the comparative advantage of BIS has a positive impact on SMEs' adoption of BIS.
Furthermore, Ahmad and Siraj (2018) define complexity as the level of user perception of how difficult a technological innovation is to understand or use. According to Ramamurthy et al. (2008), a higher level of technological innovation complexity will result in lower adoption of that technology in business. Boonsiritomachai et al. (2016) proved the significant correlation of the complexity determinant with BI adoption. Similarly, Yeoh (2011) mentioned how the complexity of the BIS adoption process can be too demanding and therefore cause a decline in the number of enterprises willing to conduct the BIS project.
According to the aforementioned suggestions, it is supposed that SMEs are less likely to adopt BIS if they perceive it as complicated to understand or use. Hence, the second hypothesis is proposed: Hypothesis 2. Perception of the BIS's complexity has a negative impact on SMEs' adoption of BIS.
Pan and Pan (2019) define the compatibility determinant as the degree of fit with the infrastructure, technologies, tools, and business practices already existing in the enterprise. Yoon et al. (2014) state that it is more likely for an enterprise to reject a new technology adoption plan if it finds it incompatible with its current technology or business procedures. Acheampong and Moyaid (2016) also suggest the importance of observing compatibility as a determinant in the case of the BIS adoption investigation. Based on the mentioned findings, this study proposed that SMEs are more likely to adopt BIS if it is compatible with their existing infrastructure, technologies, tools, business values, and procedures. Hence, the authors propose the third hypothesis as follows: Hypothesis 3. BIS's compatibility with enterprise information system has a positive impact on SMEs' adoption of BIS.
Ghobakhloo and Ching (2019) describe the key personnel ability to assess the BIS benefits as their level of clarity and understanding of values and benefits that they can achieve by adopting BIS. According to Tornatzky and Fleischer (1990), the constantly growing number of new technology solutions on the market can cause various difficulties for the enterprise to understand its need and recognize the proper technological solution for their business. As well as compatibility, BIS observability and trialability are positively related to the BIS adoption (Acheampong and Moyaid 2016). Accordingly, this study proposes that SMEs are more likely to adopt BIS if the key personnel can easily understand and recognize BIS adoption values and benefits within their business. Therefore, the fourth hypothesis is proposed: Hypothesis 4. Key personnel ability to assess the BIS benefits has a positive impact on SMEs' adoption of BIS.
Organizational Context
The organizational dimension observes formal and informal relationships among employees and the existing organizational mechanisms within the enterprise (Tornatzky and Fleischer 1990). According to previous research in the field of new technologies' adoption within enterprises, top management organizational support (Ahmad and Siraj 2018), organizational readiness (Acheampong and Moyaid 2016), and data management (Puklavec et al. 2014) are identified as significant determinants within the organizational context, for the adoption process. Acheampong and Moyaid (2016) define top-management organizational support as its active involvement during the overall BIS adoption process. Wang and Wang (2016) found a high level of top-management support within the enterprises that adopted KMS. Similarly, Yeoh (2011) noted that highly dedicated management, as well as sponsorship within the enterprise, will cause a higher rate of BIS adoption. In light of the mentioned suggestions, this study hypothesized that SMEs in which top management actively participate in the overall adoption process will be more likely to adopt BIS. Hence, the fifth hypothesis of this paper is proposed as follows: Hypothesis 5. Top management organizational support has a positive impact on SMEs' adoption of BIS.
Organizational readiness can be defined as a sufficient amount of resources (e.g., human, financial, technical, knowledge, skills, etc.) that an enterprise possesses (Puklavec et al. 2014). Boonsiritomachai et al. (2016) found that the availability of organizational resources is a significant determinant when observing the BIS adoption process within the enterprise. Similarly, Acheampong and Moyaid (2016) identified organizational readiness as a determinant that has a significant impact on BIS adoption. Hence, this study contends that SMEs are less likely to adopt BIS if they do not have sufficient resources. Accordingly, the sixth hypothesis of this study is proposed: Hypothesis 6. Organizational readiness has a positive impact on SMEs' adoption of BIS.
According to Olexova (2014), DW and data marts are two crucial elements of BIS. Thus, data management is crucial for the data-dependent technologies adoption such as BIS, since it prevents subjective data definition, defines data dictionary standards, enables data security, ensures enterprise integrity and credibility, and manages database modeling and operation (Ramamurthy et al. 2008). Puklavec et al. (2014) treated the organizational data environment as an influential determinant on the BIS adoption. For example, Olexova (2014) stated that the data dictionary establishment was crucial for the further conduction of the BIS adoption process. Because of that, it is expected that SMEs with better data management as a backbone for decision-making processes will be more likely to adopt BIS. Therefore, the seventh hypothesis of this study is proposed as follows: Hypothesis 7. Data management as a backbone for decision-making processes has a positive impact on SMEs' adoption of BIS.
Environmental Context
Environmental context includes everything within the business environment that surrounds the business operating field of the enterprise (Tornatzky and Fleischer 1990). Inspired by today's competitive business market, as well as a growing number of new technologies and their providers, competitive pressure (Acheampong and Moyaid 2016) and BIS vendors' quality (Boonsiritomachai et al. 2016) are identified as crucial determinants within this TOE dimension. Competitive pressure is defined as the competitiveness intensity present on the market in which enterprise participates (Tornatzky and Fleischer 1990). Boonsiritomachai et al. (2016) found competitive pressure to be a determinant that has a significant impact on the BIS. Similarly, Wang and Wang (2016) also proved that higher competitive pressure leads to a higher likelihood of KMS adoption. Consequently, SMEs that have greater competitive pressure in their market are more likely to adopt the BIS. Therefore, the next, i.e., the eighth, hypothesis in a row is proposed as follows: Hypothesis 8. Competitive pressure has a positive impact on SMEs' adoption of BIS.
BIS vendors' quality includes timely adoption project attainment, the post-adoption vendor support, as well as their marketing activities (Puklavec et al. 2014;Boonsiritomachai et al. 2016). According to Puklavec et al. (2014), if enterprises are familiar and satisfied with the work and the competencies of their service providers, it is very likely that they will adopt the BIS. Similarly, Boonsiritomachai et al. (2016) noted that BIS vendors' quality significantly influences the BIS adoption. Therefore, it is expected that SMEs which are satisfied with the way of how their potential vendors complete and support their projects will be more likely to adopt BIS. Thus, ninth, and final, hypothesis of this study is proposed: Hypothesis 9. BIS vendors' quality has a positive impact on SMEs' adoption of BIS.
Sample Description
We tested our model on a sample of Croatian SMEs, whose relevance for the Croatian economy is noticeable through several indicators: employment rate, productivity rate, total income and export, and financial efficiency (CEPOR 2019). According to the Accounting Act (2020) and The Small Business Development Promotion Act (2020), Croatian SMEs are enterprises that do not exceed the boundary indicators in two of the following three conditions: total assets of EUR 4,000,000.00, turnover of EUR 8,000,000.00, and an average of 50 workers during the business year (small enterprises); and total assets of EUR 20,000,000.00, turnover of EUR 40,000,000.00, and an average of 250 workers during the business year (medium-sized enterprises). SMEs in the Republic of Croatia occupy 99.7% of the total structure of enterprises of all sizes (CEPOR 2019) and operate mostly in the tertiary (service) sector (Hrvatska.eu 2017). The number of employees in SMEs is constantly growing. For example, the employment rate in Croatian small enterprises increased by 3.8% and by 6.1% in medium-sized enterprises in 2018, compared to 2017 (CEPOR 2019). In the Republic of Croatia, SMEs are showing a positive trend in innovating their businesses (CEPOR 2019). In 2018, 13.8% of them offered products that were new to everyone, 17.9% offered products that were innovative only for some businesses on the market, and 68.3% offered products that were not new to any business, which are slightly better results than the year before (CEPOR 2019).
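For readers who want to make the two-of-three size criterion quoted above concrete, a minimal sketch is given below. The thresholds are those cited from the Accounting Act; the function name, the example firm, and the omission of micro-enterprise limits (not given in the text) are our own illustrative assumptions.

```python
def size_class(total_assets_eur: float, turnover_eur: float, employees: int) -> str:
    """Classify an enterprise with the two-of-three criterion described above:
    it falls into a size class when it does not exceed the boundary indicators
    in at least two of the three conditions (assets, turnover, employees)."""
    def within(limits):
        assets_lim, turnover_lim, emp_lim = limits
        met = ((total_assets_eur <= assets_lim)
               + (turnover_eur <= turnover_lim)
               + (employees <= emp_lim))
        return met >= 2

    if within((4_000_000, 8_000_000, 50)):        # small-enterprise limits
        return "small"
    if within((20_000_000, 40_000_000, 250)):     # medium-enterprise limits
        return "medium"
    return "large"

# Example: EUR 3 M assets, EUR 12 M turnover, 45 employees -> two of the three
# small-enterprise limits are not exceeded, so the firm counts as "small".
print(size_class(3_000_000, 12_000_000, 45))
```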
To test the empirical model, we used the survey research approach. To gather as many enterprises as possible to participate in the research, we used the snowball approach as a sampling method. The chosen sampling method is mostly used in the studies in which the aiming sample is not easy to approach. It is also advisable to use this sampling method when investigating perceptions of respondents via survey, as is the case in this research (Dragan and Isaic-Maniu 2012). The snowball method is also known as the chain method since existing respondents are finding new respondents, mostly their contacts, which are suitable for the research sample that is difficult to identify (Naderifar et al. 2017). Respectively, the chosen sampling method involves giving recommendations for contacts of potential members of the selected research sample, with the purpose to achieve the sample growth as a snowball. It is most often used in hidden populations, which is the case with this research in which BIS is rarely used in SMEs in the Republic of Croatia. In this research, experts within the BIS field were asked to recommend their contacts who would participate in the research. This method was carried out until the moment when the recommendations of the respondents started to repeat (Etikan et al. 2016). Using this method, a total of 110 responses were collected, of which 100 answers were valid.
The questionnaires were sent online, directly to the respondents via email or through social networks (LinkedIn). The respondents are top management, IT professionals, and employees that understand the functionalities of BIS within the small-and medium-sized enterprises in the Republic of Croatia. Therefore, the sample includes companies engaged in service activities, financial activities, wholesale and retail trade, etc. (Biere 2010), whose business success is assumed to be mostly based on well-organized and used information. The sample consisted of 37% of small-sized and 38% of medium-sized enterprises.
The largest number of enterprises (49%) within the sample is engaged in information and communication activities. Enterprises participating in wholesale and retail trade comprise 15% of the research sample. About 10% of enterprises are engaged in professional, scientific, and technical activities; 6% are engaged in financial and insurance activities; and 3% provide transport and storage services. Merely 1% of enterprises are included in providing accommodation, food preparation, serving, administrative and support services, and educational activities, while a total of 7% of enterprises did not want to classify their businesses. Most enterprises answered that they employ around 11-250 per year (79%), while 58% of enterprises answered that they earn between EUR 0.7 million and EUR 40 million of average annual income. In this research, enterprises operating on the market from 11-50 years (70%) mostly participated. They are followed by 16% of enterprises conducting their businesses from 6-10 years and 12% operating businesses for over 50 and less than 5 years. Only 2% of respondents did not answer the question about their enterprises' operation years. Most enterprises operate in the European Union (45%), while 38% operate in national markets of several or all countries in the region. An equal number of respondents of 24% are employed in enterprises operating in a part of the market within Bosnia and Herzegovina, Croatia, or Serbia and in the entire area of the national market of Bosnia and Herzegovina, Croatia, or Serbia. The enterprises that participated in the survey are domestic, mostly private companies (72%). They are followed by foreign privately owned companies (19%). Only 4% of enterprises are domestic enterprises, classified as semiprivate and semi-state, while 3% of enterprises are predominantly state-owned. When asked about the ownership structure of the company, only 2% of respondents did not answer. Most enterprises do not operate as part of a larger multinational company (77%), while a small number participate as a subsidiary of a multinational company (21%). Only 2% did not answer this question.
Regarding the respondents' education level, most of them have a graduate degree (74%), followed by those who have an MBA or PhD (15%). Just 4% of respondents have an undergraduate degree, while only 2% of them have completed high school. To this question, 5% of respondents did not want to give their answer. Among the respondents, the largest number are IT experts (28%), top managers (24%), and department managers (15%). A total of 4% of BI specialists participated in the study, while only 2% were supervisors. Moreover, 20% of respondents declared themselves as other occupations, while 7% of respondents did not answer this question. Most respondents work in their current workplace for two to three years (26%), more than 10 (21%), or less than 1 (19%). Almost the same percentage of respondents work in their current workplace from four to five years (15%) and five to ten (14%), while only 5% of respondents did not answer this question.
Research Instrument
The dependent variable is defined as the BIS's adoption with enterprises divided into two groups: BIS is not yet implemented in the enterprise, and BIS is fully implemented in the enterprise (Tornatzky and Fleischer 1990;Puklavec et al. 2018).
Independent variables comprise the determinants within the TOE framework that could influence the adoption process in Croatian SMEs. (i) The technology dimension is perception of the comparative advantage of BIS, BIS's complexity, BIS's compatibility with the enterprise information system, and key personnel ability to assess the BIS benefits. (ii) The organizational dimension refers to top-management organizational support, organizational readiness, and data management as a backbone for decision-making processes. (iii) The environmental dimension is the BIS vendors' quality and competitive pressure. Table 1 presents the research instrument in detail.

Table 1. Research instrument.

Perception of the comparative advantage of BIS (TD1)
TD1_1 Using BIS allows you to avoid unnecessary costs and time savings.
TD1_2 The cost-effectiveness of BIS is higher than that of other decision support systems (software).
TD1_3 The use of BIS enables better decision-making.
TD1_4 The use of BIS enables faster execution of actions and decision-making.
TD1_5 Using BIS makes it easier to perform business tasks.
TD1_6 The use of BIS allows greater control over the business.

Perception of the BIS's complexity (TD2)
TD2_1 The process of getting acquainted with the work of the BIS is complex.
TD2_2 The process of introducing the BIS is complex.
TD2_3 Using BIS is complex and demanding for users.
TD2_4 It is difficult to learn how to work with BIS.
TD2_5 Resistance to the use of BIS is a consequence of the complexity of working with BIS.

BIS's compatibility with enterprise information system (TD3)
TD3_1 The use of BIS should be compatible with existing business values and beliefs embedded in the enterprise information system (the objectives and tasks of the system support the mission, vision, and goals of the business).
TD3_2 The changes brought about by the BIS adoption should be compatible with existing business practices executed by the enterprise information system (e.g., processes, procedures, organizational structure, and strategic goals).
TD3_3 BIS should be compatible with existing enterprise technology infrastructure.
TD3_4 BIS should be fully integrated with enterprise information systems, software tools, and software solutions.

Key personnel ability to assess the BIS benefits (TD4)
TD4_1 Key personnel are aware of the expected results of the BIS adoption.
TD4_2 Key personnel understand that the benefits of implementing BIS are clear and easily measurable.
TD4_3 Key personnel are aware of the existence of the BIS in the software market.
TD4_4 Key personnel have the opportunity to see BIS being used in other enterprises.

Top management organizational support (OD1)
OD1_1 Top management supports the implementation and adoption of the BIS.
OD1_2 Top management actively participates in establishing the vision and shaping the strategy of BIS adoption.
OD1_3 Top management is ready to take the possible risks of adoption and use of BIS.
OD1_4 There is a person at the management level who strongly advocates the implementation of the BIS (warns of the importance of implementing the system).
OD1_5 There is a person at the management level who shows great enthusiasm in initiating the BIS adoption (motivates to adopt the system).
OD1_6 There are one or more people at the management level who constantly emphasize the benefits of BIS.

Organizational readiness (OD2)
OD2_1 Managers and employees know how to use BIS for business support.
OD2_2 Managers and employees understand well how to use BIS in business.
OD2_3 We have enough technical, managerial, and other skills required to adopt the BIS.
OD2_4 We have enough financial, technological, and other resources required to adopt the BIS.

Data management as a backbone for decision-making processes (OD3)
OD3_1 The data we currently use in our business is reliable.
OD3_2 There is an agreement on clearly defined business rules and a set of data definitions.
OD3_3 The search for and use of data/information to support decision-making is encouraged.
OD3_4 Decision-making processes involving quantitative/numerical analysis are encouraged.

Competitive pressure (ED1)
ED1_1 The competition degree in our business brought the pressure that has influenced the decision on the BIS adoption necessity.
ED1_2 Our enterprise had to start using BIS to maintain its competitive advantage in the market.
ED1_3 I am aware that competitors already use BIS in their business.
ED1_4 For our enterprise, it was strategically necessary to start with BIS usage.

BIS vendors' quality (ED2)
ED2_1 The reputation of the software manufacturer and/or provider is important when choosing a BIS.
ED2_2 The technological competencies of software providers are essential when choosing a BIS.
ED2_3 The ability of the BIS producer and/or provider to successfully conduct the BIS adoption project is important to us while choosing BIS.
ED2_4 It is important to us that the BIS manufacturer and/or provider support BIS use upon completion of the adoption project.
ED2_5 Software manufacturers and/or providers promote BIS by offering free hours of education.
Statistical Methods
Statistical analysis was conducted using the following procedure. In the first step, factor analysis was performed. The influence of the determinant statements within each observed variable is reported through the obtained eigenvalue, the percentage of variance, and the cumulative percentage of variance. A factor structure matrix was then produced after Varimax rotation. This process confirmed the construct validity of the various factors and led to the identification of 9 variable groups (4 variable groups comprise the technology dimension, 3 the organizational dimension, and 2 the environmental dimension).
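As an illustration of this first step, the sketch below runs an exploratory factor analysis with Varimax rotation and applies the 0.5 loading cutoff mentioned later in the text. The use of scikit-learn, the CSV file name, and the item layout are our assumptions, not the software actually used by the authors.

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Assumed input: one column per questionnaire statement (TD1_1 ... ED2_5),
# responses coded on a Likert scale, one row per respondent.
items = pd.read_csv("toe_items.csv")

# Nine factors, matching the nine variable groups of the TOE instrument,
# rotated with Varimax as described in the text.
fa = FactorAnalysis(n_components=9, rotation="varimax", random_state=0).fit(items)

loadings = pd.DataFrame(fa.components_.T, index=items.columns,
                        columns=[f"F{i + 1}" for i in range(9)])

# Keep only items whose highest absolute loading exceeds the 0.5 cutoff.
retained = loadings[loadings.abs().max(axis=1) > 0.5]
print(retained.round(2))
```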
In the second step, Cronbach's alpha was calculated for each of the 9 variable groups within the technology, organization, and environment dimensions. Since the Cronbach's alpha values confirmed the reliability of the research instrument, average values were calculated for each of the 9 variable groups (4 summary variables comprise the technology dimension, 3 the organizational dimension, and 2 the environmental dimension).
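A compact way to reproduce this reliability check is sketched below: it computes Cronbach's alpha from the item variances and then averages the items of each reliable group into the summary variable used later in the regression. The grouping dictionary and file name are illustrative assumptions; only two groups are shown for brevity.

```python
import pandas as pd

def cronbach_alpha(item_scores: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = item_scores.shape[1]
    item_var = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical grouping of items into two of the nine TOE variable groups.
groups = {"OD2": ["OD2_1", "OD2_2", "OD2_3", "OD2_4"],
          "ED1": ["ED1_1", "ED1_2", "ED1_3", "ED1_4"]}

items = pd.read_csv("toe_items.csv")
summary = pd.DataFrame()
for name, cols in groups.items():
    alpha = cronbach_alpha(items[cols])
    if alpha >= 0.7:                                 # adequacy limit used in the study
        summary[name] = items[cols].mean(axis=1)     # average value per respondent
    print(f"{name}: alpha = {alpha:.2f}")
```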
Finally, the logistic regression model was developed, with the binary dependent variable measuring BIS adoption and the extracted factors reflecting the TOE dimensions as the independent variables.
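A minimal sketch of this final modeling step is given below. The use of statsmodels, the file name, and the column names for the nine averaged TOE variables and the binary adoption indicator are our assumptions about how the data could be organized, not the authors' actual setup.

```python
import pandas as pd
import statsmodels.api as sm

# Assumed layout: one row per SME, the nine averaged TOE variables (TD1 ... ED2)
# and the binary indicator ABIS (1 = BIS fully implemented, 0 = not yet implemented).
data = pd.read_csv("toe_summary.csv")

X = sm.add_constant(data[["TD1", "TD2", "TD3", "TD4",
                          "OD1", "OD2", "OD3", "ED1", "ED2"]])
model = sm.Logit(data["ABIS"], X).fit(disp=False)
print(model.summary())
```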
Validity and Reliability Analysis
The authors performed factor analysis to identify the selected determinants within each dimension and to check the applicability of the selected framework for BIS adoption in Croatian SMEs. The results obtained by the Kaiser-Meyer-Olkin test (>0.5) and Bartlett's sphericity test (statistically significant at 1%) indicated the adequacy of the data for factor analysis (Ferrando and Lorenzo-Seva 2018). Table 2 shows the factor structure matrix after Varimax factor rotation for the variables of all three dimensions. The results of the conducted factor analysis imply nine factors, as proposed by the initial research instrument. However, only those variables with factor loadings higher than the cutoff value of 0.5 were retained in the further analysis. Some of the variables were omitted from the analysis, such as TD2_2. The statements within the questionnaire were translated from English into Croatian. Accordingly, our version of the questionnaire was tested in advance, thus assuring the questions' clarity and comprehensibility. Cronbach alpha coefficients were calculated to provide a reliability analysis of the measurement scales. As Zeller (2005) proposes, for this study we used a Cronbach alpha coefficient limit value of 0.7, which indicates the adequacy of the measurement scale used.
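The adequacy checks reported here (KMO above 0.5 and a significant Bartlett's test) can be approximated directly from the item correlation matrix, as in the sketch below. These are the textbook formulas rather than the exact routines used by the authors, and the input data frame is an assumed stand-in for the survey data.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2

def bartlett_sphericity(items: pd.DataFrame):
    """Bartlett's test of sphericity: H0 = the correlation matrix is an identity matrix."""
    n, p = items.shape
    corr = items.corr().to_numpy()
    statistic = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    dof = p * (p - 1) / 2
    return statistic, chi2.sf(statistic, dof)

def kmo(items: pd.DataFrame) -> float:
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    corr = items.corr().to_numpy()
    inv = np.linalg.inv(corr)
    # Partial correlations obtained from the inverse of the correlation matrix.
    partial = -inv / np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    np.fill_diagonal(corr, 0)
    np.fill_diagonal(partial, 0)
    return (corr ** 2).sum() / ((corr ** 2).sum() + (partial ** 2).sum())

items = pd.read_csv("toe_items.csv")
stat, p_value = bartlett_sphericity(items)
print(f"Bartlett chi2 = {stat:.1f}, p = {p_value:.4f}, KMO = {kmo(items):.2f}")
```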
Regression Analysis
Since the aim of this study requires modeling the predictors of a binary dependent variable, the authors used logistic regression. Many authors have used logistic regression in researching the adoption of different technologies (Awa et al. 2017; Suvedi et al. 2017), such as e-procurement systems (Soares-Aguiar and Palma-dos-Reis 2008), ERP systems (Pan and Jang 2008), cloud computing (Senyo et al. 2016), and e-commerce (Walker et al. 2016). The logistic regression model with the binary dependent variable ABIS was used for testing the impact of the technological, organizational, and environmental dimensions on the adoption of BIS in the sample of Croatian SMEs. According to the results shown in Table 3, there is no problematic correlation between the variables, since no correlation coefficient surpasses the limit value of 0.9 (Hair et al. 2010). Therefore, it is concluded that there is no problem of multicollinearity in the model. Table 4 presents the goodness-of-fit results of the research model. The −2 log-likelihood value equals 103.710, and the model is statistically significant at 1%. The value of the Cox and Snell R Square is 0.281, and the Nagelkerke R Square is 0.377. According to Hair et al. (2010), the model fit is considered better with a lower −2 log-likelihood value and higher Cox and Snell R2 and Nagelkerke R2 values. Therefore, given the obtained results, it can be concluded that the model fit has significantly improved over the null model. Table 5 presents the prediction accuracy of the research model, which correctly classified 62.8% of BIS non-adopters and 78.9% of BIS adopters, with 72% overall predictive accuracy. Following all the obtained results of the conducted logistic regression, it is concluded that the research model fits the research data. The regression model with the dependent variable ABIS is presented in Table 6. The following variables have a significant and positive influence on the dependent variable ABIS at the 1% and 5% significance levels: OD2_Organizational readiness, OD3_Data management as a backbone for decision-making processes, ED1_Competitive pressure, and ED2_BIS vendors' quality. The variable TD3_BIS's compatibility with enterprise information system positively influences the dependent variable ABIS at the 10% significance level. Finally, four independent variables, namely TD1_Perception of the comparative advantage of BIS, TD2_Perception of BIS's complexity, TD4_Key personnel ability to assess the BIS benefits, and OD1_Top management organizational support, do not have a significant impact on the observed dependent variable ABIS. Source: Authors' work, 2019. Note: *** statistically significant at 1%, ** 5%, and * 10%.
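For readers who want to trace where the goodness-of-fit figures come from, the sketch below recomputes the −2 log-likelihood, Cox and Snell R², Nagelkerke R², and overall classification accuracy from a fitted model. It assumes a statsmodels Logit result object like the one in the earlier sketch; this is our choice of tooling, not necessarily the software used by the authors.

```python
import numpy as np

def logit_fit_statistics(result, y, threshold=0.5):
    """Recompute -2LL, Cox & Snell R2, Nagelkerke R2 and overall classification
    accuracy from a fitted statsmodels Logit result (assumed tooling)."""
    n = len(y)
    ll_model, ll_null = result.llf, result.llnull
    minus_2ll = -2 * ll_model
    cox_snell = 1 - np.exp((2 / n) * (ll_null - ll_model))
    nagelkerke = cox_snell / (1 - np.exp((2 / n) * ll_null))
    predicted = (result.predict() >= threshold).astype(int)
    accuracy = (predicted == np.asarray(y)).mean()
    return minus_2ll, cox_snell, nagelkerke, accuracy

# Example (using `model` and `data` from the previous sketch):
# print(logit_fit_statistics(model, data["ABIS"]))
```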
Discussion
As has been assumed, the risks of enterprises, and especially SMEs, are their limited resources, human, material, and financial. The analysis of the obtained results concludes that Croatian SMEs are dependent on their resources and that their rational use is crucial to them. As Bijker and Hart (2013) stated, a common challenge in using BIS within the enterprise is using it correctly in a purpose to achieve its business value, which demands proper training and educational programs for employees, which consequently can also lead to financial costs. Hence, the risks of insufficient financial resources, lack of professional staff, infrastructure, and other technological resources are just some of the risks that Croatian SMEs want to minimize when adopting new technology such as BIS. Given the nature of systems such as the BIS, which is to perform actions and analytical processes through the processing of internal and external data, Croatian SMEs have recognized the characteristics of the existing data as one of the potential risks that could slow down the BIS adopting process. According to Bijker and Hart (2013), a common challenge in BIS usage is inadequate data quality, improper data level and format, and unavailability of timely and accurate data. Therefore, this result shows that Croatian SMEs believe that, with better quality data and quantitative data processing processes, they can provide a safer path to the BIS's successful adoption in the enterprise's operations.
Contrarily, Croatian SMEs do not perceive organizational support as a potential threat for successful BIS adoption project conduction within their enterprises. The explanation for such results can rely on the fact that the top management of SMEs, in most cases, consists of owners of the enterprises at the same time (Tan 2010). Given the general knowledge by which information is currently the main resource for achieving a competitive advantage on the market, every owner strives to provide their enterprise with quality information and appropriate innovative IT, such as BIS, that will create new business knowledge. Accordingly, this result could potentially imply a high level of top-management innovativeness and IT knowledge within Croatian SMEs.
Within the environmental dimension, the obtained results show how SMEs see the competition and vendors' quality as potential risks that must be dealt with to succeed in the BIS adoption process. This result does not surprise us, since SMEs strive to achieve sustainability in a turbulent market and enhance their competitive advantage in their industry, so they want to minimize the potential risk of facing stronger competition than themselves. Similarly, Ahmad et al. (2020), in their research, emphasize the importance of observing competitive pressure, as well as market trends, as determinants that significantly impact the successful completion of a BIS adoption project. Moreover, a competitive advantage can be easily achieved by effectively using BIS. Since SMEs are often challenged with a lack of technical personnel, as well as financial, technical, and time resources, it is logical that they aspire to achieve a good deal with BIS suppliers. That includes timely and cost-effective project completion, as well as vendor support after the adoption project is finished (Stjepić 2020). The risks come from the fact that BIS project adoption can be protracted if the requirements for project implementation are not clearly defined between the enterprise and the software provider (Olexova 2014). That can be explained by the strong reliance of enterprises on IT vendors as their strategic partners in the whole BIS project, as Bijker and Hart (2013) argued in their work. Consequently, the mentioned reliance increases the risk for enterprises of experiencing adoption project failure, caused by insufficient vendor skills or knowledge to complete the project on time and within budget. In that case, enterprises could suffer additional time or financial costs that they could avoid by developing their own in-house IT experts, as Bijker and Hart (2013) suggest.
Finally, within the technology dimension, Croatian SMEs recognize only BIS's compatibility with the enterprise information system as a potential risk that could interfere with the safe path to a successful BIS adoption project. This result implies a lack of technology infrastructure, as well as a possible lack of BIS integration with existing systems, tools, software, business processes, and values within the Croatian SMEs. In the same way, Ahmad et al. (2020) recognize BIS's compatibility with the enterprise information system as a significant determinant that impacts BIS adoption project success. Moreover, they stress the fact that BIS incompatibility with legacy systems, as well as with existing business procedures, could lead to BIS adoption project failure.
On the other hand, SMEs within the tertiary sector in the Republic of Croatia do not perceive BIS complexity, the perception of the comparative advantage of BIS, and the key personnel ability to assess the BIS benefits as potential risks to the success of BIS adoption. This result can be supported by the fact that the tertiary sector is based on working with a large amount of data and information, and therefore the necessity of applying BIS in enterprises in this sector develops the motivation and desire of employees to master the application of its functionalities in their work as soon as possible. Moreover, various cloud and mobile solutions for BIS are contributing to BIS's ease of use (Stjepić 2020). A low level of perceived complexity by enterprises can be an indicator of highly educated and skilled employees (Ismail and Mokhtar 2016). Therefore, these results can point to IT-skilled and educated employees within the Croatian SMEs adopting BIS. Moreover, this result can be related to the descriptive analysis, which indicates that most participants in this research finished graduate studies (74%) and work as IT experts (28%) or top managers (24%). The reason why SMEs do not perceive the determinant of BIS's comparative advantage as a risk may lie in the fact that they do not see it as a financial load, as they generate a smaller amount of data and so do not use BIS as a stand-alone solution. This result can be confirmed by similar research on technological innovation adoption that also did not recognize the perception of the comparative advantage of BIS as a potential threat to the success of a technology adoption project (e.g., Wang et al. 2010). SMEs often do not use BIS as a stand-alone solution but as part of some larger enterprise system (Puklavec et al. 2018). That could be the reason why they do not perceive the clear presentation of BIS's benefits as a crucial determinant for its adoption project success, as they do not perceive BIS as a separate tool. This result can be confirmed by the results of Olexova (2014), who claims that the possibility of trying out the use of BIS and gaining insight into its benefits before its adoption does not necessarily have a significant impact on its use.
Conclusions
The conducted study contributes to the BIS literature, as well as the technological innovation adoption literature in the context of SMEs. The main goal of this paper was to identify the potential risks of the BIS adoption project in Croatian SMEs by investigating success factors within three TOE dimensions.
The results of the research revealed that Croatian SMEs consider organizational risks as most important when it comes to BIS adoption, such as insufficient human, technical, and financial resources, as well as the quality of data management. Given the obtained results, Croatian SMEs also recognized potential risks for conducting BIS adoption within their competition and within BIS vendors' support. Within the technology dimension, SMEs recognized BIS's compatibility with the enterprise information system as a potential problem for enterprises that want to engage in the BIS adoption process.
On the other hand, Croatian SMEs do not experience a lack of organizational support, since this determinant is not identified as a potential threat to BIS adoption. The perception of BIS's complexity is another determinant that is not recognized as a potential threat to the adoption process, since today's BIS solutions are available in the form of cloud or mobile solutions and are therefore becoming more accessible and easier to use for SMEs. Moreover, BIS software solutions are mostly used as an integrated module within some larger enterprise system, so SMEs do not perceive their competitive advantage compared to other technological solutions. Hence, the perception of the comparative advantage of BIS is a determinant that has also not been revealed as a potential risk for the BIS adoption project's success. Furthermore, the risk of BIS adoption does not arise from the lack of a clear and understandable presentation of benefits to company employees, whose need for the presentation of its advantages was considered one of the factors of successful BIS adoption (Rostek 2013). The explanation for such a result may be in today's wide BIS availability and its use as an integrated module within another system.
Even though this paper expands the existing literature in the scientific field of BIS adoption in small and medium enterprises, this research has certain limitations that should be noted. First, this study is conducted on a small sample that only encompasses the tertiary sector. Second, this research relies only on quantitative methods. Therefore, it is advisable for further research to conduct this study on a larger sample or within SMEs that operate in the primary or secondary sectors. Likewise, the recommendation for future research is to complement this research with a qualitative analysis, to achieve a deeper understanding of the obtained results and more detailed proof of the relationships between the established variables. | 2021-05-10T00:03:18.759Z | 2021-02-02T00:00:00.000 | {
"year": 2021,
"sha1": "147bfa70c77931c33b2b5fc69c57ab20fb2e097f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1911-8074/14/2/58/pdf?version=1612334333",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "14863f5b4e3a836ba7e661e64894e231d2cb49c8",
"s2fieldsofstudy": [
"Business",
"Computer Science"
],
"extfieldsofstudy": [
"Business"
]
} |
268697349 | pes2o/s2orc | v3-fos-license | Numerical Analysis of Thermal Flow Dynamics of Arc Plasma and Molten Pool in Hollow Cathode Arc Welding with Oxygen Content
: The mechanism of the pulsed hollow cathode arc welding (HCAW) process was revealed using a fully coupled model with a hollow cathode. We solved the governing equations with the Marangoni effect to study the dynamic behaviors of a molten pool with a square pulsing current (200~400 A, 900 Hz) and varying O2 content; the dynamics of the arc plasma and the weld pool in the HCAW process were investigated quantitatively. The results show that the intensity of the arc plasma was weakened significantly more by the design of the hollow cathode in HCAW than in GTAW with a solid cathode. We could obtain a stable molten pool even with a large pulsing current range (200–400 A) at higher frequencies. The flow dynamics of the molten pool were mainly dominated by the Marangoni effect with varying oxygen content, and penetration could be promoted by increasing the O2 content in HCAW.
Introduction
In order to obtain a stable welding arc and reduce the humping effect, which leads to large deformation in higher-current arc welding processes, hollow cathode arc welding (HCAW) was first developed for low-pressure applications in the 1960s and proved to be suitable for welding in space. Nerovnyi et al. [1] proved that HCAW can be successfully used for a wide variety of processes in vacuum. Nishikawa et al. [2] systematically investigated the fundamental characteristics of gas hollow tungsten arc welding, such as discharge characteristics, plasma properties, and the melting process under a simulated space environment. They reported that the anode size was greater than that for conventional gas tungsten arc welding.
Shobako et al. [3] used the split anode method and probe method to measure the current density distribution and arc floating potential. They confirmed that the heat flux distribution of the HCAW arc was flat. Cho et al. [4] applied the Abel inversion method to calculate the temperature distribution of arc plasma in gas hollow tungsten arc welding. They demonstrated that HCAW was an effective method to control the heat flux in the center of the arc plasma for various gas flow rates. Nishikawa et al. [5] reported that the current distribution on the workpiece surface for HCAW was extremely diffused, and the weld profile was characterized by shallow and wide penetration. They demonstrated that the gas hollow tungsten arc was normally an even heat source. In order to overcome the drawbacks of the flat heat flux of HCAW, many efforts have been made by researchers to increase the energy density of arc plasma in HCAW, indicating that the arc characteristics can be flexibly adjusted through the HCAW process. Cho et al. [4] studied the effect of gas flow rates on arc characteristics in HCAW using experimental and CFD modeling methods and reported that a hollow cathode with higher gas flow rates would increase the energy flux intensity of arc plasma so that productivity is improved. Chen et al. [6] and Jiang et al. [7] illustrated that the arc energy density could be modified with a hollow cathode with negative pressure. They observed that the innovative HCAW process with negative pressure in hollow tungsten can decrease the arc pressure but increases the arc energy density to benefit productivity.
However, the industrial applications of HCAW are limited since there is still a lack of complete understanding regarding the dynamics of the arc and the molten pool in HCAW; only Tashiro et al. [8] established a numerical model to analyze the characteristics of HCAW arc plasma, indicating that the maximum velocity (50 m/s) and arc temperature (11,000 K) are 25% and 60% of those in traditional GTAW with a solid cathode, while the influence of the argon arc on the weld pool was not discussed. Saifutdinov et al. [9] established a numerical simulation model to analyze the main characteristics of DC plasma torches and, in particular, the self-consistent determination of the distribution of temperature fields in electrodes. Margarita et al. [10] demonstrated that the self-consistent coupling of a cathode and LTE plasma submodel is advantageous in the modeling of plasma spray torches. Generally speaking, most of the previous research on the numerical modeling of arcs focuses on the dynamic behavior of the molten pool subjected to the Marangoni effect, including [1-3,5,11-15]; the dynamics of molten pools with gradients of O2 have rarely been mentioned.
In our work, an innovative approach to control the flow dynamics of the weld pool was proposed using a mixture gas of Ar and O2 as the shielding gas, and the flow rate ratio was controlled by inserting O2 in the hollow cathode; the mechanism of the pulsed HCAW process with the mixture gas of argon and O2 was revealed by a fully coupled model with a hollow cathode. We solved the governing equations with the Marangoni effect to study the dynamic behaviors of a molten pool with a square pulsing current (200~400 A, 900 Hz) and varying O2 content; the characteristics of the arc plasma and the dynamics of the weld pool in HCAW were investigated. COMSOL 5.4 was used to solve the partial differential equations. The model rests on the following assumptions:
1. The shielding gas is in the Local Thermodynamic Equilibrium (LTE) state;
2. The multiphase fluids (gas, liquid) are incompressible with a laminar flow;
3. The Marangoni effect of the Fe-O alloy is determined by both temperature and oxygen content [2].
As shown in Figure 1a, a 19 mm long hollow cathode with an inner radius of 1.5 mm and an outer radius of 3 mm was connected to a current inlet B; the square current waveform ranged from 200 A to 400 A at 900 Hz with an EN ratio of 0; A, the Q235 low-carbon steel (O ~0.1 wt%), was connected to the ground F. The flow rate of the mixture gas at inlet A1 was 0.01 L/min, while the flow rate of the mixture gas at inlet A2 (with an outer diameter of 5 mm) was 15 L/min; the properties of the hollow cathode were determined by temperature, and the properties of the mixture gas and steel were determined by temperature, level set function, and phase change coefficient. The Maxwell and heat transfer equations with Joule heat and radiation loss were calculated in the whole domain, while all the momentum sources (shear stress from the plasma jet, the surface tension with the Marangoni effect subjected to temperature and oxygen activity, Lorentz force, Darcy damping force, buoyancy force, and gravity) [3,8-10,16,17] were implemented in the momentum equations calculated in the fluid domain. The LTE-diffusion approximation [18] was used at both interfaces between the electrodes and the mixture gas, featuring level set delta functions (full derivation in Equation (S1)).
The physical properties of the mixture gas (Ar + O2), the hollow cathode, and the solid and liquid phases of the Q235 low-carbon steel, as well as the welding parameters, can be found in Table 1.
Boundary Conditions and Meshes
As listed in Table 2, A1 is the inlet of O2 and A2 is the inlet of Ar; the current inlet was set on the hollow cathode at surface B. We adopted two meshes, one for the stationary study and one for the transient study, to reduce the computational cost. The minimum mesh size of 1 × 10⁻⁴ m was located near the cathode tip (see Figure 1c).

Table 2. Boundary conditions for modeling in the HCAW process.
The numerical procedure of the MHD model in HCAW is similar to that of our previous numerical work on pulsed GTAW [19]; all the partial differential equations were solved using COMSOL 5.2. Before the transient study with a square pulsing current from 200 A to 400 A, a stationary study (I = 200 A) was computed to provide the initial values.
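As a purely illustrative aid (not part of the original study), the following minimal sketch shows one way to represent the time-dependent current boundary condition described above; the 200–400 A range and the 900 Hz frequency come from the text, whereas the 50% duty cycle, the function name and the sampling are assumptions made only for demonstration.

```python
import numpy as np

def square_pulse_current(t, i_base=200.0, i_peak=400.0, freq=900.0, duty=0.5):
    """Square-wave welding current I(t) in amperes.

    i_base, i_peak : base and peak current levels (A), taken from the text.
    freq           : pulsing frequency (Hz); 900 Hz in this work.
    duty           : fraction of each period spent at the peak level
                     (illustrative assumption; not reported in the text).
    """
    phase = (np.asarray(t, dtype=float) * freq) % 1.0   # position within the current period
    return np.where(phase < duty, i_peak, i_base)

# Example: sample one thermal cycle (1/900 s) of the waveform, e.g., to drive
# a transient solve that starts from a stationary solution at I = 200 A.
t = np.linspace(0.0, 1.0 / 900.0, 10)
print(square_pulse_current(t))
```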
Experimental Procedure
As can be seen in Figure 2, the HCAW experimental system included a GTAW power source (Aotai WSME-630II, Aotai, Jinan, China), a shielding gas system, and a water-cooling unit. A hollow tungsten electrode was positioned in the center of the welding arc. Bead-on-plate welding was used on low-carbon steel plates which were 200 × 200 × 10 mm in size. The base metal surface was cleaned with acetone before welding. The feasibility and accuracy of the fully coupled model based on MHD was preliminarily verified by the shape of the welding arc captured by a CCD camera (iX i-SPEED 700, iX Cameras, Rocheford, UK).
Results and Discussion
In this section, we first focus on the dynamics of the weld pool with the square pulsing current within one thermal cycle, and then a comparison of varying oxygen contents is made to reveal the mechanism of the flow dynamics of the molten pool.
The Magneto-Hydro Dynamic Characteristics of Arc Plasma and Weld Pool in HCAW (t = 0.1 s, O2 wt% = 0.1)
As shown in Figure 3a-c, when the input peak current was 200 A, the electric potential near the cathode was calculated to be −12 V, while the magnetic intensity (magnetic flux density ~0.014 T) was much smaller than that in GTAW (magnetic flux density ~0.1 T), as mentioned in our previous works [20,21], resulting in a smaller Lorentz force both in the arc plasma (1.5 × 10⁵ A/m² near the hollow tungsten tip) and in the molten pool (2.8 × 10⁴ A/m² at the side of the free surface). The maximum arc pressure was 143 Pa in the middle of the hollow cathode tip, while the maximum pressure in the molten pool reached 3521 Pa in the center of the free surface. This was unlike the flow pattern in GTAW, in which the maximum velocity is near the cathode tip. In HCAW, the maximum velocity of 24 m/s occurred between the hollow cathode tip and the middle of the arc column and reduced gradually from the arc column towards the side of the work piece, while the weld pool exhibited a typical inward-downward flow, and the velocity reached its maximum (0.68 m/s) at the center of the free surface, resulting in a finger-shaped weld pool profile with deep penetration. The maximum arc temperature was 14,275 K in the arc column, and the temperature gradient was very small.

Compared to the GTAW process, we could determine that the hollow cathode led to a less intensive arc flow and a more uniform temperature field in the arc plasma, even at a relatively high current of 200 A. This can be attributed to the less intensive electromagnetic field near the tip of the hollow cathode, which leads to a less intensive heat source and Lorentz force, so the heating, accelerating and compressive effects of the electromagnetic field on the arc plasma were weakened.
The Magneto-Hydro Dynamic Characteristics of HCAW in Ten Cycles (t = 0~0.01 s, O2 wt% = 0.1)
As shown in Figure 4, we took point A in the center of the hollow cathode, representing the arc column, and point B, representing the molten pool. Generally speaking, we could observe from the line graph of point A that the arc plasma exhibited a periodical expansion and contraction with the same frequency as the square current waveform, not only for the electromagnetic field (the magnetic flux density at point A fluctuated around 3 × 10⁻⁶ T; see Figure 4b), but also for the thermal flow (the velocity magnitude fluctuated around 12 m/s, while the temperature fluctuated around 15,000 K at point A; see Figure 4c).
At the peak current of 200 A, the arc column had a relatively uniform thermal flow, with a maximum temperature of 14,361 K and a maximum velocity of 22.8 m/s. As the current increased rapidly from 200 A to 300 A, the maximum arc velocity reached 38.5 m/s, the arc temperature rose to 15,000 K, and the arc shape experienced a relatively small expansion; meanwhile, as the current rose to 400 A, the maximum velocity reached 60.9 m/s and the expansion of the arc shape could be clearly identified, with a maximum temperature of 17,000 K. However, even at this high-current stage, the thermal flow of the weld pool still remained stable. As the current decreased slightly from 400 A to 360 A, the maximum velocity decreased to a very similar value of 53 m/s, and the arc shape exhibited a clear bell shape with a maximum temperature of 16,000 K. As the current decreased from 260 A to 200 A, the arc shape experienced a contraction process, whereas the temperature and velocity magnitudes in the arc plasma decreased from 15,000 K and 27.6 m/s to 14,000 K and 22.8 m/s, respectively. We could observe that the arc flow mainly started from the hollow cathode tip, concentrated in the center of the arc column, and then dispersed towards the side of the work piece. The concentration of O2 in the gas mixture of the arc plasma also experienced a periodical pulsing flow from the hollow cathode to the surface of the melt pool.
The flow in the weld pool remained stable without significant changes within one thermal cycle, given the short period of time (1/900 s). We could observe a typical finger-shaped profile with an inward-downward flow pattern, in which the melt alloy at the surface of the weld pool flowed inward, while the melt alloy in the center of the weld pool flowed downward, and the velocity reached its maximum (0.6 m/s) at the very center of the molten pool. From Figure 6, we can see that the melt pool at point B experienced a very small fluctuation of the electromagnetic fields, while the velocity was stable at 0.6 m/s and the temperature remained at approximately 2000 K.

Compared to the GTAW process, we can conclude that the intensity of the electromagnetic fields is relatively weakened by the design of the hollow cathode, leading to a low-intensity but stable arc flow. Together with the high-frequency pulsing current, the shear stress and the electromagnetic force cannot affect the melt pool flow directly in the high-frequency pulsed arc welding process, as the pulse duration is very small at the fluid-dynamics level, so the theory of electromagnetically induced stirring is not appropriate for describing the flow dynamics in the high-frequency arc welding process. This also indicates that we could treat the surface as a fixed flat surface in the HCAW process, and that the Marangoni effect is predominant in the flow dynamics of molten pools in HCAW.

As described in the previous section, the Marangoni effect dominated the dynamics of the molten pool in the HCAW process, so we assumed that the surface tension gradient subjected to oxygen content was the main factor in the weld pool dynamics with the Ar–O2 mixture gas. In this section, since our first priority is to give a clear understanding of how the oxygen content affects the melt flow, we made a comparison by setting a group of oxygen contents from 0.01 wt% to 0.1 wt%. The evolution of the molten pool from 0.01 s to 0.1 s with a DC square-pulsed current was studied. Since the oxygen content was relatively small compared to the Ar content, the influence of the oxygen content on the arc plasma was not studied in this paper.
With the oxygen content of 0.01 wt% (see Figure 5a), the surface tension gradient remained negative above the critical temperature (~2100 K); since the temperature was above 2100 K in most regions of the molten pool, ∂γ/∂T was negative in the molten pool, and a shallow weld pool profile induced by an outward flow pattern could be observed. The velocity reached its maximum at the side of the melt pool surface. As the oxygen content increased to 0.03 wt% (see Figure 5b), the critical temperature was 2390 K, and two vortexes occurred at the same time in the weld pool. An inward flow vortex induced by a positive ∂γ/∂T occurred at the edge of the melt pool. In contrast, another upward flow vortex was induced by a negative ∂γ/∂T in the overheated region of the molten pool. The velocity reached its maximum (0.4 m/s) at the intersection point of the two melt flows at the free surface, resulting in a finger-shaped weld pool profile with a flat bottom.
When the oxygen content was 0.06 wt% (see Figure 5c), the critical temperature was approximately 2590 K, and the surface tension gradient was positive below 2590 K in most regions of the molten pool. We could observe a deep finger-like weld pool profile induced by the inward flow pattern, since ∂γ/∂T was positive in the molten pool. The maximum velocity occurred in the center of the melt pool with a magnitude of 0.58 m/s. The case with an oxygen content of 0.1 wt% (see Figure 5d) showed the same characteristics as the case with 0.06 wt%, except for a faster melt flow (0.68 m/s) and a deeper penetration. We can now conclude that the flow dynamics of molten pools in HCAW are mainly determined by the Marangoni effect subjected to oxygen content, and other driving forces such as the shear stress from the plasma jet, the arc pressure, and the Lorentz force can all be neglected in the case of hollow cathode arc welding with weakened electromagnetics. If the oxygen content is increased from 0.01 wt% to 0.1 wt%, there is a transition of the internal flow pattern from an upward-outward melt flow, resulting in a shallow weld pool, to an inward-downward melt flow, resulting in a deep finger-like weld pool. The critical temperature of this transition increases with the oxygen content, indicating that penetration can be promoted by increasing the O2 content in HCAW.

In the previous section, we showed that the surface tension gradient subjected to a constant oxygen content is the main factor in the weld pool dynamics; however, in reality, oxygen transfers from the hollow cathode to the weld pool with a transient, varying content, and both the dynamics of the arc plasma and the melt pool are affected by the composition of O2. We therefore proposed a fully coupled model of the mass transfer of oxygen and the MHD dynamics of the arc plasma and weld pool to investigate the interactions between oxygen, the arc plasma, and the weld pool. We made a comparison by setting a group of oxygen gas flow rates from 10 mL/s to 60 mL/s. The evolution of the concentration of O2 in the arc plasma and molten pool from 0.01 s to 0.1 s with a DC square-pulsed current was studied. Due to the limitation of length, the influence of the oxygen content on the arc plasma was considered but not discussed in this paper.
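To illustrate how a critical temperature of the kind quoted above can arise, the sketch below evaluates the sign of ∂γ/∂T using the Sahoo–McNallan–DebRoy surface tension formulation that is commonly applied to Fe–O melts; this is not the property model used in this paper, and the constants are approximate literature values for the Fe–O system, so the computed transition temperatures are only indicative of the trend.

```python
import numpy as np

# Sahoo-McNallan-DebRoy model for a binary Fe-O melt (illustrative constants only):
#   dγ/dT = -A - R*Γs*ln(1 + K*a) - (K*a / (1 + K*a)) * Γs*ΔH0 / T,  with K = k1*exp(-ΔH0/(R*T))
R = 8.314          # gas constant, J/(mol K)
A = 4.3e-4         # N/(m K), negative temperature coefficient of pure liquid Fe
GAMMA_S = 1.3e-5   # mol/m^2, surface excess at saturation (1.3e-8 kmol/m^2)
K1 = 1.38e-2       # entropy factor of oxygen segregation
DH0 = -1.463e5     # J/mol, standard heat of adsorption of O on liquid Fe

def dgamma_dT(T, a_oxygen):
    """Temperature coefficient of surface tension, N/(m K), for oxygen activity a_oxygen (wt%)."""
    K = K1 * np.exp(-DH0 / (R * T))
    coverage = K * a_oxygen / (1.0 + K * a_oxygen)
    return -A - R * GAMMA_S * np.log(1.0 + K * a_oxygen) - coverage * GAMMA_S * DH0 / T

def critical_temperature(a_oxygen, T_min=1800.0, T_max=3000.0, n=2000):
    """Temperature at which dγ/dT changes sign (positive below, negative above), or None."""
    T = np.linspace(T_min, T_max, n)
    idx = np.where(np.diff(np.sign(dgamma_dT(T, a_oxygen))) != 0)[0]
    return float(T[idx[0]]) if idx.size else None

for a in (0.01, 0.03, 0.06, 0.1):   # oxygen contents compared in this section (wt%)
    print(f"O = {a} wt%  ->  T_critical ≈ {critical_temperature(a)}")
```

With these generic constants, the sign change of ∂γ/∂T moves to higher temperatures as the oxygen activity increases, which reproduces the qualitative trend reported above, even though the exact values differ from those computed with the property data of this work.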
With an oxygen flow rate of 10 mL/s (see Figure 6), an inward flow pattern could be observed, with the surface tension gradient subjected to the oxygen transfer and a constant sulfur content of 0.03. The concentration of O2 experienced a fluctuation with a maximum of 0.22 at point A in the arc column, while the concentration of O2 remained stable at 0.02 wt% at point B in the weld pool; as time passed from 0.01 s to 0.09 s, the penetration of the melt pool increased, with a slight rise in the O2 content in the middle of the melt pool. As the flow rate of O2 increased from 10 mL/s to 60 mL/s, we could see that the concentration of oxygen increased significantly from 0.22 to a maximum of 1 in the arc plasma; in other words, the arc plasma of the mixture gas was completely filled with O2. In the meantime, more O2 was transferred from the arc plasma to the melt pool, with a corresponding increase from 0.02 wt% to 0.08 wt% and a deeper penetration.
We can conclude that the influence of the gas flow rate of O 2 on the dynamics of melt pools could be attributed to the Marangoni effect subjected to the mass transfer process of oxygen; an increase in flow rate of O 2 leads to a rise in the concentration of O 2 both in the arc plasma and melt pool, finally resulting in a deeper penetration of the weld joint, so we could promote penetration by increasing the flow rate of O 2 in HCAW.
Experimental Validation
We could determine that the calculated arc behavior shows good agreement with the measured arc images in Figure 7; they both exhibited periodical expansion and contraction behavior synchronized with the pulsing frequency. Compared to our previous works at low frequency (10 Hz), the oscillating behavior of the molten pool was diminished in the case of 900 Hz.
Figure 8a,b show comparisons between our calculated weld profile (left) and the experimental results (right) with oxygen content of 0.01 wt% and 0.1 wt%; we can see that the calculated weld profile is in excellent agreement with the experimental results, and it could precisely predict the shallow-flat molten pool (0.01 wt%) and the finger-like shape with deep penetration (0.1 wt%), implying that our MHD model with a hollow cathode is effective for HCAW applications.
Figure 1. (a) Geometry with boundary conditions, (b) initial conditions, and (c) mesh setting near the tip of the hollow cathode.
Figure 2. The mechanism and the experimental system of hollow cathode arc welding (HCAW).
Figure 4. The flow dynamics of the arc plasma and weld pool within one and nine thermal cycles in the hollow cathode (point A) and in the weld pool (point B): (a) flow dynamics within one thermal cycle, (b) magnetic flux density over nine thermal cycles, (c) temperature over nine thermal cycles, and (d) velocity over nine thermal cycles.
Figure 6. Evolution of the oxygen content in the mixture arc gas and the weld pool of Q235 low-carbon steel within ninety thermal cycles at peak current.
Figure 7. Comparison of the computed arc behavior with the experimental results.
Table 1. Material properties and welding parameters.
"year": 2024,
"sha1": "e619527afe3a01ffec10f3c4e2e8dbb5b94951cc",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-6412/14/4/382/pdf?version=1711358679",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "cb250e4bf049ee1a46cf6e9c205419ff8dd419c6",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": []
} |
Polymeric Hydrogels as Technology Platform for Drug Delivery Applications
Hydrogels have become key players in the field of drug delivery owing to their great versatility in terms of composition and adjustability to various administration routes, from parenteral (e.g., intravenous) to non-parenteral (e.g., oral, topical) ones. In addition, based on the envisioned application, the design of bioadhesive or mucoadhesive hydrogels with prolonged residence time in the administration site may be beneficial. For example, hydrogels are used as wound dressings and patches for local and systemic therapy. In a similar way, they can be applied in the vaginal tract for local treatment or in the nasal cavity for a similar goal or, conversely, to target the central nervous system by the nose-to-brain pathway. Overall, hydrogels have demonstrated outstanding capabilities to ensure patient compliance, while achieving long-term therapeutic effects. The present work overviews the most relevant and recent applications of hydrogels in drug delivery with special emphasis on mucosal routes.
Introduction
Hydrogels are hydrophilic viscoelastic materials formed by a polymeric network with physical and/or chemical crosslinks that absorb and retain large amounts of water (several times their dry weight) and swell, while maintaining their 3D structure, mechanical strength and elasticity [1–5]. In addition, hydrogels can be easily modified with functional groups and display pores with well-defined sizes that can be controlled by the crosslinking density and that can be sensitive to external stimuli such as pH, temperature and magnetic field [6]. In a hydrogel, polymer chains create a characteristic 3D matrix with interstitial spaces that are capable of harboring aqueous fluids such as the physiological ones [4]. Moreover, the high water content provides an environment for the diffusion of oxygen, nutrients and other small molecules that are critical for cell growth and proliferation [7]. Thus, molecules of different drugs can be incorporated into the hydrogel and diffuse from the interstitial spaces to the biological medium, allowing their use as a reservoir for controlled release applications [1]. Therefore, over the years, hydrogels became key players in the biomedical field in general and pharmaceutical research and development in particular, employing different invasive and minimally invasive administration routes [8]. Peppas et al. [9] have made an enormous contribution to the understanding of the mechanisms involved in the release from different systems, from moderately or poorly swellable [10] to semi-crystalline hydrogels [11].
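As a side note on the release mechanisms mentioned above, the semi-empirical Korsmeyer–Peppas power law, Mt/M∞ = k·t^n, is frequently used to classify drug release from swellable matrices (n ≈ 0.5 indicating Fickian diffusion for thin films and 0.5 < n < 1 indicating anomalous transport). The minimal sketch below fits this law to an invented release profile; the numbers and function names are illustrative only and are not taken from the works cited here.

```python
import numpy as np

def fit_korsmeyer_peppas(t, fraction_released):
    """Fit Mt/Minf = k * t**n on the early part of a release curve (Mt/Minf <= 0.6).

    Returns (k, n). The power law is only considered valid up to ~60% release.
    """
    t = np.asarray(t, dtype=float)
    f = np.asarray(fraction_released, dtype=float)
    mask = (t > 0) & (f > 0) & (f <= 0.6)            # restrict to the valid range
    n, log_k = np.polyfit(np.log(t[mask]), np.log(f[mask]), 1)
    return np.exp(log_k), n

# Illustrative (synthetic) data: time in hours, cumulative fraction of drug released.
t_h = np.array([0.5, 1, 2, 4, 6, 8, 12])
frac = np.array([0.08, 0.12, 0.18, 0.27, 0.33, 0.38, 0.47])
k, n = fit_korsmeyer_peppas(t_h, frac)
print(f"k = {k:.3f} h^-n, n = {n:.2f}")   # n close to 0.5-0.6 here, i.e., diffusion-dominated release
```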
The choice of the material for the synthesis of the hydrogel depends on the application and ranges from fully synthetic polymers such as poly(ethylene glycol) (PEG) and poly(vinyl alcohol) (PVA) to natural ones such as polysaccharides (e.g., hyaluronic acid, alginate, chitosan) and proteins (e.g., gelatin) [12,13]. For example, the use of chitosan hydrogel scaffolds for controlled and localized delivery of drugs that promote wound healing was recently reviewed by Elviri et al. [14]. Hydrogels can be prepared using different methods and precursors that allow the adjustment of properties such as pore size, mechanical strength, degradability and rate of degradation [13,15]. For example, Silva et al. [16] reviewed the manufacture and use of hydrogels based on fibrous proteins (e.g., silk, keratins, elastin, resilins), which are one of the most versatile biomaterials for tissue engineering. These scaffolds can also be used as carriers for the localized delivery of growth factors, enzymes, and drugs (Figure 1; reprinted with permission from reference [16], Copyright 2014 Elsevier).
As explained above, properties of hydrogels such as pore size may be controlled by the production conditions and/or the modification of the polymers or the weight ratio between them. For example, Zhang et al. [17] studied the properties of PEG diacrylate (PEGDA) hydrogels by varying the molecular weight of the precursor (400-2000 g/mol) or the ratio between them (100/0, 40/20, 30/30, 20/40, 0/100). Systems were photo-polymerized under visible light at a total polymer concentration of 60% w/w. Results showed that properties such as swelling, wetting, elastic modulus, transparency and microstructure changed when modifying these synthesis parameters.
An important characteristic of hydrogels is their similarity with the extracellular matrix, thus allowing their use as scaffolds for tissue engineering and regenerative medicine [4]. Slaughter et al. [18] extensively reviewed the application of hydrogels in regenerative medicine.
Overall, hydrogels have emerged as an extremely versatile technology platform for tissue engineering and drug delivery. In this work, we will overview the development and use of polymeric hydrogels for advanced drug delivery systems. Even though the literature is very profuse, we will focus mainly on most recent works that introduce new concepts and methods for their synthesis, characterization and/or administration. In this context, the field of tissue engineering is beyond the scope of the review. At the same time, it is worth stressing that aiming to improve tissue regeneration and repair, some researchers used hydrogel scaffolds for the localized delivery of a variety of active compounds, from drugs to biologicals. Moreover, the use of some innovative hydrogels conceived for tissue engineering could be extended to other biomedical applications, including drug delivery. Thus, some examples at the interface of these fields, namely drug delivery and tissue engineering, will be also briefly discussed.
Polymeric Hydrogels for Drug Delivery Applications
An interpenetrating polymer network is a 3D network composed of two or more networks, where the polymer chains are partially or fully intertwined at a molecular level, but not covalently linked to each other and thus, they cannot be separated unless chemical bonds are broken. In this sense, Naseri et al. [19] designed hydrogels of a double cross-linked interpenetrating polymer network based on cellulose nanocrystals in a matrix of sodium alginate and gelatin by freeze-drying. A homogeneous solution of sodium alginate and gelatin was mixed with a suspension of nanocrystalline cellulose and allowed to dry for three days. This dry blend was put in contact with divalent calcium ions (Ca²⁺) and then with genipin, a natural aglycone derived from the fruit of Gardenia jasminoides; the former crosslinks the alginate, while the latter crosslinks the gelatin. Finally, samples were lyophilized for 24 h to obtain sponges for potential use as a substitute for cartilage. Cellulose nanocrystals have garnered much interest in biomedical applications because they have been shown to improve the dimensional stability and the mechanical strength of hydrogels and to facilitate the release of drugs.
Hydrogels formed from self-assembled biopolymer networks are particularly interesting in better mimicking the microenvironment of native tissues as they show structural, chemical and mechanical similarities to the extracellular matrix and good biological compatibility. Tissue engineering scaffolds play a fundamental role in providing a native environment that mimics tissues for the cells, with the aim of promoting regeneration of the desired tissue [16]. One of the main applications of hydrogels is the treatment of wounds, an application that usually demands the use of biomaterials that promote tissue repair or regeneration and prevent infections. In this context, Straccia et al. [20] developed alginate hydrogels coated with chitosan as wound dressing. The coating was used to confer antibacterial activity and to delay the release of charged hydrophilic drugs into the alginate ( Figure 2). All the coated hydrogels retained their transparency, a desirable feature in wound dressings that allows visual inspection of the lesion bed throughout the healing process. In vitro assays with human mesenchymal stromal cells (MSC) confirmed the absence of acute toxicity of the coating biomaterial ( Figure 3) [20].
The release characteristics were assayed using rhodamine B as the hydrophilic model drug and compared with those of the uncoated counterparts. The presence of the chitosan coating served as a barrier that delayed the diffusion of rhodamine B and, therefore, resulted in a decrease of the release kinetics. Finally, the antibacterial activity of the coated hydrogels against Escherichia coli was demonstrated.

Currently, new biomaterials modified with cyclodextrins (CDs) are being investigated as advanced delivery systems for both hydrophilic and hydrophobic drugs [21]. CDs are natural cyclic oligosaccharides consisting of α-D-glucopyranose units joined by α-1,4-type bonds and provided with a central lipophilic cavity and a hydrophilic outer surface that allow the formation of inclusion or non-inclusion complexes, thus improving the physicochemical properties of drugs, in particular those displaying low aqueous solubility. This ability can be imparted to polymer networks in which the CDs are chemically grafted or cross-linked [22]. The combination of CDs and hydrogels in one single biomaterial leads to synergistic properties: the hydrophilic polymer network improves the biocompatibility and prevents dilution in the physiological environment, increasing the stability of the complexes, while CDs provide a more controlled release of the active cargo [7]. For example, Mennini et al. [23] developed CD-containing ethyleneglycol diglycidyl ether hydrogels for vaginal administration of dehydroepiandrosterone in the treatment of postmenopausal symptoms. The ovules combined the controlled release and mucoadhesion properties of the hydrogel with the dissolution-enhancing capacity of the CD to overcome the low bioavailability of the drug associated with its poor aqueous solubility.
Preformed Hydrogels versus Hydrogels Formed Upon Implantation (In Situ)
In early years, most hydrogels intended for drug delivery were produced prior to administration (known as preformed hydrogels) and, depending on their size (macroscopic or microscopic), they were administered by invasive or minimally invasive procedures. In other words, preformed hydrogels are simple viscous solutions or films, which gel outside the body and do not undergo a modification of their properties (e.g., rheological or mechanical) after administration. More recently, the development of systems that are deployable in the body as liquids or semi-solids and undergo a transition to solidify in situ emerged as a very appealing strategy to ensure patient compliance and comfort [24]. Different mechanisms can be exploited to achieve the transition, from in situ polymerization by photo-initiated reactions to the use of stimuli-responsive polymers that change their properties in response to changes in temperature, pH, and ionic strength under physiological conditions. In addition, hydrogels may respond to externally applied stimuli such as ultrasound, electromagnetic fields or light that may be employed to release the cargo with modified kinetics (e.g., pulsatile release). In this context, Tsitsilianis [25] reviewed stimuli-responsive reversible hydrogels with tunable gel properties, such as injectability and responsiveness, mesh size, mechanical strength and dynamics. These hydrogels provide promising platforms for the encapsulation and delivery of drugs and cells in a variety of biomedical applications [1,26–28].

For example, among the most widely exploited polymers for the synthesis of in situ formed hydrogels used in the delivery of drugs are the poly(ethylene oxide)-b-poly(propylene oxide) (PEO-PPO) block copolymers known as poloxamers or Pluronic® (linear and bifunctional) and poloxamines or Tetronic® (branched and tetrafunctional) [29]. Due to their amphiphilic nature, these copolymers can self-assemble and form spherical polymeric micelles in aqueous solution above the critical micellar concentration (CMC). In addition, above the critical concentration and temperature of gelation, the self-assembled polymeric micelles are tightly bound to produce a physically cross-linked hydrogel structure [30–32]. Figure 4 shows the structure of polymeric micelles made of Pluronic® copolymers and their gelation once the concentration is increased [32].

Thus, these systems exhibit sol-gel transition properties in response to temperature in aqueous solution, usually around the physiological temperature, and, therefore, they can be administered as liquids at room temperature and undergo gelation at body temperature. In addition, the gelation temperature of these formulations decreases with a concentration increase [31]. It is worth stressing that a high copolymer concentration is usually required to form the gel. For example, 25% w/v poloxamers are required to form a rigid gel after ocular instillation [33]. Moreover, regardless of the concentration, the micro-viscosity properties are relatively poor, which causes fast dilution in the biological fluids and release of the payload (Figure 5) [34], and eventually elimination from the administration site, especially in those sites with significant biological fluid flow (e.g., mouth, vagina).

Aiming to reduce the copolymer content and improve the gelation properties, these gel-forming biomaterials were modified [35,36]. For example, Cohn et al. [37] polymerized poloxamer precursors and increased their molecular weight, resulting in a dramatic improvement of the rheological properties and the physical stability in aqueous media. In addition, it was found that a reduction in the concentration of the gel-forming copolymer could be achieved by the incorporation of a suitable viscosity-enhancing polymer [38].
Preformed Hydrogels
As mentioned above, preformed hydrogels constitute the first generation and have been extensively explored in biomedical applications in general, and drug release in particular. This review does not attempt to be a comprehensive compendium of all the literature on the subject and, therefore, some recent works of special interest are briefly described. Huang et al. [39] obtained a hybrid hydrogel membrane composed of carboxymethylcellulose (CMC) enriched with berberine (an anti-inflammatory and antimicrobial agent) and hyaluronic acid to be used as an anti-adhesive barrier and drug delivery system with excellent anti-inflammatory and tissue regeneration properties to prevent post-surgical adhesions. PVA, the surfactant Tween® 80 and a natural oil were added as membrane-forming agent, emulsifier and lubricant, respectively. The homogenous mixture was placed in a mold and subjected to heating at 37 °C for 3 h to form a dry membrane. A hydrogel of CMC (loaded with berberine) and hyaluronic acid in a ratio of 30:70 presented optimal plasticity and inhibited 60% of lipopolysaccharide-stimulated inflammation in RAW264.7 macrophages in vitro.
Preformed hydrogels can also be injected as solids [40]. In this case, the solid system can be "thinned" by applying a sufficiently high shear stress and thus, reduce its viscosity. Once at the implantation site and the stress being eliminated, the material immediately recovers the original mechanical properties of the solid. Thus, these drug delivery systems become a more benign treatment when compared to conventional surgery that involves tissue damage and later suture. Due to this property, shear thinning and thixotropic systems are often included in the category of injectable biomaterials [40]. Recently, Riber et al. [41] developed a new technology as a drug delivery platform consisting of a network of interpenetrating silicone hydrogel polymers impregnated with a broad-spectrum antimicrobial compound such as irgasan, to eliminate plasmids. The spread of antimicrobial resistance, generally mediated by the horizontal transfer of plasmids, limits treatment options for bacterial infections. Exposure of bacteria (e.g., Escherichia coli K-12) to this interpenetrating network for 24 h resulted in a significant loss of plasmid (2.8-4.7%) (p < 0.05). To further increase the plasmid loss, the concentration of impregnated irgasan should be increased, or conversely, a constant exposure ensured over time. Both options would likely result in severely damaged cell growth conditions and the formation of an adaptive response leading to the development of drug resistance, suggesting the limitations of this compound in future therapeutic and medical applications. However, as a drug delivery device, the hydrogel portion of the matrix network extends the range of drugs that can be transported, stored and subsequently released, including both hydrophilic molecules such as silver ion complexes, sulfonamides and antimicrobial peptides as hydrophobic compounds.
Hydrogel-based contact lenses can also be designed using suitable polymers. For example, Tummala et al. [42] developed a highly transparent hydrogel with high water content (>90%) by combining PVA with nanocellulose (NC) to obtain model contact lenses without compromising their transparency. NC was used to strengthen the PVA hydrogel and facilitate the manipulation of the lenses. Hydrogels showed a refractive index close to that of pure water, high transparency to visible light and very good blocking properties to ultraviolet radiation. In addition, they were rigid and flexible enough to adapt well to a convex surface such as the eye and exhibit high elasticity without being ruptured under stretching. The high water content can favor optical transparency, give good comfort of use, high oxygen permeability, low protein deposition and high biocompatibility. The hygroscopic nature of NC could also contribute to the retention of water over time, a fundamental property for a medical device that is exposed to air. With the introduction of additional molecular features (e.g., modification with CDs), this work appears as a platform of value for the development of medicated contact lenses [43].
Hydrogels Formed In Situ
As discussed above, in situ generated hydrogels are liquid at the administration time and display a transition due to structural changes induced by external stimuli or by the biological environment [44]. This approach is very advantageous, for example, in ocular delivery, as these vehicles can be easily applied as a liquid that ensures complete coverage of the eye and is stabilized in situ due to gelation, in contrast to pre-gelled formulations that show lower adjustability [1]. In addition, in a minimally invasive manner, a liquid can be injected for in situ formation of a matrix in the development of injectable intraocular lenses [17]. Following this conceptual approach, Anumolu et al. [1] developed fast crosslinking thiolated PEG hydrogels that are stabilized at physiological pH via S-S bonds. Then, these stable viscoelastic hydrogels were loaded with pilocarpine for controlled ocular delivery and subsequent pupillary constriction. In vivo studies in rabbits showed that the hydrogels ensured pupillary constriction for 24 h after administration, eight times longer than an aqueous solution of the same drug (Figure 6). The same research group developed in situ-forming biodegradable hydrogels based on PEG loaded with the bacteriostatic drug doxycycline for the healing of vesicant-induced ocular wounds in rabbit [45]. The hydrogels withstood shear forces and released the cargo (0.25% w/w) following a bimodal profile with 100% release over one week. The permeation of doxycycline through vesicant-wounded corneas was between 2.5- and 3.4-fold higher than in unwounded corneas. Moreover, histology and immunofluorescence studies confirmed a significant decrease of the collagenase matrix metalloproteinase-9 and an improvement of the healing process in vesicant-exposed corneas compared to the free drug delivered in phosphate buffered saline of pH 7.4 (Figure 7).

Figure 7. Hematoxylin and eosin staining to visualize the histology of CEES- and NM-exposed corneas treated for 24 h with doxycycline in solution or in a hydrogel. The damaged area is where the epithelium meets the stroma. The wound-healing efficacy of the doxycycline solution was close to that of the doxycycline hydrogel for CEES-exposed corneas, as the extent of damage was comparatively mild. However, a superior wound-healing efficacy was observed with hydrogels over solutions when harshly damaged NM-exposed corneas were treated with doxycycline. CEES: half mustard; NM: mustard; DOXY: doxycycline. (Reprinted with permission from reference [45]. Copyright 2010 Elsevier).

The regeneration of injured peripheral nerves represents a great challenge for the clinic and it requires the use of scaffolds [46]. Biocompatible hydrogels loaded with drugs could play a fundamental role in this sense because they could be used to accelerate the regenerative process. Guo et al. [47] developed a chitosan conduit containing simvastatin (25 and 50 mg/mL) in a Pluronic® F127 hydrogel (25% w/v) to bridge defects in the sciatic nerve of rats.
The poloxamer solution is injected into the chitosan conduits and undergoes a phase transition at 37 °C. Preclinical studies in rats showed an improvement in peripheral nerve regeneration with recovery of the functionality. It is important to highlight that injured nerves treated with the drug-loaded hydrogel and the conduit were thicker than controls treated with the hollow duct or with the hydrogel-filled duct without simvastatin, 10 weeks post-surgery. This performance might rely on the increase of the expression of several endogenous neurotrophic factors that has been ascribed to simvastatin.
Regardless of the appeal shown by these systems, it is noteworthy that even though in situ-forming hydrogels can more easily fill well-defined defects and cavities, their free flowing properties may lead to undesired leakage of these precursor solutions into the surrounding tissue or into the bloodstream, unless the defect has well-defined boundaries and the precursor is confined in it until its solidification [40].
Smart Hydrogels
The various therapeutic challenges and the improvement of treatment standards led researchers in both academia and industry to devote efforts towards the design and development of "smart" biomaterials in general and hydrogels in particular. The rationale behind them is that they can interplay with the biological environment in a pre-programmed way and thus, display changes in some of their properties (e.g., viscosity) [4] in response to pH, temperature, electric and magnetic fields, among others [44,48,49].
Gupta et al. [26] used the triblock copolymer poly[(propylene sulfide)-(N,N-dimethylacrylamide)-(N-isopropylacrylamide)] to form a temperature-sensitive hydrogel endowed with a drug degradation and release mechanism that responds to reactive oxygen species (ROS) in vitro (Figure 8). When the hydrogel is exposed to ROS, the hydrophobic poly(propylene sulfide) block is slowly oxidized to the more hydrophilic poly(propylene sulfoxide) and finally to poly(propylene sulfone), which triggers the sustained release of drugs "on demand" and the subsequent degradation of the hydrogel. In addition, these hydrogels can be used for the encapsulation and release of cells because they possess inherent cytoprotective properties and reduce ROS-mediated cell death in vitro [26]. The synthesis of vehicles that are sensitive to intracellular stimuli is also gaining popularity. Such transporters can be created by incorporating various precursors into the hydrogel [15]. Recently, heparin-bound Pluronic® F127 thermo-sensitive hydrogels that load and release acidic fibroblast growth factor (aFGF) were used to treat spinal cord injuries (SCIs) [50]. Heparin was added to increase the loading of aFGF, stabilize its structure, protect its activity and control its release. Preclinical studies in rats with SCI showed that animals administered the aFGF-loaded heparin-based hydrogel had attenuated disruption of the blood-spinal cord barrier (BSCB), reduced neuronal apoptosis, reduced reactive astrogliosis and increased neuronal and axonal rehabilitation with respect to controls receiving no growth factor or the free growth factor administered intravenously (Figure 9) [50].

Figure 9. Schematic of how aFGF-heparin (HP) thermo-sensitive hydrogels enhance recovery from spinal cord injury (SCI). aFGF-HP provides blood-spinal cord barrier (BSCB) protection, neuroprotection, remyelination, attenuation of astrogliosis and axon elongation at three different stages after SCI, addressing the main obstacles to recovery from SCI. (Reprinted with permission from reference [50]. Copyright 2017 American Chemical Society).
Nanogels and Microgels
The development of nanometer-sized hydrogels (nanogels) has attracted the attention of researchers because of their great biomedical potential in drug administration and diagnostics [51]. In addition, this nanotechnology platform can be combined with others to synthesize more complex systems. For example, Mekkawy et al. [52] synthesized silver nanoparticles in the 13-19 nm size range and subsequently coated them with PEG 6000, sodium dodecyl sulfate and β-CD (Figure 10). They then studied the antibacterial activity against Gram-positive (e.g., Staphylococcus aureus) and Gram-negative (e.g., Escherichia coli) bacteria. The minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) values were in the ranges of 0.93-7.5 and 3.75-15 µg/mL, respectively, lower than those reported in the literature. Hydrogels loaded with the coated silver nanoparticles were then prepared by dispersing them in gelling agents such as sodium CMC, sodium alginate, HPMC, Pluronic® F127 and chitosan. In vivo results indicated higher antibacterial activity and wound-healing capacity of CMC hydrogels loaded with PEG-coated silver nanoparticles with respect to a commercially available silver sulfadiazine cream (Figure 11) [52].

Figure 11. Successive images of representative mouse skin abrasion wounds infected with MRSA at different time intervals. Two groups were treated with 0.1% silver nanoparticle (AgNPs) hydrogel and 1% silver sulfadiazine cream; the two other groups were the blank hydrogel-treated group and untreated control mice. Abbreviations: MRSA, methicillin-resistant Staphylococcus aureus; AgNPs, silver nanoparticles. (Reprinted from reference [52]).

Another strategy to capitalize on the advantageous features of nanotechnology and of macro-scale drug delivery systems is the development of hydrogels formed by self-assembled nanoparticles. In this case, the drug-loaded nanoparticles self-aggregate, giving rise to a hierarchical macroscopic hydrogel. After administration, the nanoparticles dissociate and diffuse freely in the physiological milieu, maintaining the favorable characteristics of the nanometric size, such as a large surface area and easy access to the intracellular space by means of endocytosis. For example, Huang et al. [53] developed nanoparticles made of a poly(ε-caprolactone-co-1,4,8-trioxa[4.6]spiro-9-undecanone)-PEG-poly(ε-caprolactone) copolymer loaded with the anti-cancer drug doxorubicin as a model drug for peritumoral chemotherapy. In vivo results showed that a single peritumoral injection was more effective in a murine model than multiple intravenous administrations of the free drug or of drug-free nanoparticles.
This indicated that hydrogels could sustain the release and thus, reduce the administration frequency and minimize systemic toxicity [53].
The combination of microtechnology with hydrogels is also an interesting field for obtaining improved drug delivery systems. Combination strategies may be based on spherical or fiber-like hydrogel microparticles, or on microparticles incorporated within macroscopic hydrogels using different techniques. For example, Ahmad et al. [54] synthesized hydrogels of bacterial cellulose graft copolymers with poly(acrylic acid) by electron beam irradiation. The resulting hydrogel sheets were oven dried at 60 °C until constant weight. Then, microparticles were prepared from the hydrogel by grinding and milling the purified sheets. Subsequently, the microparticles were loaded with insulin solution (0.5 mg/mL), filtered and lyophilized. Results showed that the microparticles increased the paracellular transport of insulin across the Caco-2/HT29-MTX monolayer model of the intestinal epithelium in vitro by up to 5.9 times compared to an insulin solution. In addition, insulin-loaded microparticles showed a greater hypoglycemic effect and an increase in relative oral bioavailability of up to 7.45-fold with respect to an insulin solution after oral administration in diabetic rats [54].
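To make such transport and bioavailability figures easier to interpret, the sketch below illustrates how fold changes of this kind are typically computed, using the standard apparent permeability (Papp) expression for monolayer transport and dose-normalized AUC ratios for relative oral bioavailability. All numerical values (fluxes, membrane area, AUCs, doses) are hypothetical placeholders chosen only to reproduce fold changes of the same order as those reported; they are not data from the cited study.

```python
# Minimal sketch: apparent permeability and relative oral bioavailability.
# All numbers below are illustrative placeholders, not data from the cited study.

def apparent_permeability(dq_dt_ug_per_s, area_cm2, c0_ug_per_ml):
    """Papp (cm/s) = (dQ/dt) / (A * C0); C0 in ug/mL is equivalent to ug/cm^3."""
    return dq_dt_ug_per_s / (area_cm2 * c0_ug_per_ml)

def relative_bioavailability(auc_test, dose_test, auc_ref, dose_ref):
    """F_rel = (AUC_test / Dose_test) / (AUC_ref / Dose_ref)."""
    return (auc_test / dose_test) / (auc_ref / dose_ref)

# Hypothetical monolayer transport of insulin with and without microparticles.
papp_solution = apparent_permeability(dq_dt_ug_per_s=2.0e-4, area_cm2=1.12, c0_ug_per_ml=500)
papp_particles = apparent_permeability(dq_dt_ug_per_s=1.18e-3, area_cm2=1.12, c0_ug_per_ml=500)
print(f"Fold increase in paracellular transport: {papp_particles / papp_solution:.1f}x")

# Hypothetical dose-normalized plasma AUCs after oral administration (equal doses).
f_rel = relative_bioavailability(auc_test=149.0, dose_test=50.0, auc_ref=20.0, dose_ref=50.0)
print(f"Relative oral bioavailability: {f_rel:.2f}-fold")
```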
Administration Routes of Hydrogels
Hydrogels have found application in different minimally invasive and invasive administration routes. In the next sections, we will exemplify the great versatility of this technological platform in pharmaceutical development.
Minimally-Invasive Administration Routes
Buccal administration is becoming a very popular administration route owing to numerous advantages, including entry of the drug into the systemic circulation and the possibility of self-administration. However, it requires appropriate design of the delivery system to maintain its position in the mouth, avoid involuntary ingestion and minimize continuous dilution of the drug by salivary flow. In this sense, hydrogels may be a suitable drug delivery system for this route [55]. For example, Choi et al. [31] developed a thermo-responsive mucoadhesive hydrogel for the administration of the anti-cancer drug paclitaxel to the oral mucosa, which provides a high local concentration of the drug and is intended to decrease the systemic side effects normally associated with intravenous administration. The formulation consisted of a dimethyl-β-CD/paclitaxel inclusion complex, which improves drug solubility, within a physical Pluronic® F127 hydrogel with mucoadhesive and sustained-release properties. The cytotoxicity of the gels loaded with the inclusion complex was evaluated by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay using human oral cancer cells (KB cells, a subline of the ubiquitous keratin-forming tumor cell line HeLa). Cell viability remained above 80% for drug-free hydrogels, suggesting that they are not cytotoxic to KB cells, whereas the viability with drug-loaded hydrogels was considerably lower owing to the intact cytotoxicity of paclitaxel. Since the formulation undergoes gelation at 37 °C, it emerges as a promising approach to treat several oral cancers with reduced systemic toxicity [31].
The oral route is very advantageous over injectable administration due to better patient comfort and compliance, and it provides a large absorption surface [56]. In this context, Dafe et al. [57] developed a hydrogel made of food-grade starch/pectin for the encapsulation and colonic release of the probiotic bacterium Lactobacillus plantarum, which was encapsulated by the extrusion method. Results demonstrated that the encapsulated cells were more resistant to the adverse conditions of the simulated gastrointestinal tract and to a bile salt solution than free cells. Then, at the more neutral pH conditions that mimic colonic fluids, the cells were released and proliferated (Figure 12).

The oral administration of peptides and proteins has become a great challenge in pharmaceutical research [58,59]. In this context, O'Connor et al. [60] developed and characterized systems made of methacrylic acid (MAA), N-vinyl pyrrolidone (NVP) and PEG monomethyl ether monomethacrylate (PEGMMA) for the oral delivery of two proteins of different molecular weight, namely insulin and porcine growth hormone. These terpolymer hydrogels are sensitive to pH. At pH values below 4.8 (e.g., in the stomach), the carboxylic acid groups of MAA are protonated, maintaining the hydrogel in a collapsed state capable of protecting the encapsulated proteins and preventing their release. As the pH rises above 4.8, as in the small intestine, the carboxylic acid groups are deprotonated, allowing the system to swell and release the therapeutic payload. In addition, these hydrogels exhibit promising mucoadhesive properties for interaction with the lining of the upper small intestine. Finally, the compatibility of the hydrogels with two model intestinal cell lines (colon adenocarcinoma Caco-2 cells and mature differentiated goblet HT29-MTX cells) was investigated, with cytotoxicity appearing only at concentrations of at least 2.5 mg/mL [60]. More recently, Treenate and Monvisade [61] developed a pH-sensitive hydrogel of hydroxyethylacryl chitosan and sodium alginate cross-linked with different divalent cationic cross-linkers such as Ca2+, Zn2+ and Cu2+, using paracetamol as a water-soluble model drug. Findings indicated that this combination delayed the degradation time of the hydrogel and that the amount of paracetamol released in simulated gastric fluid was relatively low (<20%) (Figure 13). Thus, this drug delivery system could be used for site-specific release, for example in the small intestine or colon.
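The pH-dependent behavior described above for the MAA-based terpolymer follows from the ionization of its carboxylic acid groups. As a rough illustration, the sketch below applies the Henderson-Hasselbalch relation with an assumed effective pKa of 4.8, the threshold quoted in the text; a real network exhibits a broader, composition-dependent swelling transition, so this is only a first approximation.

```python
# Minimal sketch of the pH response of a carboxylic acid-bearing hydrogel,
# assuming an effective pKa of 4.8 (the threshold quoted in the text).

def ionized_fraction(ph, pka=4.8):
    """Fraction of carboxylic acid groups that are deprotonated (ionized) at a given pH."""
    return 1.0 / (1.0 + 10 ** (pka - ph))

for label, ph in [("stomach (pH 1.5)", 1.5), ("duodenum (pH 5.5)", 5.5), ("jejunum (pH 6.8)", 6.8)]:
    f = ionized_fraction(ph)
    state = "collapsed, cargo protected" if f < 0.5 else "swollen, cargo released"
    print(f"{label}: {f:.1%} ionized -> {state}")
```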
Currently, the development of new strategies to increase the residence time of active substances is a field of constant research in ophthalmic delivery, because conventional formulations such as eye drops are easily removed by nasolacrimal drainage and >90% of the drug is lost. Thus, the use of semi-solid formulations that display a longer residence time on the corneal surface could be an effective strategy [62]. For example, Huang et al. [63] developed an ophthalmic drug delivery system based on an in situ gelling hydrogel of Pluronic® F127 (22% w/v) and Pluronic® F68 (3.5% w/v) loaded with betaxolol hydrochloride, a beta blocker used in the treatment of glaucoma. Pharmacokinetic and pharmacodynamic assays in rabbits indicated that the formulation improves bioavailability and significantly lowers intraocular pressure with respect to the free drug. Recently, Fabiano et al. [64] developed a thermo-sensitive hydrogel containing mucoadhesive chitosan nanoparticles loaded with 5-fluorouracil for transcorneal administration in the treatment of ocular cancer. The hydrogel can be instilled as drops, allowing easy administration and precise dosing. After instillation, the solution gels rapidly at the temperature of the eye (35 °C). In addition, the nanoparticles promoted transcorneal penetration of 5-fluorouracil, as shown by aqueous humor analysis in rabbits. Moreover, the thermo-sensitive hydrogel increased the area under the curve (AUC) between 0 and 8 h 3.5-fold compared to clear eye drops. The drug concentration from the hydrogel with nanoparticles reaches a plateau (0.25-0.3 µg/mL) over the 0.5 to 7 h interval, owing to the ability of the hydrogel to control the release of the drug following zero-order absorption kinetics [64].
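The AUC(0-8 h) comparison reported for the 5-fluorouracil system is the standard non-compartmental metric derived from aqueous humor concentration-time data. The sketch below shows the linear trapezoidal calculation; the sampling times and concentrations are hypothetical (loosely shaped to mimic the plateau and washout behaviors described), not values from the cited work.

```python
# Minimal sketch: linear trapezoidal AUC from concentration-time data.
# Times and concentrations below are hypothetical placeholders.

def auc_trapezoidal(times_h, conc_ug_ml):
    """Linear trapezoidal AUC over the sampled interval (ug*h/mL)."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for (t1, c1), (t2, c2) in zip(zip(times_h, conc_ug_ml),
                                             zip(times_h[1:], conc_ug_ml[1:])))

times = [0, 0.5, 1, 2, 4, 6, 7, 8]                               # h
hydrogel_np = [0, 0.25, 0.28, 0.30, 0.29, 0.27, 0.25, 0.15]       # plateau-like, sustained
eye_drops   = [0, 0.30, 0.20, 0.10, 0.04, 0.02, 0.01, 0.01]       # rapid washout

auc_gel = auc_trapezoidal(times, hydrogel_np)
auc_drop = auc_trapezoidal(times, eye_drops)
print(f"AUC(0-8 h) hydrogel + nanoparticles: {auc_gel:.2f} ug*h/mL")
print(f"AUC(0-8 h) eye drops:                {auc_drop:.2f} ug*h/mL")
print(f"Fold increase: {auc_gel / auc_drop:.1f}x")
```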
The nasal mucosa has become an appealing administration route not only for local treatments (e.g., allergies) but also for systemic delivery, because it allows self-administration, it does not require sterility and it is virtually painless. From a pharmacokinetic point of view, absorption is rapid due to the rich vascularization and highly permeable structure of the nasal membranes, and this route also bypasses hepatic first-pass metabolism [65]. An additional advantage is that, depending on the administration site within the nose, drugs can follow the nose-to-brain pathway, a transport route that is favored for nanoparticles and that circumvents the blood-brain barrier, which prevents the arrival of many drugs from the systemic circulation [66]. In this framework, Khan et al. [67] developed in situ gelling formulations using chitosan and hydroxypropyl methylcellulose (HPMC) for the intranasal administration of ropinirole, a drug used in the treatment of Parkinson's disease, with the objective of its direct delivery to the brain. In vivo bioavailability and brain uptake assays were performed in albino rats following administration of the 99mTc-labeled ropinirole intranasal gel, an intranasal ropinirole solution and an intravenous ropinirole solution. The absolute bioavailability of the drug from the in situ gel formulation was 82%. The brain AUC value between 0 and 480 min after administration of the in situ gel was 8.5 times greater than that obtained intravenously and 3 times higher than that obtained with the intranasal drug solution. This confirmed that nose-to-brain transport of the in situ gel formulation of ropinirole could represent a breakthrough in the treatment of Parkinson's disease [67]. However, it is important to note that the site of administration within the nasal cavity should be appropriate to favor passage into the central nervous system and reduce passage into the systemic circulation. On the other hand, Mahajan and Gattani [68] obtained in situ gels of metoclopramide hydrochloride, a potent antiemetic, for intranasal administration. The gels were prepared using different ratios of gellan gum, carbopol 934P and mannitol. In vitro drug permeation studies across sheep nasal mucosa showed that permeation could be increased using a formulation with 0.15% w/v carbopol or more. In addition, histological examination confirmed no mucosal damage during the in vitro permeability studies. Finally, a bioavailability assay of the drug in rabbits showed a significant increase from 40.7% for the drug in solution to 54.6% for the gel [68].
The skin is an attractive and accessible route for the systemic administration of drugs from matrices such as hydrogels, but the barrier of the stratum corneum must be overcome to ensure efficient absorption. For example, Arafa and Ayoub [69] designed a hydrogel of HPMC and carbopol 934 containing pregabalin-loaded niosomes made of cholesterol and Span® 60, intended to prevent or diminish the adverse effects produced by the drug after oral administration. Pregabalin is used in fibromyalgia syndrome and in acute pain such as sciatic neuralgia. However, it produces many adverse effects such as dizziness, drowsiness, dry mouth, blurred vision, difficulty concentrating, hypersensitivity and decreased platelet count. The parameters of the hydrogel were optimized using an experimental design. In vitro release assays evidenced a controlled-release pattern. In addition, permeation studies in excised rat skin using Franz diffusion cells showed that drug permeation from the niosome-loaded hydrogel was significantly greater than from conventional pregabalin hydrogels [69].
The vaginal route is an excellent route for both systemic and local applications, since gastrointestinal fluids and first-pass liver metabolism are bypassed. In addition, dense vascularization and high mucosal permeability make this pathway widely used for the administration of many drugs such as hormones, peptides, antimicrobials and antifungals, among others [70]. In this framework, Malli et al. [71] designed a thermo-sensitive and mucoadhesive hydrogel of Pluronic® F127 and chitosan containing metronidazole for topical application to the vaginal mucosa to treat infections by the protozoan Trichomonas vaginalis. This topical system could reduce the adverse effects produced by the drug in standard oral therapy, because it limits absorption through the vaginal mucosa while preserving the antiparasitic activity. In addition, the mucoadhesive properties of the hydrogel control the release over time, ensuring a more prolonged activity, and in situ gelation facilitates self-administration of a liquid that achieves better coverage of the vaginal mucosa and becomes semisolid at body temperature. The cumulative permeation of the drug through ex vivo pig vaginal mucosa mounted in a Franz diffusion cell was reduced with respect to the drug in solution, which is expected to result in fewer systemic adverse effects. Finally, the activity of metronidazole formulated in the hydrogel was preserved, since the parasite viability curve evaluated in vitro was similar to that of the free drug [71]. Vaginal microbicidal hydrogels could also act as a barrier against viral infections and prevent the spread of virus through the vaginal mucosa. In this sense, Bouchemal et al. [72] developed thermo-sensitive hydrogels of Pluronic® F127 (20% w/w) and HPMC (1% w/w) containing carboxyl-modified polystyrene particles that mimic the size and surface charge of human immunodeficiency virus type 1 (HIV-1). Analysis of the trajectories of the fluorescently labeled particles showed that their mobility was more restricted in these hydrogels than in cervico-vaginal mucus and in traditional hydroxyethyl cellulose hydrogels used as negative controls. In addition, the incorporation of the CD4-mimetic miniprotein M48U1, used as an anti-HIV-1 molecule, into the polymer mixture did not affect its anti-HIV-1 activity as compared to the conventional hydrogel. Therefore, this type of formulation could be used as a topical microbicide containing an anti-HIV compound, as it would act simultaneously as a physical and pharmacological barrier [72].
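The mobility comparison in the Bouchemal study relies on tracking fluorescently labeled particles; such trajectories are commonly summarized by their mean squared displacement (MSD). The sketch below illustrates a time-averaged MSD calculation on synthetic two-dimensional random walks, where a smaller step size stands in for a particle hindered by the hydrogel mesh; the actual tracking and analysis parameters used by the authors may differ.

```python
import random

def msd(trajectory, max_lag):
    """Time-averaged mean squared displacement of a 2D trajectory [(x, y), ...]."""
    out = []
    for lag in range(1, max_lag + 1):
        disps = [(trajectory[i + lag][0] - trajectory[i][0]) ** 2 +
                 (trajectory[i + lag][1] - trajectory[i][1]) ** 2
                 for i in range(len(trajectory) - lag)]
        out.append(sum(disps) / len(disps))
    return out

def random_walk(n_steps, step_um):
    """Synthetic 2D random walk as a stand-in for a tracked particle."""
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        x += random.gauss(0, step_um)
        y += random.gauss(0, step_um)
        path.append((x, y))
    return path

random.seed(0)
free_particle = random_walk(500, step_um=0.20)      # e.g., in a control HEC gel
hindered_particle = random_walk(500, step_um=0.05)  # e.g., in the Pluronic/HPMC gel

msd_free = msd(free_particle, max_lag=10)
msd_hindered = msd(hindered_particle, max_lag=10)
print(f"MSD at lag 10 frames - control: {msd_free[-1]:.3f} um^2, "
      f"hindered: {msd_hindered[-1]:.3f} um^2")
```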
Another administration route gaining interest is the rectal one, because it can reduce the adverse effects of orally administered drugs, improve patient compliance with respect to injections and be used in special situations such as unconscious patients, patients with swallowing difficulties or chronic therapies where other routes are not feasible. It can also be used for local treatments. For example, Cole et al. [73] developed a 4000 g/mol PEG-based hydrogel in the form of morphine-containing suppositories for sustained release of the drug following rectal administration. Pharmacokinetic results in five healthy volunteers showed that the drug was released at a constant rate for at least 12 h. This could improve the pharmacotherapy of patients with chronic pain.
Invasive Administration Routes
Injectable drug administration involves the intravenous, intramuscular, intradermal and subcutaneous routes, among others. It may also include intra- or peritumoral administration, when the drug is delivered directly into the tumor or at a site close to it. The drug carriers can be aqueous or oily according to the route selected. Hereinafter, we give some examples of drug delivery via injection of hydrogels.
Chen et al. [30] developed an injectable thermo-responsive hydrogel with mechanical stability and biocompatibility as a controlled drug delivery vehicle for cancer therapy. The matrix used was Pluronic® F127 incorporating hexamethylene diisocyanate links to increase the mechanical stability. The hydrogel structure was maintained for 30 days, ensuring sustained release of the incorporated drug over an extended period. This copolymer was then combined with hyaluronic acid, a natural biocompatible polymer, to obtain the final nanocomposite hydrogel system, which self-assembles spontaneously into micellar structures of 100 to 200 nm. In vitro studies evidenced that the incorporated drug, doxorubicin, was released for more than 28 days. Using this strategy, the viability of tumor cells (human breast cancer MCF-7 cells) and tumor size in mice were significantly reduced with incubation time [30]. Seib et al. [74] developed self-assembling silk protein hydrogels, also loaded with doxorubicin, for the treatment of focal breast cancer and compared this system with the drug administered intravenously. These hydrogels released the drug for 4 weeks in amounts that could be precisely adjusted by varying the silk content in the matrix, and their use improved both the safety and the efficacy of the drug. First, the cytotoxicity of doxorubicin-loaded and doxorubicin-free hydrogels was evaluated using MDA-MB-231 and MCF-7 (human) breast cancer cell lines and 3-day exposures. The viability of MCF-7 cells was reduced to 10% after exposure to the free drug and to 20% for the doxorubicin-loaded silk hydrogels. For MDA-MB-231 cells, viability was reduced to 23% for all treatment groups. In addition, a long-term test was performed over 12 days. Control cultures of MDA-MB-231 and MCF-7 showed exponential cell growth at days 2 to 6 and lower growth over the remaining days. Cultures treated with free doxorubicin showed no or minimal growth during the first 6 days but exhibited significant growth over the rest of the study, whereas doxorubicin-loaded hydrogels inhibited cell growth throughout. Regarding efficacy, doxorubicin-loaded hydrogels administered by bilateral injections near the tumor site showed better antitumor activity than the equivalent amount of drug administered intravenously. In addition to reducing the primary tumor in mice bearing human breast cancer cell xenografts, the hydrogels reduced metastasis and were well tolerated. Thus, two of the five animals treated with doxorubicin-loaded silk hydrogels presented complete tumor regression, as determined by bioluminescence and necropsy. Therefore, this system could be suitable for local chemotherapy administration and is promising in breast cancer therapy to improve loco-regional control of the disease [74]. More recently, Wu et al. [75] developed a thermo-sensitive hydrogel of cross-linked nanoparticles of poly(vinyl caprolactam)-poly(vinyl acetate)-PEG (known as Soluplus®) loaded with tacrolimus (FK-506) for local therapy of rheumatoid arthritis. The therapeutic efficacy in rats with adjuvant-induced arthritis increased significantly from day 10 to 17 after a single dose of FK-506 loaded in 10% and 20% Soluplus® hydrogels with respect to drug-free hydrogels, owing to sustained drug release and prolonged retention at the injection site.
Conclusions and Future Directions
The physical and mechanical properties of hydrogels, together with their biocompatibility and biodegradability, have made these materials attractive for a wide range of technological applications. Adjusting the preparation conditions by varying the different parameters allows the design of suitable systems for potential applications in the fields of drug delivery, tissue engineering and 3D cell culture, and hydrogels have found their space in the market. The use of "smart" polymers capable of responding to various stimuli, such as changes in pH, temperature, ionic strength or enzymatic degradation, allows the administration of polymeric liquids that undergo gelation under physiological conditions, leading to the formation of hydrogels in situ. Preformed hydrogels, in turn, are useful in various applications such as wound scaffolds, combining transparency that allows inspection of the wound with the ability to release antimicrobial or anti-inflammatory drugs and growth factors from their structure, thereby aiding tissue regeneration. They also allow administration by injection of solid hydrogels capable of "thinning" during administration and subsequently recovering the mechanical and morphological properties of the original solid. In addition, hydrogels can be functionalized with a radiopaque agent that provides X-ray opacity and allows them to be used as biomedical implants for in vivo visualization and for evaluation of the ability of the hydrogel to prevent postoperative adhesions [76]. As for the administration routes, hydrogels are very versatile, allowing oral, injectable, dermal, vaginal, ophthalmic and nasal delivery. In summary, hydrogels represent one of the most versatile technological platforms for pharmaceutical innovation. On the other hand, successful bench-to-bedside translation remains elusive and has not kept pace with the vast scientific literature, probably due to the challenges faced in establishing standardized, scalable and economically viable production processes. This appears to be a critical stage of development that is neglected in most scientific works. Only when it is addressed will the true potential of hydrogels impact the quality of the treatments available for the benefit of patients.
| 2017-12-15T16:33:00.877Z | 2017-07-03T00:00:00.000 | {
"year": 2017,
"sha1": "f4959bc4fa4482b7ffe0002737678a70ac0e8c73",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2310-2861/3/3/25/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f4959bc4fa4482b7ffe0002737678a70ac0e8c73",
"s2fieldsofstudy": [
"Medicine",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
20209434 | pes2o/s2orc | v3-fos-license | The effect of agomelatine and melatonin on sleep-related eating: a case report
Background Sleep-related eating may occur in the context of mental illness, sleep disorders, or psychopharmacological treatment. Frequently, sleep-related eating leads to severe weight gain and, so far, there are no established treatment options for the condition. Case presentation We report the case of a 54-year-old white woman with depression, panic disorder, and sleep apnea under treatment with various antidepressants who developed severe sleep-related eating. Her sleep-related eating completely vanished after the addition of agomelatine, reoccurred after cessation of agomelatine, and vanished again after re-exposure to another melatonergic drug, extended-release melatonin. Conclusions This case suggests that melatonergic drugs lead to relief from sleep-related eating, even when the condition occurs in the context of physical and mental disorders as well as psychopharmacological treatment.
Background
In humans, eating and sleeping are organized in a circadian manner. In the case of sleep-related eating (SRE), this circadian regulation is disturbed and uncontrollable eating occurs during nighttime sleep. Episodes of SRE mainly take place upon the transition from sleep to wakefulness. Usually, there is reduced consciousness, and in the morning there might be no recall of the nighttime eating episodes [1]. The disorder mainly occurs in non-rapid eye movement (NREM) sleep [2] and SRE is considered a parasomnia, frequently associated with sleepwalking [3]. Patients often have undesirable weight gain due to a preference for high-calorie food during SRE and, as a result, suffer health consequences.
The American Academy of Sleep Medicine operationalized SRE disorder (SRED) as:
1. Recurrent episodes of dysfunctional eating that occur after an arousal during the main sleep period.
2. The presence of (a) the consumption of peculiar forms or combinations of food or of inedible or toxic substances, or (b) sleep-related injurious or potentially injurious behaviors performed while in pursuit of food or while cooking food, or (c) adverse health consequences from recurrent nocturnal eating.
3. The partial or complete loss of conscious awareness during the eating episode with subsequent impaired recall.
4. The disturbance is not better explained by another sleep, mental, or medical disorder, medication, or substance use [4].
SRED is more common in patients with sleep disorders like sleepwalking, periodic limb movement, restless legs syndrome, and sleep apnea, or mental conditions like posttraumatic stress disorder or major depressive disorder. Phase delays in melatonin secretion suggest that SRE should be considered a circadian condition. Moreover, there are cases of SRE under treatment with various psychopharmacological agents (for review see [1]).
Usually, SRED is treated with selective serotonin reuptake inhibitors at moderate dosages. Topiramate at 100 to 300 mg/day and clonazepam at 0.5 to 2.0 mg/day can be valid alternative options [5]. Melatonergic drugs have been tried in night eating syndrome but, to our current knowledge, they have not been tried in SRED [6]. The aim of this report was to determine whether there is a benefit of melatonergic drugs in SRED.
Case presentation
In June 2016 a 54-year-old white woman was admitted for treatment of her chronic major depressive episode in the course of a major depressive disorder as well as panic disorder. She complained of severely depressed mood, loss of pleasure and interest, loss of drive and energy, disturbed sleep, and increased weight. The current depressive syndrome began approximately 5 years before admission. Her first episode occurred 36 years ago at the age of 18. Moreover, she had spontaneous and situationally induced panic attacks with severe avoiding behavior that did not allow her to leave the house by herself for at least 2 years. She lives with her husband (married for 22 years) in a house and has two adult children.
She has hypertension, which is treated with 5 mg ramipril, and adiposity. Between the age of 10 and 16 she sleepwalked once a month. No other physical illness is known.
Ten years ago, therapy with doxepin, venlafaxine, and lithium was started. Episodes of dysfunctional eating at nighttime first appeared in the course of this treatment. As observed by her husband, she had nightly eating and SRE approximately once or twice a week, mostly without any recall in the morning. During these episodes, she ate large amounts of unusual foods (glass jars of marmalade, several bars of chocolate, and so on), brought food into her bed, and gained approximately 20 kg over the following years. SRE started with a frequency of approximately once a week and continued after the cessation of drug treatment. From 2015 to 2016 she took doxepin once or twice a week when she felt sleepless; besides this, she was on no other medication. During the nights with intake of doxepin she noticed SRE events. She became aware of the nightly eating through food wrappings in her bed, and her husband confirmed the consumption through observation. Her body mass index (BMI) was 48.8 kg/m2 at admission to our hospital. She had normal blood values except for cholesterol (217 mg/dl), C-reactive protein (CRP; 17.4 mg/l; persistently elevated until discharge), gamma-glutamyltransferase (GGT; 44 U/l), and blood glucose (123 mg/dl). Urine analysis was within normal limits. Her blood pressure was 140/90 mmHg under medication with ramipril 5 mg. An electroencephalogram (EEG), electrocardiogram (ECG), and thyroid scintigraphy showed no pathological results. Magnetic resonance imaging (MRI) was refused on the basis of her panic disorder and accompanying anxieties.
A head, eye, ear, nose, and throat physical examination was within normal limits. A chest examination was clear to auscultation bilaterally. An examination of her heart was notable for a normal S1, S2, and it was without rubs, murmurs, or gallops. Her pulse was 84 beats/minute. Her abdomen was soft, obese, with no organomegaly, and normoactive bowel sounds in all quadrants. A neurologic examination revealed that all her cranial nerves were grossly intact. Her strength was 5/5 throughout with 2+ reflexes. Her sensation to fine touch was intact throughout.
In our hospital, SRE continued under antidepressant treatment with sertraline plus doxepin. Under therapy with 150 mg sertraline, 50 to 100 mg melperone, and 150 to 300 mg bupropion, the frequency of nighttime eating increased to at least once a night (Fig. 1). With regard to sleep disorders, in the presence of severe adiposity, we found evidence for a moderate sleep apnea syndrome with an apnea-hypopnea index of 6.9 per hour on polygraphy. She complained about an irresistible urge to move her legs almost exclusively during nighttime, with temporary relief from this urge during movement, but had no pain or other unpleasant sensations and, therefore, did not completely fulfil the criteria for restless legs syndrome.
In order to treat SRE we added 50 mg agomelatine, in accordance with reports of treatment of a similar disorder, night eating [6]. During the following 14 nights, SRE completely vanished; it reoccurred after stopping agomelatine for a week. We then added 4 mg melatonin extended release and, again, SRE immediately vanished, and she lost 3 kg of weight in the following 3 weeks. Her BMI fell to 46.5 kg/m2 and she had lost 6.1 kg in total at discharge from our hospital. A follow-up 2 months after discharge showed a further reduction of her BMI to 45.7 kg/m2. At the second follow-up, 10 months after discharge, we found that she had remained well until 5 months after discharge and had then developed a further episode of major depression accompanied mainly by avoidance behavior. She refused complementary individual and group psychotherapy as well as her support group. A voracious appetite emerged during the day and she consumed large amounts of fast food, through which she regained weight up to a BMI of 49.4 kg/m2. Nightly eating occurred up to once a week from that time on but never again reached the level of seven times a week seen in August 2016.
Discussion
Our patient had a clear syndrome of SRE. The syndrome started before the onset of the current depressive episode and its treatment. Moreover, the frequency of SRE was associated with antidepressant treatment, especially doxepin, sertraline, and bupropion. In addition, she had a mild sleep apnea syndrome and fulfilled three out of four criteria of restless legs syndrome. Therefore, the condition might have, at least partly, been due to a medical condition. In fact, nighttime eating disorders are frequently associated with mental disorders, for example affective disorders, as well as sleep disorders [7] or treatment with antidepressants. So far, there is evidence from a case series that agomelatine may ameliorate night eating [6]. Since agomelatine affects the melatonergic as well as the serotonergic system, the exact mode of action remained unclear. Our case showed complete recovery from SRE with agomelatine as well as melatonin. These findings suggest (1) that the melatonergic properties of agomelatine may mediate the SRE-suppressing effect, (2) that melatonin is a potential alternative to agomelatine treatment of SRE, and (3) that even in the presence of other causes, like affective or sleep disorders (here, depression, panic disorder, sleep apnea), and psychopharmacological treatment, a treatment using a circadian mechanism is effective. To our current knowledge melatonergic drugs have not been tried on SRED before. Clearly, a placebo-controlled study with melatonin in patients with SRE is warranted.
Conclusions
This case suggests that melatonergic drugs lead to relief from SRE, even when the condition occurs in the context of physical and mental disorders as well as psychopharmacological treatment.
| 2017-10-10T00:07:35.051Z | 2017-09-28T00:00:00.000 | {
"year": 2017,
"sha1": "38434fd2b7beea3af6c17908561b3b3b19360061",
"oa_license": "CCBY",
"oa_url": "https://jmedicalcasereports.biomedcentral.com/track/pdf/10.1186/s13256-017-1438-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "38434fd2b7beea3af6c17908561b3b3b19360061",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
12850991 | pes2o/s2orc | v3-fos-license | Transcriptional gene silencing in humans
It has been over a decade since the first observation that small non-coding RNAs can modulate epigenetic states in human cells to achieve functional transcriptional gene silencing (TGS). TGS is mechanistically distinct from the RNA interference (RNAi) gene-silencing pathway. TGS can result in long-term stable epigenetic modifications to gene expression that can be passed on to daughter cells during cell division, whereas RNAi does not. Early studies of TGS have been largely overlooked, overshadowed by subsequent discoveries of small RNA-directed post-transcriptional gene silencing and RNAi. A reappraisal of early work has been brought about by recent findings in human cells where endogenous long non-coding RNAs function to regulate the epigenome. There are distinct and common overlaps between the proteins involved in small and long non-coding RNA transcriptional regulatory mechanisms, suggesting that the early studies using small non-coding RNAs to modulate transcription were making use of a previously unrecognized endogenous mechanism of RNA-directed gene regulation. Here we review how non-coding RNA plays a role in the regulation of transcription and epigenetic gene silencing in human cells by revisiting these earlier studies and the mechanistic insights gained to date. We also provide a list of mammalian genes that have been shown to be transcriptionally regulated by non-coding RNAs. Lastly, we explore how TGS may serve as the basis for the development of future therapeutic agents.
The history of RNA-directed transcriptional gene silencing (TGS)
Almost three decades ago, Marjorie Matzke et al. observed that over-expression of a transgene led to DNA hypermethylation and transcriptional silencing in doubly transformed tobacco plants (1) (Figure 1). Mechanistically, this type of silencing in plants was found to be the result of small non-coding RNAs directing epigenetic changes, specifically DNA methylation, to those loci containing sequences homologous to the small RNA. The phenomenon was termed small RNA-directed transcriptional gene silencing (TGS). TGS was later shown in Arabidopsis to require the action of RNA-dependent DNA methylation (2,3) and members of the Argonaute protein family (4). A few years later, RNA interference (RNAi), mediated by double-stranded RNAs, was discovered as a powerful post-transcriptional gene silencing (PTGS) system acting against messenger RNAs (mRNAs) in plants (5), and a few months later in Caenorhabditis elegans (6).
Transcriptional gene silencing in humans
The study of small non-coding RNA-directed TGS has been carried out in various model organisms such as plants (Arabidopsis thaliana), fission yeast (Schizosaccharomyces pombe), flies (Drosophila melanogaster) and worms (C. elegans) (reviewed extensively in (7,8)). A decade ago, the first report of RNA-directed TGS in human cells appeared, when exogenous siRNAs were used to silence a transgenic elongation factor 1 alpha (EF1α) promoter driving a Green Fluorescent Protein (GFP) reporter gene (9) (Figure 1). Importantly, the observed silencing was clearly at the transcriptional level, as indicated by nuclear run-on analysis. Moreover, the silencing was also epigenetic: inhibition was abrogated by 5-azacytidine (5-AzaC) and Trichostatin A (TSA), compounds that inhibit DNA methylation and histone deacetylation, respectively (9). This early observation was soon followed by other studies (10,11), all of which confirmed that small non-coding RNAs could functionally control gene transcription and epigenetic states in human cells. But the underlying mechanism of action remained unknown.
Mechanisms of small non-coding RNA-directed TGS
TGS is mechanistically distinct from the abundantly studied PTGS pathway of RNAi. One notable difference is that TGS results in long-term stable epigenetic modifications to gene expression that can be passed on to daughter cells during cellular division (reviewed in (12)). Early observations postulated that siRNA-directed TGS functions through an epigenetic nuclear mechanism distinct from RNAi-mediated PTGS in the cytoplasm (13). For instance, 5-AzaC and TSA were functional in reverting the siRNA-targeted TGS, indicating that epigenetic modes of gene regulation were at play in siRNA-directed TGS, rather than a PTGS-based mechanism (9). Indeed, recent studies have observed that two different siRNAs, one targeted to the promoter and one targeted to exon 1 of the coding transcript, can functionally repress the targeted gene in a TGS- or PTGS-based manner, respectively (14). A lot has been gleaned over the last decade regarding the mechanism of action of RNA-directed TGS in human cells. Studies carried out to determine the underlying mechanism of siRNA-directed TGS revealed that RNA-mediated TGS operates through RNA-directed methylation of histone 3 lysines 9 and 27 (H3K9 and H3K27, respectively) and DNA methylation at the targeted promoter (9,11,15-21) (Figure 2). These promoter-directed siRNAs interact with a low-level expressed (∼1-2%) promoter-associated RNA, which is essentially the 5′ UTR of the protein-coding gene (16,22) (Figure 2). It is worth noting that most genes and gene promoters appear to be transcribed to some extent (23,24), and experimental observations suggest that non-coding RNAs interact with target loci via Watson-Crick-based RNA:RNA hybridization (16,22) and not by double-stranded DNA invasion. Temporal studies have determined that exogenously introduced siRNAs targeted to a promoter region interact first with Argonautes 1 and 2 (AGO1 and AGO2) (17,25,26). siRNA and AGO interactions are found within the first 24 h at the siRNA-targeted promoter and are followed shortly thereafter by the recruitment of the H3K9me2 and H3K27me3 silent-state epigenetic marks (17), and later by the recruitment of DNA methyltransferase and DNA methylation at 72-96 h for some genes (14). It should be noted, however, that the role of DNA methylation in TGS in human cells is not as clearly understood as in plants; DNA methylation at the targeted promoter is not always observed in human TGS applications (Table 1). These differences may be explained by the duration of RNA targeting to the promoter, the robustness of siRNA targeting (e.g., delivery to the nucleus), the presence and abundance of promoter-associated RNAs and/or the dynamic interplay of proteins interacting with the promoter. Despite differences in the various experimental observations, a key consistent feature has been the observation that promoter-directed small RNAs can modulate gene transcription and that some level of epigenetic-based silencing is ongoing in the silenced genes.
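Because the targeting events described above rely on Watson-Crick complementarity between the small RNA and the low-abundance promoter-associated transcript, candidate promoter-directed guides are, at their simplest, reverse complements of windows of that transcript. The sketch below illustrates this with a made-up sequence and a simple GC-content filter; real guide selection additionally requires empirical criteria such as off-target screening, target transcript accessibility and nuclear delivery, which are not captured here.

```python
# Illustrative sketch only: derive candidate 21-nt antisense guides against a
# promoter-associated RNA by reverse complementation. The sequence is made up.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement_rna(seq):
    """Reverse complement of an RNA sequence (returned 5'->3')."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def candidate_guides(promoter_rna, length=21, gc_range=(0.40, 0.60)):
    """Slide a window along the promoter-associated transcript and return
    antisense guides whose GC content falls within a workable range."""
    guides = []
    for start in range(len(promoter_rna) - length + 1):
        target = promoter_rna[start:start + length]
        guide = reverse_complement_rna(target)
        if gc_range[0] <= gc_content(guide) <= gc_range[1]:
            guides.append((start, guide))
    return guides

# Made-up promoter-associated RNA (5'UTR-spanning transcript) for illustration.
promoter_rna = "AGGCUAUCGGAUUACGCGUAAGCUUGGACCAUUGCGAUCCAGGUAACGGU"
for start, guide in candidate_guides(promoter_rna)[:3]:
    print(f"target position {start:2d}: antisense guide 5'-{guide}-3'")
```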
The endogenous pathway of TGS in human cells; rise of long non-coding RNAs
While small RNAs were observed early on to regulate gene transcription in human cells by targeting epigenetic silencing complexes to loci containing complementarity to the small RNAs (Figure 1), the endogenous mechanism(s) driving this form of gene regulation in human cells remained largely unknown. MicroRNAs (miRNAs) have been shown to be endogenous drivers of TGS for some genes in human cells (27-30) (Table 1). In 2005, through the efforts of the FANTOM and ENCODE consortia, it started to become apparent that a large fraction of the human genome generates long non-coding RNAs (lncRNAs) and that many of these transcripts are antisense to protein-coding counterparts (31,32). Several of these sense/antisense or bidirectionally transcribed genes are evolutionarily conserved, suggesting functional cues for the retention of these elements (33,34). Indeed, studies of imprinted genes and X-inactivation found that cis-acting lncRNAs were actively involved in the epigenetic regulation of these dosage-dependent, regulated loci (35).
Mechanisms of small and long non-coding RNA-directed TGS
To date there are ∼55 reports of small RNA-directed TGS and ∼10 of antisense lncRNA-directed TGS (Table 1). Mechanistically, much of what we know about how small non-coding RNAs, such as siRNAs, miRNAs and small antisense RNAs (sasRNAs), direct TGS has been determined from cell culture studies. The promoter-targeted small RNAs interact with various proteins to guide TGS, beginning in the first 24 h with direct interactions with AGO1 and AGO2 (17,18,25), followed shortly thereafter by interactions at the targeted promoter with DNMT3a (14,18,77,78) and HDAC1 (14,20), and resulting ultimately in histone 3 lysine 9 di-methylation and histone 3 lysine 27 tri-methylation (H3K9me2 and H3K27me3, respectively) (14,16-18,20,79,80) (Figure 3). SiRNA-directed TGS has also been observed to occur in the absence of DNA methylation, suggesting that alternative routes may exist for RNA-mediated transcriptional and epigenetic silencing (10). Small RNA-directed TGS appears to require a template or target transcript at the corresponding targeted promoter (16,22), similar to the way plants utilize RNA Polymerase V-transcribed and processed siRNAs to regulate DNA methylation and TGS (reviewed in (81)). Notably, in plants there is a requirement for RNA-dependent RNA polymerase (RdRP) activity to amplify RNA Polymerase V transcript-directed TGS (81), whereas humans lack such a polymerase, which opens up a methodology for specific RNA-directed epigenetic modes of regulation. Curiously, this is exactly what lncRNAs appear to be doing in human cells via cis- and trans-specific targeting of epigenetic complexes to particular loci (Figure 3 and Table 1), similar to what is also observed in Saccharomyces cerevisiae, which also lacks RdRP activity (82,83).
Early studies carried out with S. cerevisiae indicated that antisense non-coding RNAs function endogenously to direct epigenetic gene silencing in place of RdRP-mediated mechanisms (82,83). Parallels between S. cerevisiae and previous observations of small antisense RNA-directed TGS in human cells (18) have emerged, suggesting that antisense transcripts also function to direct TGS (Figure 3). Most notable are the observations that particular antisense lncRNAs, first described for the tumor suppressor genes p15 (38) and p21 (39), function to epigenetically modulate their protein-coding counterparts (Figure 1, Table 1). One interesting, and surprisingly overlooked, early study found that antisense transcription was involved in DNA methylation in thalassemia (40), and early work also linked antisense transcripts and DNA methylation to the regulation of HIV (51) and MYC (84,85).
Mechanistically, far less is known about how antisense lncRNAs direct epigenetic silencing in human cells. Studies carried out with the lncRNA HOTAIR indicate that bimodal chromatin-modifying complexes can be localized to the HOX locus via the action of this lncRNA (86). A common theme is also evident with Kcnq1ot1 (45) and the p53-regulatory lincRNA-p21, which indicates that the entire p53-expressed pathway is controlled by the action of this lncRNA at the p53 locus (57). Indeed, many lncRNAs have been observed to be associated with chromatin (87), but mechanistic insights into the process of lncRNA-directed gene regulation remain less clear. Interesting insights into the mechanism of action of lncRNA-directed TGS came from a recent study looking at the PTEN pseudogene. It had been reported previously that the PTEN pseudogene functions as a miRNA 'sponge' (64), similar to the CEBPA lncRNA that acts to sponge DNMT1 away from the CEBPA promoter (88). Studies interrogating the PTEN pseudogene in greater detail determined that this pseudogene also expresses an antisense lncRNA that functions in trans to direct TGS to the PTEN promoter and control PTEN expression epigenetically (53). Mechanistically, the PTEN pseudogene-expressed antisense lncRNA modulates PTEN transcription by recruiting DNMT3a and EZH2 to the PTEN promoter. The parallels between the functions of the PTEN pseudogene and previous observations with small antisense ncRNA-directed TGS are notable, as both involve the action of DNMT3a (Figure 3). It is noteworthy that DNMT3a is the only known de novo DNA methyltransferase in human cells (89) and has been observed previously to be the only DNA methyltransferase to bind non-coding RNAs, including small ncRNAs, both antisense and double-stranded (18,77,78,90), and lncRNAs (53,91). There is an interesting connection between DNMT3a and epigenetic silencing, including studies indicating that DNMT3a co-immunoprecipitates with HDAC1 (92,93) and EZH2 (94), as well as early predictions that DNA methylation is an active participant in X-inactivation (95), one of the first bona fide lncRNA regulatory pathways described. Collectively, a paradigm is emerging in human cells which proposes that non-coding RNAs, both small and long forms (Figure 3), function through the action of DNMT3a to modulate chromatin and epigenetic states of gene expression. While there are several other mechanisms of action described for lncRNAs in human cells, the interactions with DNMT3a and the targeting of transcriptional and epigenetic states are of particular interest, as this mode of gene regulation has the potential to be long-lasting, heritable and of significant relevance to the development of targeted therapeutics (reviewed in (96)).
Therapeutic applications of RNA-directed epigenetic regulation of gene expression
The utility of small RNA-induced TGS as a therapeutic has been largely ignored, mainly due to the pervasiveness of RNAi-targeted approaches to degrade mRNAs. The main concern with RNAi and post-transcriptional mechanisms of gene silencing (Figure 2) is the duration of their therapeutic effect. The effector siRNAs required to drive RNAi must be administered continuously to repress a therapeutic target gene. This is not the case with RNA-induced TGS, where stable, long-term silencing can be achieved following a relatively short duration of promoter targeting with the siRNAs (19,20,97-100) or small antisense RNA (14,101). This is because the mode of action for the observed gene silencing is transcriptional, driven ultimately by epigenetic silencing (79,102), and not 'slicing' of the gene's messenger RNA, as is the case with RNAi. One universal hurdle that both RNAi and RNA-induced TGS face is the targeted delivery of the effector RNAs to those cells requiring treatment. One approach is to utilize synthetic antisense oligonucleotides targeted to promoters of interest. This approach has worked with regard to blocking transcription (103) but was not found to induce robust epigenetic silencing unless the particular oligonucleotides were RNA based (104). However, it may be that better interrogation of each non-coding RNA-targeted promoter is required to delineate the best promoter-associated transcripts to target, and many of the earlier studies may have neglected this notion. Indeed, establishing TGS in the absence of a target promoter RNA has not been reported, and attempts by some groups, including ours, have proven fruitless. Another approach might be to deliver the effector RNAs using receptor-targeted aptamers, which has shown promise for targeting HIV-infected cells (105,106). While delivery remains an important concern, the notion that one need only target a particular gene for 2-4 days to instill stable epigenetic silencing is promising with regard to minimizing the need for sustained delivery. Recent studies suggest that small RNA-directed TGS is feasible and that stable epigenetic marks can be imposed at small RNA target loci in vivo (99,107).
Another area of therapeutic utility can be found in the plethora of lncRNAs that appear to be involved in various diseases. Emerging evidence suggests that non-coding RNAs play a wide role (108) in various disease states in humans. Genome-wide observations of diseased states, such as heart failure (109), indicate that significant differential and discordant expression between protein-coding genes and non-coding antisense transcripts and pseudogenes is prevalent (110). To date, the list of lncRNAs involved in human diseases is expanding at an unprecedented rate. LncRNAs have been observed in diseases ranging from cancer (57,86,105,106) to HIV (111,112), to autism (113), to pluripotency and differentiation (114-116). It is worth underscoring that many of the disease-relevant lncRNAs have been observed to be antisense to particular protein-coding genes. A significant obstacle to using RNAi and other post-transcriptional effectors for targeting antisense lncRNAs is the fact that double-stranded siRNAs have an ability to target both sense and antisense transcripts (117). The use of RNA-directed TGS avoids this issue by targeting the lncRNA promoter with single-stranded antisense transcripts (52). The targeting of endogenous effector antisense lncRNAs can result in the de-repression and subsequent transcriptional activation of the lncRNA-targeted locus (Figure 4). Using this mode of action, it becomes feasible to activate gene expression so as to affect those protein-coding genes under sustained lncRNA-directed TGS (Figure 4). This has proven an effective approach to inducing genes both in vitro (39,52,118-121) and in vivo (42,107,122), but it presupposes that there are known antisense lncRNAs regulating the therapeutic target gene. Collectively, the advantages of using RNA-directed TGS as a therapeutic are many and include: (i) strand-specific targeting of a gene; (ii) stable, long-term, epigenetically based silencing that can be established at particular genes of therapeutic interest; and (iii) antisense RNA-based approaches that work as well as, if not better than, double-stranded RNAs, as the endogenous pathway of RNA-directed TGS appears to contain significant overlap with small antisense RNAs and antisense lncRNAs (Figure 3).
CONCLUSION
It has been roughly 10 years since the first observation that promoter-directed RNAs can affect gene transcription (Figure 1 and Table 1). This seminal observation in 2004 (9) was indicative of a role for RNA in regulating gene expression, a notion proposed ∼5 decades ago but largely overlooked (123,124). Possible reasons for the poor early adoption of RNA-directed TGS (Table 1) are varied but may include (i) the unfortunate retraction of a similar paper published in Nature (125), and/or (ii) the overwhelmingly positive response to PTGS and the rejection of any RNAi-related phenomena occurring in the nucleus, despite the fact that RNAi was shown to be functional in the human nucleus in 2005 (126) and confirmed in many subsequent studies (25-27,127-131).
The notion that RNA may function as the master gene regulator in the cell was something proposed by Britten and Davidson in 1969 (123), which at the time was largely neglected by the broad scientific community. With the advent of high-throughput technologies and the findings from ENCODE that most of the human genome is transcribed and likely plays a functional role (132-139), it is becoming apparent that Britten and Davidson's theory should be reappraised. Certainly, lncRNAs are abundantly active in the nucleus, and many of them are active modulators of transcriptional and epigenetic modes of gene expression (reviewed in (37,76)) and appear to share many of the mechanistic characteristics observed in small RNA-directed TGS (Figure 3). Collectively, the mounting observations that antisense non-coding RNAs, both small and long, directed to gene promoters can affect transcription by the recruitment of silent-state epigenetic complexes suggest that a pervasive and underappreciated role for non-coding RNAs is part of the basic fabric of life. Knowledge of this molecular pathway may prove incredibly insightful with regard to the development of disease, including epigenetic silencing of gene expression, and the development of new targeted therapeutics aimed at specifically affecting gene expression. The next decade could prove an exciting time for our understanding of non-coding RNAs in transcriptional gene regulation and their application as novel therapeutics. | 2018-04-03T04:30:09.459Z | 2016-04-07T00:00:00.000 | {
"year": 2016,
"sha1": "c55c52f988e8cb6421915443bebb22ddc42ba761",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/nar/article-pdf/44/14/6505/7626334/gkw139.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c55c52f988e8cb6421915443bebb22ddc42ba761",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
1797890 | pes2o/s2orc | v3-fos-license | Vortex Dynamics in Selfdual Maxwell-Higgs Systems with Uniform Background Electric Charge Density
We introduce selfdual Maxwell-Higgs systems with uniform background electric charge density and show that the selfdual equations satisfied by topological vortices can be reduced to the original Bogomol'nyi equations without any background. These vortices are shown to carry no spin but to feel the Magnus force due to the shielding charge carried by the Higgs field. We also study the dynamics of slowly moving vortices and show that the spin-statistics theorem holds for our vortices.
Introduction
We consider the theory of complex and neutral scalar fields coupled to a gauge field with the Maxwell kinetic term in three dimensions. There exist topological vortices of nonzero magnetic flux in the broken phase. [1] We introduce a nondynamical uniform background electric charge density to the system, which would be shielded by the electric charge carried by the Higgs field. In this case, vortices are claimed to carry nonzero spin and to feel the fluid dynamical Magnus force, a Lorentz force, due to the shielding charge carried by the Higgs field. [2] In addition, vortices in a charged fluid with a nondynamical background magnetic field have been studied to see whether they are anyons. [3] Here we choose a set of special values for the coupling constants so that there is a bound on the energy, which can be saturated by the vortex configurations satisfying the selfdual equations, generalizing the Bogomol'nyi case. [4] We study in detail the vortex dynamics in this selfdual model.
Recently, there has been a renewed interest in the possibility of the Magnus force in real superconductors. [5] The question is whether there is a nonzero Berry's phase gained by a vortex wave function due to the Magnus force when it goes around a closed loop on a plane. In real superconductors, the Cooper pairs condense with a net nonzero electric charge, which is neutralized by the background electrons and positive ions. The Maxwell-Higgs system with the background electric charge density is thus more closely related to a real superconductor than the Maxwell-Higgs system without it, as noted by Davis. [2] Our analysis confirms the presence of the Magnus force more clearly. Since our system is selfdual, i.e., there is no static interaction between vortices, we can regard our model as lying at the boundary between type I and type II superconductors.
The Magnus force is the fluid dynamical force responsible for curve balls. The Magnus force also plays an important role in the vortex dynamics in superfluids. [6] In a superfluid, vortices carry infinite angular momentum density per unit length and feel a finite amount of force per unit length. At large distances, the Magnus force can induce a Berry's phase. For example, in Chern-Simons-Higgs systems, vortices carry nonzero magnetic flux and charge. These vortices carry nonzero spin, and their statistics can be explained only when one puts together the naive Aharonov-Bohm phase due to the charge and magnetic flux and that due to the Magnus force. [7] In our case, it turns out to be subtle to define the conserved angular momentum in the field theory due to the divergent contribution from the spatial infinity. We provide a satisfactory modification of the Noether angular momentum in the field theory. The selfdual configurations are degenerate in energy but not in angular momentum. The total angular momentum is a complicated function of the vortex positions. We show that there is, however, no intrinsic spin carried by our vortices, which does not contradict the fact that our vortices carry nonzero magnetic flux but no electric charge. In our systems, the Magnus force is in a way decoupled from the spin of vortices.
As there are no massless excitations in our systems, as we will see, we expect little or no radiation to be emitted when vortices are moving very slowly. The field configuration of these slowly moving vortices would be very close to that of vortices at rest. There would be an effective Newtonian action describing these vortices, made of terms linear and quadratic in velocities. The interesting goal is then to find this effective action. The effective action for slowly moving solitons was first studied by Manton. [8] There have been numerical and analytical studies of this effective action for slowly moving vortices when there is no background charge. [9] As these approaches are not directly applicable in our case, we take a somewhat different approach. [7] We find the first order terms explicitly. The angular momentum calculated from this effective action turns out to be identical to that obtained from the field theory. The linear term has an interesting implication for the vortex dynamics. A single vortex would move in a circle due to the Magnus force, which implies a nontrivial Berry's phase in quantum mechanics. For a system of two vortices, the Magnus force becomes more complicated when the two vortices are close to each other, since the charge density is not uniform. When two vortices are separated by a large distance, however, we will show that there is no additional Berry's phase which can be attributed to the statistics between vortices, confirming the spin-statistics theorem for our vortices.
This paper is organized as follows. In Sec. 2 we introduce the selfdual Maxwell-Higgs systems with uniform background electric charge density and study their basic properties. We show that the naive conserved linear and angular momenta have divergent contributions from the spatial infinity. We show that our selfdual equations can be reduced to those found by Bogomol'nyi. In Sec. 3, we study the rotationally symmetric vortices numerically. In Sec. 4 we redefine the linear and angular momenta and show that our vortices do not carry any spin. In addition, we provide an explicit expression of the angular momentum as a function of vortex positions. In Sec. 5, we study the effective action of slowly moving vortices. This action contains the terms linear and quadratic in vortex velocities. The linear terms describe the magnetic interaction between vortices themselves and between vortices and the shielding charge carried by the Higgs field. We use this linear term to calculate the statistics of our vortices. In Sec. 6, we conclude with some remarks and questions. In the appendix, we calculate the nonconserved angular momentum derived from the symmetric energy momentum tensor as a function of vortex positions.
Model
We consider the theory of charged and neutral scalar fields, φ = f e^{iθ}/√2 and N, coupled to a photon field A_µ. We assume that there is a uniform background electric charge density ρ_e, which is nondynamical. (A uniform external magnetic field plays the role of a chemical potential for the magnetic flux after a field shift and so plays no role in the classical dynamics of vortices.) The lagrangian for this theory is The Lorentz symmetry is explicitly broken due to the external electric charge density. The charge conjugation, θ → −θ, A_µ → −A_µ, is also explicitly broken by the external charge density. The parity transformation is not broken. The time reversal, t → −t, A_0 → −A_0, is also explicitly broken. However, CTP is still a good symmetry. Usually, such selfdual systems are related to N = 2 supersymmetry with a central term, with the neutral scalar field and the gauge field forming part of the vector multiplet.
The system is invariant under the local gauge transformation θ → θ + Λ, A_µ → A_µ − ∂_µΛ/e, leading to Gauss's constraint from the variation of A_0, where the dot denotes the time derivative. Since the background charge density is uniform, the action is invariant under spacetime translations and there is a conserved energy momentum tensor as a Noether current, satisfying ∂_µ T^{µν} = 0. The total energy E = ∫d²r T^{00} can be rewritten, after a partial integration and using Gauss's law (2.2), in terms of a suitable energy density. The conserved linear momentum is ∫d²x T^{0i}. Under the local gauge transformation θ → θ + eΛ and A_i → A_i − ∂_iΛ, T^{0i} is invariant up to a total derivative due to Gauss's constraint, leaving the linear momentum invariant. As the action is also invariant under rotations, there is a conserved angular momentum current J^µ, with T^{µν} given by Eq. (2.3), satisfying ∂_µ J^µ = 0. The conserved Noether angular momentum is also invariant under local gauge transformations. Note that under a translation of the whole system by a constant vector a, the angular momentum shifts as J → J + ε_{ij} a_i P_j. It turns out that the above definition of the momenta is not entirely correct. When there are n vortices, we will see that ∂_iθ → −(n/r) φ̂_i and e f² A_0 → ρ_e/e for large r. Thus, there is a divergent contribution from the spatial infinity to these momenta. The definition of finite, well-defined and conserved momenta will be given in Sec. 4.
For reasons which will become clear in a moment, we consider a specific potential. This potential has a term for the interaction between the external charge and the neutral scalar field. While the physical origin of this interaction is obscure, we can imagine such a possibility. This potential is not bounded from below, which we will see is not a problem due to Gauss's law.
After some algebra and by using Gauss's law (2.2), we get, for any physical configuration, the expression (2.10). The ground state of this energy functional will be in the phase where ⟨f²⟩ ≠ 0. As there will be no massless mode, the gauge invariant quantities F_{0i} and ∂_iθ + eA_i will vanish exponentially at the spatial infinity, making the boundary contribution vanish. After integrating Eq. (2.10) over space, we get a bound on the total energy, because the rest of the terms are positive definite.
This bound is saturated by time-independent configurations in a gauge θ̇ = 0 which satisfy Gauss's law (2.2) and the selfdual equations (2.12). For these selfdual configurations Gauss's law (2.2) becomes Eq. (2.13). Note that Eq. (2.12) for A_i, f, θ is identical to the selfdual vortex equations obtained by Bogomol'nyi. [4] As far as the scalar field and the vector potential are concerned, the selfdual vortex configurations without background charge density are identical to those with it. The existence and uniqueness of the selfdual solutions for the scalar field and the vector potential, describing the 2n-parameter configurations of n vortices, have been studied and proven. [10] We will see that A_0 satisfying Eq. (2.13) can be expressed explicitly in terms of f. Thus, in our system also there exist unique selfdual configurations of vortices parameterized by the vortex positions.
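For reference, the Bogomol'nyi equations to which Eq. (2.12) reduces take the following standard form in a common normalization (an illustrative sketch only; the precise coefficients and sign conventions of the paper are not reproduced here):
\[
(D_1 \mp i D_2)\,\phi = 0, \qquad F_{12} = \pm\, e\,\bigl(v^2 - |\phi|^2\bigr),
\]
supplemented in the present model by a Gauss-law constraint of the schematic form ∇²A_0 = e² f² A_0 − ρ_e, which fixes A_0 in terms of f and is consistent with the asymptotic shielding condition e f² A_0 → ρ_e/e quoted above.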
The ground state is a homogeneous configuration of zero energy. From Eqs. (2.12) and (2.13) we get this ground state, described classically, in a unitary gauge, by a homogeneous field configuration. (We can imagine inhomogeneous configurations which have a finite region where f = 0 and N is arbitrarily large, making the contribution from the potential energy arbitrarily negative. The bound (2.11) tells us that these configurations have positive energies.) The small-fluctuation analysis around this vacuum is performed in the unitary gauge, with δf, δN, δA_µ ∼ e^{iωt + i k·r}. There are three different massive modes at long distance, or small k, with real eigenvalues, so there is no instability due to these modes. We can see easily that the first spectrum describes a vector boson of spin ±1 and the last two spectra describe two scalar bosons.
For the selfdual configurations of positive magnetic flux, we choose the upper sign in Eq. (2.12), which gives Eq. (2.17). For the positive magnetic flux configurations, the vorticity in θ turns out to be negative, as we will see. We describe n vortices located at positions q_a by Eq. (2.19). We see that f should vanish as |r − q_a| at the position of each vortex for the complex scalar field φ to behave well. Putting together Eqs. (2.17) and (2.19), we get Eq. (2.20). In addition, we obtain an expression for the excess flux. For a given f configuration satisfying Eq. (2.20), Gauss's law (2.13) determines the A_0 configuration. It turns out that A_0 can be solved explicitly in terms of f, as in Eq. (2.21). Notice that the quantity on the right side is regular at the vortex positions and approaches the right asymptotic value at spatial infinity, and that ∂/∂r = −Σ_a ∂/∂q_a when acting on f, due to the space translation invariance.
Rotationally symmetric solutions
The rotationally symmetric ansatz for n positive-magnetic-flux vortices at the origin is given in the polar coordinates (r, ϕ) by f(r), θ = −nϕ, A_ϕ(r), and A_0(r). The total magnetic flux is then 2πn/e. Eq. (2.17) becomes a set of ordinary differential equations in the rescaled radial variable s, with the prime denoting d/ds. The boundary conditions are F(s) ∼ s^{2n} for small s and F(∞) = 1.
The scalar potential in Eq. (2.21) takes a correspondingly simplified form. Since we are interested in regular solutions, there is one parameter near s = 0 to be adjusted to get the right asymptotic behavior at r = ∞.
Near s = 0 the regular solution has a power-series form, where the dots indicate the even power series in s with higher powers than those shown and whose coefficients are fixed in terms of a and b. For large r, the regular solution approaches its asymptotic values exponentially in leading order.
As Eq. (3.2) is identical to the selfdual equations without any background electric charge density, their properties have been well studied numerically. The new aspect here is that the behavior of the electric charge density, e²f²A_0 − ρ_e, is nontrivial. At the origin the Higgs field vanishes and so the background electric charge is exposed. As this exposed charge is screened, the total electric charge density goes from negative to positive and then falls to zero exponentially.
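Because Eq. (3.2) coincides with the background-free Bogomol'nyi equations, the rotationally symmetric profiles can be obtained by a simple shooting method. The sketch below (Python, using SciPy) integrates the conventional dimensionless form of the profile equations, f'(s) = (n/s)(1 − a)f and a'(s) = (s/n)(1 − f²) with f, a → 1 at large s; this normalization, the variable names, and the numerical parameters are illustrative assumptions and may differ from the paper's F(s) by rescalings.

from scipy.integrate import solve_ivp

def rhs(s, y, n):
    # Assumed dimensionless Bogomol'nyi profile equations:
    #   f' = (n/s) (1 - a) f ,   a' = (s/n) (1 - f^2)
    f, a = y
    return [(n / s) * (1.0 - a) * f, (s / n) * (1.0 - f ** 2)]

def overshoots(c, n, s0=1e-3, s_max=15.0):
    # Start from the small-s behavior f ~ c s^n, a ~ s^2/(2n), integrate outward,
    # and report whether f crosses 1 (i.e. the shooting parameter c is too large).
    def cross_one(s, y, n):
        return y[0] - 1.0
    cross_one.terminal = True
    cross_one.direction = 1
    sol = solve_ivp(rhs, (s0, s_max), [c * s0 ** n, s0 ** 2 / (2.0 * n)],
                    args=(n,), events=cross_one, rtol=1e-10, atol=1e-12)
    return sol.t_events[0].size > 0

def profile_parameter(n=1, lo=0.1, hi=2.0, iters=60):
    # Bisect on c: too large -> f overshoots 1; too small -> f turns back toward 0.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if overshoots(mid, n):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print("adjustable small-s coefficient for an n = 1 vortex:", profile_parameter(1))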
Linear and angular momenta
Our vortices carry nonzero magnetic flux but no net electric charge. Thus, we can consider our system on a large sphere rather than on a plane. On the sphere there would be a conserved angular momentum vector, which remains finite in the infinite volume limit. In this limit, the angular momentum vector would become the linear and angular momenta on the plane. Here we will simply guess the correct answer on the plane. We first notice that when there are vortices, the net magnetic flux on the sphere is nonzero and the vector potential A_i would have a Dirac string. The vector field Ā_i is gauge invariant and well-defined except at the positions of the vortices, where f = 0. Ā_i would also carry the same magnetic flux as A_i. Thus there should be Dirac strings for Ā_i, whose only possible positions are at the centers of the vortices, where Ā_i is ill-defined. We want to express the momenta in terms of Ā_i rather than A_i.
First, we use the field equations to rewrite the energy momentum tensor (2.3) in a form involving the nonconserved symmetric energy momentum tensor. Note that T_{00} = T^S_{00} up to a total space derivative. The angular momentum current (2.7) can be rewritten accordingly. After throwing away any total derivatives, we still get divergent contributions to the angular momentum J̃ = ∫d²x J̃^0 for the vortex configurations.
Now consider a generic θ configuration of the form θ = −Σ_a Arg(x − q_a) + η, where η is a single-valued function. We then introduce a current whose divergence vanishes up to contributions from the spatial infinity and a mild singularity at the origin; neglecting these, we get a conserved charge ∆J. We introduce the new angular momentum J as the sum J̃ + ∆J. The divergent contributions at the spatial infinity cancel each other in J, since ∂_jθ + eA_j falls off exponentially. There are also no divergent contributions from the vortex positions, because ∂_iθ ∼ 1/|x − q_a|. This angular momentum is also conserved. We also note that J would generate the same transformation of the fields as the canonical angular momentum J̃. Hence the angular momentum given by Eq. (4.9) is as good as we could hope for. The well-defined linear momentum is obtained from the simple observation that under the spatial translation x → x + a, J → J + ε_{ij} a_i P_j. This linear momentum is given in Eq. (4.10). We are now in a position to explore the angular momentum for the selfdual vortex configurations. These configurations are time-independent and satisfy Eqs. (2.17) and (2.13). Since there are no divergent contributions from the vortex positions, we can take out these positions from the integration domain without changing the value of the angular momentum. In this domain of integration, Eq. (4.9) for these selfdual configurations becomes an expression in which the line integration is over the vortex positions and l^i_a = x^i − q^i_a. The above angular momentum separates naturally into two pieces: the extrinsic part proportional to q^i_a and the intrinsic part proportional to l^i_a.
To evaluate the total angular momentum, let us study the behavior of the fields near q_a. As f ∼ |r − q_a| at each vortex, we can expand f near q_a; this expansion, together with Eq. (2.17), then implies the corresponding behavior of Ā_i near each vortex. We also get the near-vortex behavior of A_0 from Eq. (2.21). We calculate the line integrals in Eq. (4.11) by using ∮ dl^i_a l^j_a /|l_a|² = πδ^{ij}. By using Eq. (4.14), we get the intrinsic part, and, also with Eq. (4.14), the extrinsic part (4.17); with Eq. (4.15), the total angular momentum (4.11) can be expressed in the form (4.18). The angular momentum is now expressed rather explicitly in terms of the selfdual configurations.
Since the scalar potential A_0 is a smooth function of r and q_a, the external part (4.17) is a smooth function of the vortex positions. When all vortices come together at the origin, the configuration is symmetric and so the total angular momentum vanishes. This means that these vortices do not carry any spin, in contrast to the claims in Ref. [2]. The part of the angular momentum which is proportional to Σ_a |q_a|² is due to the Magnus force from the shielding charge. Such a term arises whenever charged particles move in a uniform magnetic field. We will derive Eq. (4.18) in the next section via a rather different path.
If we had considered a general Maxwell-Higgs system, vortices would have a static force between them and would not stay stationary in general. We can still calculate the spin of vortices by considering a single vortex sitting at the origin. Following a similar line of reasoning as before and using Eq. (4.9), which is correct for any potential, one can see easily that the spin of vortices is zero.
For two vortices located at points q_1 = q/2 and q_2 = −q/2, the symmetry of the configuration implies that f(r; q) = f(r; −q) = f(−r; −q), which in turn constrains the expansion coefficients. In addition, the f configuration is invariant under the reflection which exchanges the two vortices, implying b_1 = qB(q) (4.19), where q = |q|. The conserved angular momentum (4.18) then becomes Eq. (4.20). We show in the appendix that B(q) = 1/q + O(q) near q = 0. Thus Eq. (4.20) goes to zero when q = 0, which again implies that our vortices carry no spin.
A rather similar expression as in Eqs.(4.18) and (4.20) has been obtained for vortices in the selfdual Chern-Simons Higgs systems. [7] In that case, vortices carry nonzero spin s v and the total angular momentum of two vortices decreases from 4s v to 2s v as the distance between two vortices increases from zero to infinity.
Slow motion of vortices
The selfdual configurations of n vortices are characterized by the vortex positions q_a. Let us now consider the dynamics of slowly moving vortices. When they move slowly enough, we expect that the classical radiation is very small and their dynamics is described by Newtonian mechanics. Thus, we hope that their dynamics can be summarized by an effective Newtonian lagrangian or action. Our goal is to find this effective action, which would be expressed in terms of the selfdual configurations.
We expect that the field configuration for a given trajectory {q_a(t)} of slowly moving vortices is very close to the selfdual configuration because there is little or no radiation. Even though the Galilean transformation is no longer a symmetry of the system, due to the background electric charge density, we can get a hint for the field configuration of slowly moving vortices from this transformation. When there is no background charge density, the configuration for uniformly moving vortices is corrected linearly in velocities (especially the vector field) and satisfies the lagrangian field equations to first order in velocities. [7] Thus, it seems sensible to assume that the field configuration of slowly moving vortices has first order corrections in general, of the form f(r, q_a(t)) + ∆f, N(r, q_a(t)) + ∆N, A_µ(r, q_a(t)) + ∆A_µ, with θ = −Σ_a Arg(r − q_a), and that these fields satisfy the field equations to first order in velocity. We have chosen a gauge where there is no correction to θ. The zeroth order terms satisfy the selfdual equations (2.12) and (2.13) with the upper sign. The first order corrections then satisfy the field equations to linear order in velocities, Eq. (5.1). We evaluate the original field action for these corrected field configurations of slowly moving vortices. The zeroth order lagrangian will simply be the negative of the total mass. The first order correction, ∆_1L, can be simplified, since the zeroth order field configurations satisfy the time-independent field equations. The selfdual equations (2.17) imply that the zeroth order A_i is transverse, making the first term of ∆_1L a total derivative. We now use Gauss's law (2.13) to arrive at Eq. (5.4), whose right hand side is infrared divergent. To understand the infrared divergent term, let us define the quantities V^i_a. Since the contribution from the point r = q_a is nonsingular, we take out this point from the integration domain. V^i_a is a vector potential, as ∆_1L = Σ_a q̇^i_a V^i_a. Evaluating near r = q_a gives Eq. (5.6), where we used ∮ dl^i l^j /|l|² = πδ^{ij} and its generalizations. We use Eqs. (4.13) and (4.15) to find that the linearly divergent part is independent of the q_b; here we have used the relation Σ_c ∂b^i_a/∂q^k_c = 0, which follows from the translation invariance. Integrating Eq. (5.7), we can choose a gauge so that the vector potentials take an explicit form. Even though the b^i_a fall to zero exponentially as the mutual distances between vortices increase, this does not mean there is no nontrivial statistical phase when two vortices are exchanged, because the above derivation assumes that the vortices do not overlap and so could miss a singular potential with zero magnetic field when the vortices are separated. As we will see later, a better guide is whether a vortex feels any additional magnetic field besides the average uniform magnetic field due to the presence of the other vortices.
The second order correction to the effective lagrangian, ∆_2L, is obtained similarly. We can imagine a possible contribution to ∆_2L from the second order corrections of the field configuration. However, the first order field equation implies that this possible contribution vanishes, making our approximation consistent. After using Eq. (5.1), satisfied by the first order corrections, we get a simplified expression. In principle, we can solve Eq. (5.1) and express the first order corrections of the fields in terms of the selfdual configurations, leading to ∆_2L fully expressed in terms of the selfdual configurations. Samols in Ref. [9] managed to express the second order terms explicitly for the case when there is no background electric charge density.
It would be interesting to find such an expression in our case too.
We now have an effective lagrangian for slowly moving vortices. Since the energy of vortices at rest does not depend on the vortex positions, there is no static force between them. However, the shielding charge carried by the Higgs field manifests itself as a uniform magnetic field acting on vortices. The Magnus force due to the shielding charge density is now a Lorentz force exerted by this magnetic field. In addition, there is a magnetic interaction between vortices because the shielding charge density around vortices is not uniform.
Let us study the effective action for slowly moving vortices in more detail. The effective lagrangian can be written figuratively as a sum of terms linear and quadratic in the velocities, from which the dynamical equation for the a-th vortex, the conserved linear momentum, and, by rotational invariance, the conserved angular momentum follow. Under the translation x → x + a, the angular momentum transforms as J → J + ε_{ij} a_i P_j.
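Schematically, and only as an illustrative form consistent with the terms derived above (not a reproduction of the paper's explicit equations), the effective Lagrangian has the structure
\[
L_{\rm eff} = -E_0 + \sum_a \dot q_a^i\, V_a^i(q) + \tfrac{1}{2} \sum_{a,b} T^{ab}_{ij}(q)\, \dot q_a^i\, \dot q_b^j ,
\]
where E_0 is the rest energy of the static multivortex configuration, V_a^i is the velocity-linear 'vector potential' responsible for the Magnus (Lorentz-type) force, and T^{ab}_{ij} is the velocity-quadratic metric; the conserved linear and angular momenta then follow from the translational and rotational invariance of this Lagrangian.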
We can calculate the energy, linear momentum, and angular momentum for our configurations of slowly moving vortices from Eqs. (2.4), (4.10), and (4.9). The interesting question is whether, for our slowly moving vortices, these field theoretical conserved quantities are identical to those from the above effective action. For vortices at rest, we can compare easily. The total energy in the field theory would be just the total mass; the kinetic energy vanishes. The total angular momentum (5.18) from the effective action, with Eq. (5.12), becomes exactly what is given in Eq. (4.18). Since these results agree with each other, we have some confidence in the linear part of our effective action.
We have the effective lagrangian for slowly moving vortices. Let us first apply the effective action to a single vortex at the position q. The effective action would then be just that of a point particle, with T^{ij}_{aa} = mδ^{ij} and m the particle mass. (We have not shown that this mass is the rest mass of the vortices.) This is a lagrangian for a charged particle moving on a uniform magnetic field background. The Magnus force manifests itself as the Lorentz force due to this magnetic field. Since our vortices do not carry any spin, and the Magnus force is usually associated with nonzero spin, we are in a somewhat ironic situation. A single vortex would move in a circle. Quantum mechanically there will be a nonzero Berry's phase on the wave function when the vortex moves around a closed loop, and the argument of the phase would be proportional to the total magnetic flux encircled by the loop. Let us now consider the system of two vortices moving with positions q_1 = q/2 and q_2 = −q/2 as before. The total angular momentum is now given in Eq. (4.20). The first order part of the effective action follows from Eqs. (5.11) and (5.12), and so does the magnetic field felt by the reduced system. Since qB = 1 at q = 0 and B goes to zero exponentially at spatial infinity, one can deduce from Eqs. (4.20) and (5.22) that the total magnetic flux felt by the reduced particle, when it goes around a circle of large q, would be just the total area times the flux ρ_e/e. This implies that there is no nontrivial statistical phase between two vortices at large separation, proving the spin-statistics theorem. At finite distances, the matter is more complicated. Obviously, B(q) is a complicated function of q and would lead to interesting dynamics of the two vortices.
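For orientation, the single-vortex problem is the familiar one of a planar particle in a uniform magnetic field. In a schematic normalization (the sign conventions and the precise identification of the effective field B_eff with the background charge density are left open here),
\[
L = \tfrac{1}{2}\, m\, \dot{q}^{\,2} + \tfrac{1}{2}\, B_{\rm eff}\, \epsilon_{ij}\, \dot q^i q^j ,
\]
which gives circular orbits with cyclotron frequency |ω| = B_eff/m and, for a closed loop enclosing an area A, a quantum mechanical phase whose magnitude is the enclosed effective flux, |γ| = B_eff A, i.e. the Berry's phase referred to above.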
By using the spin-statistics theorem, we can argue that our vortices should not carry any spin. The spin-statistics theorem in three dimensions implies that particles and antiparticles carry spin s of the same sign. When there is no background magnetic field, the statistics works out because the theorem implies that the orbital angular momentum between two particles is 2l + 2s and that between a particle and an antiparticle is 2l − 2s. In our case vortices and antivortices have the same charge profile and the opposite electric current. Thus, if they carried any spin, the spin of vortices would have the opposite sign to that of antivortices. Since CTP is a good symmetry of the theory, the spin-statistics theorem should nevertheless be correct, implying the same sign. Hence there is no conflict if vortices do not carry any spin, which is what we have shown.
Conclusion
We have studied the vortex dynamics in selfdual Maxwell-Higgs systems with uniform background electric charge density. We have found a well-defined modification of the Noether angular momentum. Our vortices are shown to carry no spin but to feel the Magnus force due to the shielding charge carried by the Higgs field. We have studied the dynamics of slowly moving vortices, proving the spin-statistics theorem for vortices. There are many directions we can take from here. Further investigation of the slowly moving vortices, especially of the quadratic terms, would be interesting.
Acknowledgement
The author thanks Choonkyu Lee, Mel Ruderman, and Erick Weinberg for useful discussions. He also thanks the organizers of the Small Scale Structure of Spacetime workshop at the ITP, where part of this work was done.
APPENDIX
Here we calculate the contribution of the symmetric energy momentum tensor to the angular momentum of the selfdual vortex configurations. The corresponding angular momentum J^S is defined with the symmetric energy momentum tensor given earlier. It is not conserved but is well defined, without any divergent contribution from the spatial infinity or the vortex positions. Antivortices would have the same charge profile but the opposite current. Thus J^S for antivortices would have the opposite sign to that for vortices.
We can express J^S more explicitly for the selfdual configurations. To achieve this goal, we first express the angular momentum in terms of Ā_i ≡ ∂_iθ + eA_i. Note that J^S(q) is a continuous function of the vortex positions.
Consider two vortices at rest with positions q_1 = q/2 and q_2 = −q/2. Following the argument used before for the two-vortex configuration, one can evaluate J^S for this case.
FIGURE CAPTIONS
1) The vortex configuration of unit vorticity 2) The vortex configuration of n = 2 | 2014-10-01T00:00:00.000Z | 1993-11-17T00:00:00.000 | {
"year": 1994,
"sha1": "889225e6348adcf94e78f6604cdd6442e545cfa1",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/9311100",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5d68a74cf9a5f42eb9a9c530a0d173962abc5fee",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
3630469 | pes2o/s2orc | v3-fos-license | Efficacy of cardiac shock wave therapy in patients with stable angina: The design of randomized, triple blind, sham-procedure controlled study
Objective: Despite revascularization and optimal medical treatment (OMT), patients with angina often have a reduced quality of life due to inadequate relief from symptoms. Recent studies have shown that the application of shock waves may reduce angina symptoms and improve quality of life, exercise capacity, and myocardial perfusion due to the stimulation of angiogenesis. However, there is limited evidence due to small, single-arm, single-center studies of low to moderate quality. The purpose of this study is to evaluate the impact of cardiac shock wave therapy (CSWT) on exercise tolerance and angina symptoms in patients with coronary artery disease and objective evidence of myocardial ischemia who cannot undergo traditional revascularization and experience angina despite OMT in comparison to sham procedure. Methods: We designed a randomized, triple-blind, placebo-controlled, multicentre trial (NCT02339454) to assess the efficacy of CSWT in addition to OMT in patients with stable angina and myocardial ischemia documented by exercise treadmill test (ETT). All patients were treated with stable doses of standard medical treatment 4 weeks before screening. An increase in the total exercise duration on ETT by ≥90 s from the baseline at the end of the study was set as the primary endpoint. Secondary endpoints included angina class, Seattle angina questionnaire scores, symptoms, and ECG changes during stress test. Patients underwent nine sessions of CSWT or corresponding sham procedure applied to all segments of the left ventricle, within 9 weeks. Endpoint assessments were performed at 6-month follow-up. The imaging substudies assessed the potential of CSWT to reduce stress-induced myocardial ischemia detected by dobutamine stress echocardiography, cardiac single-photon emission computed tomography, and cardiac magnetic resonance imaging. Results: At two centers, 72 of the 323 screened patients were randomized in two groups (ratio 1:1): active treatment and placebo control. Study patients were predominantly males (70.8%); the mean age of the patients was 68.4±8.3 years. Of these, 44 patients had angina Canadian Cardiovascular Society class III, and 66.7% of the patients had a history of myocardial infarction. Conclusion: Using sham applicators, blinding study participants, investigators, and endpoints assessors to the study data as well as centralized randomization ensures rigorous methodology and low risk of bias in this large randomized controlled CSWT study.
Introduction
Coronary artery disease (CAD) is recognized as a leading cause of adult mortality worldwide. According to the ESC guidelines on the management of stable CAD (1), medical treatment, percutaneous coronary intervention (PCI), and coronary artery bypass grafting (CABG) are the main therapeutic options. New pharmacological agents such as ranolazine (2) and ivabradine (3) have been suggested for patients with refractory angina. However, surveys show that despite recommended care, up to 14% of patients may remain limited by angina, which can markedly affect their quality of life (4-6).
Alternative techniques to enhance myocardial perfusion and reduce symptoms in patients with refractory angina include enhanced external counterpulsation (EECP) (7) and spinal cord stimulation (SCS) (8), as well as sophisticated modalities such as transmyocardial laser revascularization (9), myocardial or intracoronary application of proteins (10) or genetic vectors
Cardiac shock wave therapy (CSWT) has been newly developed based on the lithotripsy method; it uses noninvasive application of low-intensity shock waves to stimulate angiogenesis (14). Several experimental studies have demonstrated that the application of low-intensity shock waves (SW) might induce the release of angiogenic factors such as endothelial nitric oxide synthase, vascular endothelial growth factor, and proliferating cell antinuclear antigen (14)(15)(16)(17). Furthermore, many published clinical studies have demonstrated the efficacy and safety of CSWT in patients with refractory angina (18)(19)(20)(21)(22)(23)(24)(25)(26). A recently published meta-analysis of CSWT studies showed significantly decreased nitroglycerine consumption and angina frequency; improved CCS angina class, Seattle angina questionnaire scores, and NYHA class; and increased myocardial perfusion and exercise capacity (27). However, the analysis of methodological quality of the majority of available randomized controlled trials (RCTs) on CSWT demonstrated only low to moderate scientific quality because of the high risk of bias in terms of attribution, sample size calculation, blinding of participants, and outcome assessment (27). These findings indicate the need for an adequately powered study that will eliminate the methodological weaknesses of previous research. Thus, we designed a new RCT with sham applicators aiming to ensure proper procedures of blinding, outcome assessments, data reporting, and interpretation.
Study design
This study was a randomized, triple-blind, sham procedure-controlled, multicentre trial designed to assess the antianginal efficacy of CSWT in addition to standard optimal medical treatment (OMT) in patients with stable angina. The study protocol was created according to the CONSORT statement recommendations for parallel-group randomized trials (28). The study was conducted in Lithuania and Russia in accordance with Good Clinical Practice and the Declaration of Helsinki (2013). This study was approved by two ethics committees (Vilnius Regional Ethics Committee, Approval No. 158200-13-616-187, and the Moscow State University of Medicine and Dentistry Local Ethics Committee, Approval No. [10][11][12]); it is registered at ClinicalTrials.gov (NCT02339454).
We hypothesized that CSWT reduced angina symptoms and improved exercise tolerance in patients with stable angina more effectively than OMT alone.
The primary endpoint was the total exercise duration in the modified Bruce treadmill test at 6-month follow-up. The secondary endpoints were changes in the following: 1) ST-segment depression during the treadmill test, 2) angina symptoms during the treadmill test, 3) number of angina attacks per week, 4) number of sublingual nitroglycerine doses consumed per week, 5) CCS angina functional class, and 6) Seattle angina questionnaire (SAQ) score at 6-month follow-up.
Study population
The study cohort included patients with CAD and exercise-induced angina not controlled by standard OMT, who fulfilled the inclusion/exclusion criteria and had provided informed consent for participation in the study. Patients were found eligible if there was no technical possibility for further revascularization procedures. Table 1 presents the inclusion and exclusion criteria. This study consists of four phases: screening for eligibility criteria, randomization, treatment, and follow-up.
Screening
The screening phase included evaluation of symptoms, medical history, physical examination, and vital signs, as well as review and adaptation of medical treatment according to the guidelines. A four-week period was kept to ensure clinical stability and stable doses of medication. During the second part of the screening, patients underwent an exercise treadmill test (ETT).
Randomization
After the baseline evaluation, consecutive subjects who met the inclusion criteria were assigned to study group A or B in a 1:1 ratio using a random allocation sequence table. For this trial, a professional statistician generated random allocation sequences for the two centers. Using centralized randomization, password-protected access to the random allocation lists was granted only to one principal investigator (JC) for both centers. The study investigators who performed patients' screening were blind to the allocation sequence.
Like the patients, the investigators (clinicians and data assessors) and the statistician were blinded to treatment allocation; therefore, the design qualified as triple-blind. The randomization code was disclosed after the last visit of the last patient, during the primary statistical analysis.
Treatment
All patients were maintained on stable doses of optimal medical therapy (1) for 4 weeks before treatment and during the study period. All patients received antiplatelet therapy with aspirin 75-150 mg per day or clopidogrel 75 mg per day, if aspirin intolerance was present. Few patients received dual antiaggregant therapy. All patients received cholesterol-lowering therapy (atorvastatin in most cases) with a target level of low-density lipoprotein (LDL)<1.8 mmol/L. Anti-ischemic therapy included long-acting beta-blockers, calcium channel blockers, and prolonged nitrates as first-line treatment and trimetazidine, ivabradine, or ranolazine as second-line treatment, along with angiotensin-converting enzyme inhibitors as standard secondary prevention.
CSWT consisted of nine sessions, with three sessions per week, performed during the 1st, 5th, and 9th study weeks. The treatment intensity was 100 impulses applied to one spot, with up to 1200 impulses per patient per session, or a corresponding duration of placebo application (Fig. 1). A specific sham applicator whose external appearance and behavior were similar to those of an active applicator was used.
During the 1st, 5th, and 9th study weeks, SWs (up to 10800 impulses per patient over the whole course) were delivered to the basal, middle, and apical segments of the left ventricle, respectively. A 3-week treatment-free interval was kept after the 1st and 5th treatment weeks (Fig. 1).
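As a quick consistency check of the dosing figures quoted above (the 12 spots per session is an inferred number, following from 1200 impulses per session at 100 impulses per spot), a small illustrative calculation in Python:

impulses_per_spot = 100
max_impulses_per_session = 1200
sessions_per_week = 3
treatment_weeks = 3  # treatment delivered during study weeks 1, 5, and 9

spots_per_session = max_impulses_per_session // impulses_per_spot           # 12
sessions_total = sessions_per_week * treatment_weeks                        # 9
impulses_per_treatment_week = sessions_per_week * max_impulses_per_session  # 3600
impulses_whole_course = sessions_total * max_impulses_per_session           # 10800
print(spots_per_session, sessions_total, impulses_per_treatment_week, impulses_whole_course)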
SWs were generated by discharging a high-voltage spark under water or by an electromagnetic impulse. CSWT was performed using a Cardiospec TM device (Medispec Ltd, Germantown, USA) coupled with a cardiac ultrasound imaging system (Vivid i, GE Healthcare, Horten, Norway) to target the treatment area. Low-intensity SWs (100 impulses/spot; energy flux, 0.09 mJ/mm 2) were delivered using a special applicator through the anatomical acoustic window to the treatment area under electrocardiographic R-wave gating. The patient was positioned on a device table and connected to the ECG monitor. An ultrasound probe was used to identify the target area. The shock wave applicator was connected with the ultrasound transducer and placed through a membrane in contact with the skin at the target treatment zone, which was visualized on the ultrasound screen (Fig. 2a). For optimal therapy, the treatment area was divided into target zones corresponding to the size of the focal zone of the SW applicator (1-cm diameter circle) (Figure 2b). The distance to these target zones was measured and marked on the ultrasound screen, enabling the operator to see the treated zone in real time. The SW applicator was fixed at the measured distance. An inflatable silicon cushion was filled, and ultrasound gel was used for optimal delivery of shockwaves into the body. Follow-up visits were performed and outcome measures were assessed at 3 and 6 months after randomization. Table 2 summarizes the follow-up scheme.
Table 1. Inclusion and exclusion criteria
Inclusion criteria
• Male and female patients aged ≥18 years with obstructive coronary artery disease confirmed by angiography, prior MI, prior revascularization (PCI, CABG), and with exercise angina not controlled by the optimal medical therapy;
• ST-segment depression ≥1 mm during exercise ECG.
Exclusion criteria
• Angina at rest;
• ECG abnormalities at rest: left bundle-branch block, ST-segment depression ≥1 mm at rest, WPW syndrome;
• Planned coronary revascularization procedure (PCI or CABG) within 6 months;
• Heart failure (NYHA class III or IV);
• Thrombus in the LV;
• Moderate to severe uncontrolled hypertension (systolic BP >160 mm Hg and/or diastolic BP >100 mm Hg);
• Hypotension (systolic BP <100 mm Hg);
• Acute coronary syndrome or coronary revascularization procedure within the 3 months before enrolment;
• Severe concurrent pathology, including terminal illness (cancer);
• Contraindications for exercise testing (e.g., acute myocarditis, pericarditis, deep venous thrombosis, severe aortic stenosis);
• Conditions which, in the investigator's opinion, may interfere with the study's execution or due to which the patient should not participate for safety reasons;
• Risk of low patient cooperation;
• Patient is simultaneously participating in another device or drug study, or has participated in any clinical trial involving an experimental device or drug, including other drugs or devices enhancing cardiac neovascularization, or any cardiac shock wave therapy machine of a competitor company, within 3 months of entry into the study.
BP - blood pressure, CABG - coronary artery bypass grafting, ECG - electrocardiogram, LV - left ventricular, MI - myocardial infarction, NYHA - New York Heart Association, PCI - percutaneous coronary intervention, WPW - Wolff-Parkinson-White syndrome
Efficacy assessment
All study patients underwent ETT using the modified Bruce protocol at baseline and at 3 and 6 months after treatment. Beta-blockers were stopped for 48 h, and nitrates and other antianginal medicines for 24 h, before the ECG stress test in all patients. During the test, 12 ECG leads were continuously monitored, and the blood pressure (BP) was measured at 2-min intervals. The exercise duration in seconds, maximum cardiac workload [expressed in metabolic equivalents (MET)], peak heart rate, and ST-segment depression were recorded. The criteria to stop the ECG stress test included ECG changes (≥2 mm ST-segment depression and complex or sustained arrhythmias), severe angina, fatigue, and abnormal BP responses. The ST-segment deviation was measured at 60 ms after the J point, compared with the resting values, during peak exercise and was considered significant if there was ≥1 mm horizontal or down-sloping ST-segment depression in computer-averaged complexes.
The level of angina was classified according to the CCS angina classification. During follow-up visits, patients were asked about the number of angina episodes and sublingual nitroglycerine doses taken in the past week. Quality of life was assessed using SAQ, which is a 19-item self-administered questionnaire that measures five clinically important dimensions of health affected by angina in patients with CAD: physical limitation, angina stability, angina frequency, treatment satisfaction, and quality of life.
Substudy
The substudy, performed only at the Vilnius site, assessed the potential of CSWT to reduce myocardial ischemia determined by dobutamine stress echocardiography (DSE), cardiac single-photon emission computed tomography (SPECT), and cardiac magnetic resonance imaging (CMRI).
For the substudy, patients underwent DSE, SPECT, and cardiac MRI before CSWT and at 6-month follow-up, with DSE performed additionally at 3 months. The analysis of the DSE, SPECT, and CMRI images of each study was performed by two independent observers who were blinded to the study data. Discordant assessments were jointly reviewed. Myocardial perfusion, regional wall motion, and early and late contrast-enhanced images were assessed using the LV 17-segment model (29,30).
During SPECT and CMRI tests, pharmacologic stress was induced by infusing adenosine at a standard rate of 140 μg/kg/min (maximal total infusion duration of 6 min) (29). All stress tests were performed under continuous monitoring of heart rate and BP.
The segmental wall motion was semi-quantitatively graded as follows: normal; hypokinetic, with marked reduction of endocardial motion and thickening; akinetic, defined as the virtual absence of inward motion and thickening; and dyskinetic, corresponding to paradoxic wall motion away from the center of the left ventricle in systole. The sum of all segment scores made up the wall motion score (WMS); the WMS divided by the number of interpretable segments gave the wall motion score index (WMSI).
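A minimal sketch of the WMS/WMSI bookkeeping described above. The numeric grades 1-4 for normal, hypokinetic, akinetic, and dyskinetic segments follow the conventional echocardiographic scale and are an assumption here, since the text does not state them explicitly:

# Hypothetical per-segment grades; None marks a non-interpretable segment.
SCORE = {"normal": 1, "hypokinetic": 2, "akinetic": 3, "dyskinetic": 4}

def wall_motion_index(segment_grades):
    """Return (WMS, WMSI): sum of scores and sum divided by interpretable segments."""
    scores = [SCORE[g] for g in segment_grades if g is not None]
    wms = sum(scores)
    return wms, wms / len(scores)

# Example: a 17-segment read-out with one non-interpretable segment.
grades = ["normal"] * 12 + ["hypokinetic"] * 3 + ["akinetic"] + [None]
print(wall_motion_index(grades))  # (21, 1.3125)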
Myocardial perfusion imaging SPECT
A 1-day ECG-gated stress and rest SPECT protocol was used. After 3 min of adenosine infusion, patients were intravenously injected with a body mass index-adjusted dose (250-350 MBq) of technetium-99m (99mTc)-sestamibi (MIBI). Rest myocardial perfusion imaging (MPI) was performed on the same day, 4 h after the stress MPI, with an identical acquisition protocol. Gated SPECT studies were performed 60 min after 99mTc-MIBI injection using a dual-head INFINIA GP3 (GE Medical Systems, Waukesha, WI, USA) gamma camera.
Figure 2. The methodology of shock wave therapy. (a) Shock wave generator system and cardiac imaging system. (b) Shock wave focal zone alignment: position of the subsegment on the two-dimensional image determined by X and Y coordinates (1); the shockwave applicator position is identically adjusted along the X- and Y-axes corresponding to the X and Y coordinates of the ultrasound image (2).
SPECT stress tests were performed and digitally recorded at baseline and at 6-month follow-up. Gated and nongated SPECT MPI image sets were reconstructed using OSEM iterative reconstruction on the dedicated Xeleris 2.1 workstation with the Cedars-Sinai QGS/QPS software package (Cedars-Sinai, Los Angeles, CA, USA). Perfusion defects were scored using a 5-point scoring system. SPECT analysis included summed rest, stress, and difference scores. A summed difference score of 0 was considered normal, 1-4 mild ischemia, 4-7 moderate ischemia, and >7 severe ischemia of the myocardium (31).
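A small helper illustrating the summed difference score (SDS) categories quoted above; note that the source thresholds overlap at 4, so assigning SDS = 4 to the moderate band below is an assumption:

def ischemia_category(sds):
    """Classify a summed difference score using the thresholds quoted in the text."""
    if sds == 0:
        return "normal"
    if sds < 4:
        return "mild ischemia"      # SDS 1-3 (text: 1-4)
    if sds <= 7:
        return "moderate ischemia"  # SDS 4-7
    return "severe ischemia"        # SDS > 7

print([ischemia_category(s) for s in (0, 2, 5, 9)])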
Dobutamine stress echocardiography
Electrocardiogram and echocardiogram were performed at rest, and intravenous access was secured. Dobutamine was infused at 5, 10, 20, 30, and 40 μg/kg/min for 3 min at each stage. If no end point was reached, atropine (in four divided doses of 0.25 mg, up to a maximum of 1 mg) was added to the continuing 40 μg/kg/min dobutamine infusion.
Transthoracic stress echocardiographic studies were performed using a commercially available ultrasound machine (System Vivid 7 and 9, GE Healthcare, Horten, Norway) with a 1.5-4.6 MHz transducer. From the parasternal window, the long and short axis of LV, and from the apical window, the 4-, 3-, and 2-chamber views were acquired for comparing the four stages of the stress test. The images were stored digitally and analyzed offline using customized software (Echopac PCBT08, GE Healthcare). For DSE evaluation, moderate ischemia was defined as ≥3 segments with stress-induced hypokinesis or akinesis (32).
Speckle tracking images were recorded at baseline and at peak dobutamine levels with breath holding. The frame rate of stored apical 2-, 3-, and 4-chamber cine-loops for speckle tracking analysis was in the range of 70-90 frames/second. Graphical displays of deformation parameters (reflecting the average value of displacement markers in each segment) were then automatically generated for six segments in each view. The peak longitudinal systolic strain at rest and during stress was measured using automated vendor-suggested software. Baseline and 3- and 6-month follow-up studies were digitally recorded. Two experienced independent observers blinded to the study data performed the analysis. DSE analysis included WMS, global myocardial strain analysis, and LV ejection fraction with Simpson's biplane method.
Cardiac magnetic resonance imaging
CMRI was performed using a 1.5 T MR scanner (Avanto, Siemens Medical Solutions, Erlangen, Germany). After acquiring standard cine scans for the assessment of left ventricular function, a turbo gradient echo pulse sequence (Turbo FLASH) was acquired for perfusion imaging. After at least 3 min of adenosine infusion, the Turbo FLASH sequence was repeated for stress first-pass perfusion imaging (intravenous bolus application of 0.15 mmol/kg of a gadolinium-based contrast agent, Magnevist). After a 10-min waiting period, late gadolinium enhancement (LGE) imaging was done in the identical short-axis geometry with full coverage of the left ventricle.
CMRI examinations were analyzed with Argus software (Siemens Healthcare GmbH, Erlangen, Germany) by two experienced observers. Short-axis endocardial contours were manually traced in end-diastole (start of the R-wave) and end-systole (smallest cavity area). Papillary muscles and trabeculations were included in the LV cavity (according to the ASE criteria). The LV volumes were automatically computed in milliliters using the modified Simpson's rule by summing the cross-sectional areas contained by the endocardial borders of all short-axis slices included in the analysis. Segmental perfusion was interpreted as normal or abnormal. LGE was assessed on a 5-grade scale as follows: 0=no hyperenhancement, 1=hyperenhancement of 1%-25% of the tissue in each segment, 2=hyperenhancement of 26%-50% of the tissue, 3=hyperenhancement of 51%-75% of the tissue, and 4=hyperenhancement of 76%-100% of the tissue. The LGE score was obtained by summing the scores of the 17 segments of the LGE images (29,32).
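A brief sketch of the LGE scoring described above follows; the per-segment grading thresholds are those given in the text, while the example percentages are invented for illustration.

```python
# Minimal sketch of the LGE score: each of the 17 segments is graded 0-4
# according to the percentage of hyperenhanced tissue, and the grades are summed.

def lge_grade(percent_hyperenhanced: float) -> int:
    """Map % hyperenhancement in one segment to a 0-4 grade."""
    if percent_hyperenhanced <= 0:
        return 0
    if percent_hyperenhanced <= 25:
        return 1
    if percent_hyperenhanced <= 50:
        return 2
    if percent_hyperenhanced <= 75:
        return 3
    return 4

def lge_score(segment_percentages):
    assert len(segment_percentages) == 17, "LV 17-segment model expected"
    return sum(lge_grade(p) for p in segment_percentages)

# Hypothetical study: three segments with 30%, 60% and 90% hyperenhancement.
print(lge_score([0] * 14 + [30, 60, 90]))  # 2 + 3 + 4 = 9
```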
Statistical analysis
Baseline patient characteristics were descriptively summarized: continuous variables were expressed as mean value±standard deviation (SD), whereas categorical variables were expressed as absolute numbers (percentages). In the first step, the paired parameters were tested for normal distribution with the Shapiro-Wilk test. The chi-square test or Fisher's exact test was used to compare categorical variables. Normally distributed variables were analyzed using a parametric test (t-test); not normally distributed variables were analyzed using a nonparametric test (Mann-Whitney U test).
P<0.05 (two-sided) was considered significant. The overall effect of CSWT was evaluated by comparing the average change of each variable in the treatment group with the average change of that variable in the placebo group. Statistical analyses were performed with SPSS 20.0 (SPSS, Chicago, IL, USA).
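The test-selection logic described above (normality check, then a parametric or nonparametric comparison) can be illustrated with the following sketch; this is not the authors' SPSS workflow, and the example data are invented.

```python
# Illustrative sketch of the test-selection logic: Shapiro-Wilk for normality,
# then an independent-samples t-test for normally distributed variables or a
# two-sided Mann-Whitney U test otherwise.
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if normal:
        test, result = "t-test", stats.ttest_ind(a, b)
    else:
        test, result = "Mann-Whitney U", stats.mannwhitneyu(a, b, alternative="two-sided")
    return test, result.pvalue

# Example with made-up changes in exercise duration (seconds) in two groups.
treatment = [95, 120, 60, 150, 80, 110]
placebo = [20, 35, -10, 50, 15, 25]
print(compare_groups(treatment, placebo))
```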
Calculation of sample size
For the sample size estimation, a power of 90% and a two-sided type I error of 5% were chosen. On the basis of published data (24), we assumed an SD of 110 s for total exercise duration; this yielded 33 patients per group as necessary to detect a difference of ≥90 s. Estimating a withdrawal of 10% of patients after randomization, approximately 73 patients would have to be included in the study.
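The stated estimate can be approximately reproduced with a standard two-sample power calculation, as sketched below under the assumptions given in the text (SD 110 s, detectable difference ≥90 s, two-sided α = 0.05, power 90%).

```python
# Reproducing the stated sample-size estimate under the assumptions given in
# the text: SD = 110 s, detectable difference >= 90 s, two-sided alpha = 0.05,
# power = 90%, two independent groups of equal size.
from statsmodels.stats.power import TTestIndPower

effect_size = 90 / 110  # Cohen's d implied by the assumed SD
n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.90, alternative="two-sided"
)
total_with_withdrawal = 2 * n_per_group / 0.90  # allow for ~10% withdrawal
# Rounds to roughly 32-33 patients per group and ~72-73 patients overall,
# in line with the figures quoted above.
print(round(n_per_group), round(total_with_withdrawal))
```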
Results
Baseline patient characteristics
A total of 72 patients who met the inclusion criteria were randomized (Fig. 3). The mean age was 68.4±8.3 years; 61.1% of patients were in CCS class III, 33.3% were in CCS class II, and the others were in CCS class I. The cardiac risk factor profile was high, as each patient had at least two risk factors for cardiovascular disease. History of myocardial infarction was present in 80% of patients in group A and 51.4% of patients in group B (p=0.011). A majority of patients (78%) had multivessel disease, and 96% were not candidates for further revascularization due to the extent and severity of disease, previous interventions, or risk/benefit ratio. There were 58 patients who had previously undergone revascularization. Other baseline characteristics were similar between the two groups (Table 3), except for a more frequent positive family history of CAD in group A (p=0.020). The exercise capacity was moderately reduced in both study groups (total exercise duration in minutes was 6.5±2.7 in group A and 6.1±2.3 in group B, p=0.479). There were no significant differences between countries in any baseline characteristics (age, cardiovascular risk factors, medical history, and clinical parameters).
Discussion
Despite major advances in the management of CAD, this condition remains a leading cause of adult mortality worldwide and is responsible for 20% of deaths each year in Europe (33), with stable angina being the most frequent clinical presentation.
Many patients experience persistent symptoms despite revascularization procedures and modern medical treatment. Thus, there is a crucial need for the development and investigation of novel pharmacological, invasive or noninvasive treatment modalities, aimed at improving care and quality of life for this challenging patient population.
CSWT is a novel, potentially effective approach for the treatment of patients with refractory angina that may reduce symptoms and improve quality of life. SWs are acoustic waves that can be transmitted through a liquid medium and focused with a precision of several millimeters on any intended treatment area inside the body. SWs are delivered to the targeted area to potentially induce neovascularization from the healthy area to the ischemic zone through shear stress. The noninvasive nature and lack of significant adverse events make it an attractive option for patients suffering from refractory angina. However, limited information is currently available on the actual efficacy of this new modality.
A recently published systematic review of CSWT studies in stable CAD demonstrated a significant improvement of clinical variables including angina class and quality of life, as well as positive changes in LV function and perfusion. A meta-analysis showed moderate improvement in exercise capacity. Overall, CSWT seems to be a potentially effective, new, and noninvasive option for patients with CAD; however, the evidence is limited to small, single-center studies with a high risk of bias due to the absence of credible control and allocation procedures (27). Thus, more data derived from randomized and placebo-controlled trials are required before its widespread use.
The measurement of exercise tolerance by time to ischemic ECG changes or development of symptoms during ETT is a widely used outcome in CAD studies. The subjective physical and emotional impact of angina pectoris is assessed using the SAQ (34). The SAQ is commonly used for measuring health status in coronary patients and has been confirmed as a valid, reproducible, and sensitive performance measure for assessing the quality of CAD care (35). Therefore, ETT, CCS score, and SAQ were chosen as efficacy parameters in our study. The advantages of these tests are their simplicity, safety, negligible cost, and wide accessibility.
The novelty and improved quality of evidence of this study derive from several aspects. Patients were enrolled in a multicenter, randomized, placebo-controlled trial on the basis of myocardial ischemia proven by several stress tests. A new treatment protocol was developed to facilitate the application of SW to all segments of the LV. In previously published studies, SWs were applied only to the ischemic segments of the LV. The new protocol aims to extend the indications for widespread use of CSWT so that it is not based on the results of imaging tests or coronary angiography, which are sometimes unavailable. The application of SW to all segments of the LV may provide beneficial therapeutic effects by not only reducing ischemia but also attenuating inflammation and suppressing oxidative stress and fibrosis in nonischemic segments, potentially preventing LV remodeling. Therefore, compared with previous randomized CSWT trials, we consider our study to be at a low risk of bias in terms of methodology.
Study limitations
As the treatment area needs to be localized, patients without an adequate echocardiographic window (e.g., overweight, pulmonary disease) cannot receive CSWT. The safety of CSWT in patients with pacemakers or implantable defibrillators has not yet been defined.
Conclusion
The use of sham applicators, the blinding of study participants, investigators, and endpoint assessors to the study data, as well as centralized randomization, ensure rigorous methodology and low bias in this large, randomized, controlled CSWT study. | 2018-04-03T03:03:26.841Z | 2018-02-01T00:00:00.000 | {
"year": 2018,
"sha1": "796c50edaae96c23002f149966956c7aaf6bb863",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.14744/anatoljcardiol.2017.8023",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "796c50edaae96c23002f149966956c7aaf6bb863",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236916129 | pes2o/s2orc | v3-fos-license | Rationalising oxygen usage in a level II special newborn care unit in Madhya Pradesh, India
Non-judicious oxygen use in preterm infants is associated with increased risk of retinopathy of prematurity, bronchopulmonary dysplasia and longer hospital stay. Despite established guidelines on oxygen therapy, compliance with the best oxygen practices remains suboptimal. Excessive use of oxygen also consumes a large proportion of the annual maintenance budget of special newborn care units (SNCUs) in the districts. In this project, we aimed to reduce the oxygen consumption in the SNCU at Sehore, Madhya Pradesh, India from eight to four cylinders per day, by rationalising the indications, monitoring and method of oxygen delivery. We tested two sets of interventions using the Plan–Do–Study–Act (PDSA) approach. The first intervention was the introduction of a written ‘oxygen policy’ regarding indications of starting/stopping oxygen and the use of saturation targets. The second was using short binasal infant prongs (at 0.5–1 L/min), instead of oxygen hoods as the primary method of oxygen delivery in spontaneously breathing neonates requiring oxygen. In the first PDSA cycle, we assessed the feasibility of the intervention in a small set (n=30) of neonates and later scaled up to all eligible neonates in the second phase. We observed a significant reduction in oxygen consumption (from median (IQR) 8 (7–8) to 3 (3–4) cylinders per day) that can lead to a direct saving of 590 000 Indian rupees (US$9000) per year. There was a significant reduction in the number of neonates on oxygen support on a given day. We did not observe any increase in mortality or nasal injury. The change was sustained for the next 8 months. We conclude that by having a contextual oxygen policy and using nasal prongs instead of oxygen hoods as the preferred delivery method, we can achieve a sustainable reduction in oxygen consumption.
BACKGROUND
Oxygen is the most used drug in neonatal intensive care units. Though judicious use of oxygen is lifesaving, excessive exposure of preterm neonates to oxygen increases the risk of retinopathy of prematurity (ROP) and bronchopulmonary dysplasia, and prolongs the hospital stay. 1 Oxygen usage consumes a large proportion of the annual maintenance budget of the special newborn care units (SNCUs) in India. Previous studies have highlighted multiple barriers to ensuring optimal oxygen therapy in children, including limited or no access to pulse oximetry, inadequate staffing, and lack of knowledge and training among the healthcare staff. 2 3 Some of these bottlenecks are system driven and require additional resources, but many of them can be managed using simple interventions. 3 4
Problem
At Sehore SNCU in Madhya Pradesh, India, oxygen consumption was exceedingly high (8-10 jumbo cylinders/day), and a significant portion of the annual maintenance budget was spent on the procurement of oxygen.
Setting
The government of India has established SNCUs in each district to provide level II care to sick and preterm neonates. The SNCU at Sehore, Madhya Pradesh is a typical 20-bedded, government-funded, level II care unit, staffed by 4 doctors (one paediatrician and three medical officers) and 20 nurses. Sharing of a bed by two or more neonates is common. In each shift, a single doctor looks after the neonates as well as emergencies, and attends resuscitation calls. The bedside nurse has a major role and often has to make management decisions, in telephonic consultation with the doctor. The main morbidities among the SNCU admissions are prematurity, low birth weight, respiratory distress, sepsis, birth asphyxia and hyperbilirubinaemia. Nearly 3000 sick neonates are admitted per year, of which 65%-70% receive oxygen. The SNCU has two bubble continuous positive airway pressure (CPAP) machines and used hoods as the primary mode of oxygen delivery. Oxygen is supplied as jumbo cylinders which are connected through a mini-manifold and are refilled as required. The SNCUs follow the Facility Based Newborn Care (FBNC) guidelines, which recommend flow rates of 5-10 L/min while using an oxygen hood. 5
Rationale
Oxygen hood is a high-flow device (4-6 L/min) and leads to a lot of wastage, whereas nasal prongs are low-flow devices requiring a flow rate of 0.5-1 L/min in neonates. We can save 5000-7000 L of oxygen per neonate per day, by switching from hood to nasal prongs. The WHO strongly recommends use of nasal prongs for neonates. 6 If we follow the best practices of oxygen therapy, a significant reduction in consumption can be achieved.
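A back-of-the-envelope check of the quoted saving, assuming the flow rates mentioned above and continuous flow over 24 hours, is sketched below; the figures are illustrative only.

```python
# Rough check (not from the paper's dataset) of the quoted 5000-7000 L/day
# saving per neonate when switching from an oxygen hood to nasal prongs.
MIN_PER_DAY = 60 * 24

def daily_saving(hood_lpm: float, prong_lpm: float) -> float:
    """Litres of oxygen saved per neonate per day for given flow rates."""
    return (hood_lpm - prong_lpm) * MIN_PER_DAY

print(daily_saving(4, 0.5))   # 5040 L/day at the lower end of the quoted flows
print(daily_saving(6, 1.0))   # 7200 L/day at the upper end
```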
Specific aim
We aimed to reduce the oxygen consumption in Sehore SNCU from eight cylinders per day to four cylinders per day (50% reduction) within 8 weeks using a quality improvement (QI) approach.
METHODS
Design
We executed the following steps: (1) formation of a QI team comprising SNCU in-charge paediatrician, one medical officer and two nurses; (2) collection of baseline data comprising daily admissions, number of infants receiving respiratory support (oxygen hood and CPAP) on a given day, duration of oxygen therapy, number of oxygen cylinders consumed per day and mortality for 3 weeks (1-21 December 2017); (3) identification of possible reasons for increased oxygen consumption using fishbone analysis (figure 1); (4) group discussion to elicit change ideas; and (5) testing change ideas using a series of Plan-Do-Study-Act (PDSA) cycles.
Measurements and analysis
Our primary outcome measure was the number of oxygen cylinders consumed per day. As balancing measures, we tracked the number of admissions, the number of neonates receiving oxygen and CPAP, and mortality. The SNCU online database was used to extract some of these parameters, including the duration of hospital stay and the duration of oxygen use for more than 24 hours. There were concerns from the staff that the cost of nasal prongs would add to the budget; therefore, we also did a crude cost analysis. We used descriptive statistics for the baseline variables and the χ² test, one-way analysis of variance or Kruskal-Wallis test, as applicable, for comparison of the parameters across the epochs. We used run charts to display the serial measurements. SPSS V.21 was used for statistical analysis and a p value of less than 0.05 was taken as significant. We used Standards for Quality Improvement Reporting Excellence V.2.0 guidelines for reporting. 7
Patient involvement
Patients were not directly involved in the implementation.
Strategy
We used the PDSA cycle approach for quality improvement. Feasibility was tested in PDSA cycle 1. Various interventions were tested in three subphases during PDSA cycle 2. These were followed by the sustenance phase. The details of various PDSA cycles are described in table 1.
PDSA cycle #1
In the first PDSA cycle (22 December 2017-5 January 2018), we drafted an 'oxygen policy' after having group discussions involving SNCU in-charge paediatrician, medical officers and the staff nurses. The policy was approved by the chief medical officer of the hospital.
The key highlights of the policy were (1) at admission triage, the staff nurse will first check oxygen saturation and assess for respiratory distress, and the decision for starting oxygen will be based on these two parameters. The previous practice was to first start oxygen by the hood in all babies reporting to the triage and then do the assessments. (2) All those who meet the criteria for oxygen supplementation shall be started on oxygen by nasal prongs at 0.5-1 L/min. The contraindications to using nasal prongs will be the presence of a congenital airway anomaly, pre-existing nasal trauma or poor respiratory efforts. (3) Whenever possible, continuous pulse oximetry will be used and the oxygen flow in the nasal prongs titrated based on saturation targets. If the number of pulse oximeters is not sufficient, intermittent readings will be taken every 2 hours. (4) The starting and stoppage of oxygen therapy will be documented along with the indications.
The staff were in agreement on all aspects of the policy, except the universal use of nasal prongs. They were apprehensive as they had not used them earlier. Therefore, our first PDSA cycle aimed at assessing the feasibility and safety of using nasal prongs. We randomly enrolled 30 neonates (two neonates per day) needing oxygen as per the policy and applied nasal prongs in them. To assess the safety and feasibility of using nasal prongs, we recorded details of nasal trauma, tolerability of the prongs, any difficulty in using the prongs, whether the prongs fit well or not, and desaturations if any. In the case of desaturations, the nurses first ensured that the prongs were within the nostrils and properly fixed, and then increased the flow rate to 1 L/min. If the desaturations persisted, they switched to an oxygen hood or CPAP depending on the severity of respiratory distress. In this PDSA involving 30 neonates, we observed desaturations in two neonates. Both neonates were on continuous pulse oximetry monitoring and the desaturations were mild. During these episodes, the nasal prongs were found dislodged from the nostrils due to improper fixation. Following this, a standard operating procedure was created for the fixation of nasal prongs, and all nurses were trained until they became confident. To assess compliance with the new oxygen policy, we recorded the following parameters on a daily basis: (1) the proportion of neonates reporting to triage in whom the staff nurse first checked oxygen saturation before starting oxygen, (2) the proportion of neonates on oxygen in whom continuous and intermittent pulse oximetry were used, and (3) the proportion of neonates on oxygen in whom the starting and stoppage of oxygen was documented. During this PDSA, the team confirmed that we did not have enough pulse oximeters for continuous monitoring of all neonates on oxygen. However, intermittent monitoring was feasible in all neonates on supplemental oxygen. Therefore, we decided to prioritise continuous saturation monitoring for neonates who were on CPAP support or who were having frequent desaturations or increased work of breathing. For stable neonates on supplemental oxygen, we planned to use continuous pulse oximetry only if a pulse oximeter was spare, else to use 2-hourly intermittent monitoring.
PDSA cycle #2
We conducted the second PDSA from 6 January to 16 February 2018, over 6 weeks, targeting all admitted neonates requiring oxygen therapy. This PDSA cycle had three subphases A, B and C (table 1). We continued the ongoing training comprising education about the revised oxygen policy, saturation targets, the importance of pulse oximetry and early identification of nasal trauma. To maintain enthusiasm, we held weekly meetings to discuss the progress and introduced the practice of recognising the nurse champion of the week who showed the best compliance with the oxygen policy in the past week. After 2 weeks, we noticed that the nurses were confident in deciding when to start oxygen therapy, but less attention was being given to stopping oxygen. Therefore, we introduced a new change: in each shift, the doctor would enquire about the reason for continuing oxygen therapy during the bedside rounds, and the nurse would document the reason in the notes. During this period, the team identified two neonates with moderate respiratory distress in whom the escalation of respiratory support from nasal prongs to CPAP was delayed. The reason was the inability of the physician to visit and examine the infants to take a decision, due to preoccupation in the delivery room. Discussions revealed that this was not an uncommon scenario. In the weekly team meeting, it was decided to train the nurses for better identification of moderate to severe respiratory distress and to empower them to initiate CPAP. With the help of videos and demonstrations, they were trained to categorise the severity of respiratory distress and identify those needing CPAP. They were authorised to initiate CPAP with information to the on-call duty doctor. As the nurses were confident in using nasal prongs, in this phase we limited data recording to the number of babies admitted, the number requiring oxygen, the number of oxygen cylinders consumed, nasal injury and mortality.
Sustenance phase
Following the two PDSA cycles, the use of nasal prongs became a routine practice instead of oxygen hood, and the staff were happy about it. The initially drafted oxygen policy was modified and finalised to accommodate the learning and changes made during its testing. During this phase (21 February 2018-30 September 2018), we continued with monthly audits of compliance to components of oxygen policy, number of admissions, number of babies receiving oxygen, oxygen cylinder consumption and mortality.
RESULTS
In the baseline period, a total of 133 neonates were admitted, of whom 107 (80.4%) received oxygen. Among them, 68 (64%) received oxygen for more than 24 hours. All were given oxygen through hood and on average, eight jumbo oxygen cylinders were consumed per day.
In the first PDSA (feasibility phase), in 2 of the 30 neonates, there were recurrent but mild desaturations due to improper fixation, and both neonates were switched to headbox. In the remaining 28 of 30 neonates, we did not face any difficulty in using nasal prongs and they tolerated it very well. None of the neonates had nasal trauma. In all 30 neonates, nurses first checked oxygen saturation at triage before deciding to start oxygen. Continuous pulse oximetry was used in 12 (40%) neonates only, however, we were able to record and document intermittent oxygen saturation at 2-hourly intervals, in all eligible neonates. The nurses documented the time and reason for starting and stopping oxygen therapy in all 30 neonates.
During the second PDSA cycle (intervention phase), a total of 296 neonates were admitted, of whom 230 (77.8%) received oxygen therapy at admission. As compared with baseline, there were no differences in the demographic parameters or proportion of neonates requiring oxygen at admission (table 2). We observed a significant reduction in the consumption of oxygen cylinders from median (IQR) 8 (7-8) per day to 3 (3-5) per day (figure 2). Moreover, this change was sustained over the next 8 monthsmedian (IQR) 3 (3-4) ( figure 3).
With the introduction of the 'oxygen policy' and the changes made to it during the second PDSA cycle, we were able to discontinue oxygen within 24 hours of admission in a significantly higher proportion of neonates who were started on oxygen at admission, as compared with the baseline situation (64.7% vs 36.4%, p<0.001) (table 2). This led to a 45% reduction in the number of neonates on oxygen on a given day, from a median of 11 per day to 6 per day (figure 4A).
As balancing measures, we also tracked the mortality and the number of neonates requiring higher respiratory support (CPAP). There was no significant difference in mortality across the baseline, feasibility and intervention phases. However, we observed a significant increase in the daily number of neonates on CPAP support (figure 4B). This most likely reflects the effect of education and empowerment of nurses in the identification of moderate to severe respiratory distress and timely escalation of respiratory support. In the pre-intervention period, if a neonate had worsening respiratory distress, the bedside nurse used to increase the oxygen flow rate and wait for the doctor to examine the baby and take a decision. As there was a single doctor in a shift attending to the SNCU babies, the outpatient department as well as the delivery room, and doing administrative work, this often led to delay. Therefore, the nurses were trained in the timely identification of moderate to severe respiratory distress and empowered to initiate CPAP. They were also retrained in setting up the CPAP machine and titrating the positive end expiratory pressure and fractional inspired oxygen as per need. We also observed a significant reduction in the daily census in the intervention phase (figure 4C) as well as in the overall duration of hospital stay for each neonate (table 2). As there was no reduction in weekly admissions (rather, there was an 11% increase), this change is likely a result of early weaning from oxygen therapy combined with another QI initiative of early initiation and rapid advancement of feeds.
Qualitative feedback
We received positive feedback from the nurses and mothers. Nurses felt that it was easy to apply and maintain nasal prongs. Also, it was much easier to observe the babies and administer gavage feeds to neonates on nasal prongs. Mothers were also happy that they could even breast feed their babies while they were on nasal prongs, which was not feasible previously with oxygen hoods.
Cost analysis
There was apprehension that the use of a disposable nasal prong for each neonate requiring oxygen would incur additional costs and might further adversely affect the budget. Therefore, we did a crude cost analysis taking the existing procurement rates and conversion units of oxygen cylinders and the unit cost of nasal prongs.
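A hypothetical sketch of such a crude cost comparison is given below; the per-cylinder and per-prong prices are placeholders rather than the rates actually used, and only the reduction of roughly five cylinders per day comes from the reported data.

```python
# Hypothetical sketch of a crude cost comparison. The per-cylinder and
# per-prong prices below are placeholders, NOT the rates used by the unit;
# only the ~5 cylinders/day reduction (8 to 3) comes from the reported results.
CYLINDERS_SAVED_PER_DAY = 8 - 3
DAYS_PER_YEAR = 365
PRICE_PER_CYLINDER_INR = 400   # assumed refill rate (placeholder)
PRONGS_PER_DAY = 6             # assumed daily prong usage (placeholder)
PRICE_PER_PRONG_INR = 60       # assumed unit cost (placeholder)

oxygen_saving = CYLINDERS_SAVED_PER_DAY * DAYS_PER_YEAR * PRICE_PER_CYLINDER_INR
prong_cost = PRONGS_PER_DAY * DAYS_PER_YEAR * PRICE_PER_PRONG_INR
# With these placeholder prices the net figure is of the same order as the
# ~INR 590 000 annual saving reported in the abstract.
print(f"net annual saving = INR {oxygen_saving - prong_cost:,.0f}")
```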
DISCUSSION
We showed a significant and sustained decline in the consumption of oxygen cylinders using the principles of QI, within existing resources and manpower. We could achieve a 63% reduction in the consumption of oxygen cylinders and save more than 40% of the annual maintenance budget. The interventions were simple and can be replicated in any resource-limited setting. The savings from the budget can be used for other essential requirements such as meeting the shortage of pulse oximeters.
In the first cycle of PDSA, everyone agreed that there is a need for an 'oxygen policy' and that we should actively look for opportunities to stop oxygen early. In India, the FBNC guidelines of 2011 are followed in all SNCUs. The FBNC guidelines endorse all the components of the oxygen policy formulated by us except the use of nasal prongs; they give the option of using nasal prongs, an oxygen hood or a nasal catheter. 5 The updated WHO guidelines recommend the use of nasal prongs and were shared and discussed with the team members. 6 However, as with any change in practice, initially there were apprehensions about using nasal prongs. We valued the team's apprehensions and concerns and tested the change at a small scale. The problems faced in the small test were addressed and the confidence of the staff increased. The acceptance by the nursing staff and their active participation are reflected in the sustenance of the change after 8 months. The nursing staff felt that they had become more confident in managing neonates on oxygen and taking decisions on increasing or downgrading the respiratory support. The cost analysis helped in convincing the state policymakers and administrators. They were happy to ensure a regular supply of nasal prongs and extend the practice to other SNCUs of the state.
Oxygen is the most commonly misused drug in the neonatal period. 8 Studies have shown that, irrespective of the diagnosis, more than 90% of spontaneously breathing infants receiving oxygen by headbox are hyperoxemic. 9 Most of the level II SNCUs in India face overcrowding and lack sufficient pulse oximeters for continuous oxygen saturation monitoring, which further increases the risk of adverse effects (like ROP) secondary to hyperoxemia. The WHO updated its oxygen policy in 2016 and recommended using nasal prongs while discouraging the use of oxygen hoods. The forthcoming revision of the national FBNC guidelines will modify the recommendations in favour of using nasal prongs. Active efforts are required to reduce the gaps between evidence and policy in a timely manner. This can be achieved by the formation of scientific expert groups, in collaboration with professional associations, tasked with providing real-time updates to policies. The utilisation of oxygen in the SNCUs should be evaluated regularly and correlated with ROP rates and all-cause mortality.
There is inadequate knowledge and a lack of adherence to saturation targets. 10 Nurses can play a pivotal role in ensuring optimal oxygen therapy and therefore in reducing its undesirable consequences like ROP. 11 Similar context-adapted changes in oxygen delivery practices have helped in achieving better compliance with the best oxygen therapy practices in other settings. 3 4 12 13 With the improving survival of premature neonates in level II SNCUs, the ROP burden is also increasing, and by using packages of simple QI measures, we can contribute to controlling the third epidemic of ROP in developing countries. 14 We could not assess the impact on ROP during the study period because of the transfer-out of the district ophthalmologist.
CONCLUSION
We were able to rationalise and reduce the use of oxygen in the SNCU by introducing a context-specific oxygen policy and adapting it as per recommended best practices. We were able to reduce the amount of oxygen consumed and save a significant amount of annual maintenance budget.
Twitter Anu Thukral @dranu2014 and Praveen Kumar @praveenpginicu data analysis, and drafting and critically reviewing the manuscript. All the authors approved the final version to be published and agreed to be accountable for all aspects of the work.
Funding This quality improvement work was part of a project on 'Reducing blindness due to retinopathy of prematurity by improving quality of care in special care newborn units' funded by Queen Elizabeth Diamond Jubilee Trust, UK. Publication of this article is made open access with funding from the Nationwide Quality of Care Network.
Competing interests None declared.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Patient consent for publication Not required.
Ethics approval The project was approved by the ethics committees of PGIMER Chandigarh; AIIMS, New Delhi and GMCH, Chandigarh. NHM MP also approved the project. We implemented evidence-based strategies at the system level. No alterations were made in the management of individual patients.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement Data are available on request.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http:// creativecommons. org/ licenses/ by-nc/ 4. 0/. | 2021-08-05T06:18:20.877Z | 2021-07-01T00:00:00.000 | {
"year": 2021,
"sha1": "02efff908e571988c6f22d55f24713e77b73d63f",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopenquality.bmj.com/content/bmjqir/10/Suppl_1/e001386.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fcf6e9db29d2b8731a531272d94864fee591e665",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252153189 | pes2o/s2orc | v3-fos-license | HSOA Journal of Genetics & Genomic Sciences ACE, PPARG, SIRT1 Gene Polymorphisms but Not PPARGC1A Polymorphism are Risk Factors for Gestational Diabetes in the Russian Population
Gestational Diabetes Mellitus (GDM) is the most frequently encountered extragenital pathology of gestation and represents a serious medical and social problem, increasing the incidence of unwanted pregnancy outcomes for both the mother and the fetus. The pathogenesis of GDM is complex and includes risk factors such as age, obesity, and a family history of diabetes. Studies have shown that genetic factors also play a role in the pathogenesis of GDM. The aim of our study was to assess the effect of the Single Nucleotide Polymorphisms (SNPs) rs4646994 of the ACE gene, rs8192678 of the PPARGC1A gene, rs1801282 of the PPARG gene and rs7895833 of the SIRT1 gene on the development of GDM in the Russian population. The study used genomic DNA derived by the phenol-chloroform extraction method from venous blood samples of 272 pregnant women, including DNA samples of 136 pregnant women with GDM and DNA samples of 136 pregnant women with normal glucose tolerance. Genotyping of the selected polymorphisms was performed by real-time PCR with detection by competing TaqMan probes. A statistically significant association with GDM was found in homozygous carriers of the I allele of rs4646994 and heterozygous carriers of the G allele of rs7895833 in the general inheritance model, and in heterozygous carriers of the G allele of rs1801282 in the general and dominant inheritance models. The data we obtained point to the need to further investigate the polymorphic loci we selected in a larger patient sample, which will enable using these genetic markers in the future as assessment criteria in the individual prognosis of preeclampsia development in pregnant women with GDM, so that efficient preventive measures can be taken in time to improve the pregnancy outcome.
Background
GDM is a diabetes that is first diagnosed in the second or third trimester of pregnancy that is not clearly either preexisting type 1 or type 2 diabetes [1]. Hyperglycemia in the mother, not diagnosed at the stage of pregnancy planning and in the first trimester, can have an adverse effect on both the pregnant woman's body and the fetus [2].
Over the past 20 years, the worldwide prevalence of gestational diabetes mellitus has increased, and prevalence rates vary from 1% to 28% depending on population characteristics, screening methods and diagnostic criteria [3,4]. This indicator is estimated at 4.5% in Russia [5]. It is expected that by 2045 the prevalence of GDM will reach 16% in all regions of the world [6]. GDM is characterized by the relative inability of pancreatic β-cells to adequately respond to the increased need for insulin during pregnancy, which contributes to the development of hyperglycemia of varying degrees [7]. Under conditions of hyperglycemia, glucose freely penetrates the vascular endothelium and causes functional disorders of its cells [8]. In addition, lipid metabolism may change during pregnancy due to the effects of estrogen and insulin resistance. The development of dyslipidemia also leads to endothelial dysfunction [9].
To date, about 40 genetic loci associated with the synthesis and secretion of insulin, with the transmission of the insulin signal, and with the regulation of carbohydrate and lipid metabolism have been identified [10]. Previous studies have shown that some of the genetic loci that predispose to the development of type 2 diabetes may also predispose to GDM [11,12].
Angiotensin-Converting Enzyme (ACE) not only plays a key role in the regulation of blood pressure, but also participates in the processes of the local renin-angiotensin system in pancreatic tissues, regulating local blood flow and insulin biosynthesis in islet β-cells [13,14]. The gene encoding the angiotensin-converting enzyme is located at locus 17q23.3 on chromosome 17 on the plus strand of DNA. Of the more than 100 allelic variants of the ACE gene, the most important is the insertion-deletion I/D polymorphism in the 16th intron, which consists in the insertion (I) or loss (D) of an Alu repeat. Deletion of the Alu repeat leads to increased expression of the ACE gene [15]. According to various studies, the I/D polymorphism of the ACE gene increases the risk of such diseases as type 2 diabetes mellitus, diabetic nephropathy, diabetic neuropathy, and diabetic retinopathy [16]. SIRT1 belongs to a family of proteins with deacetylase activity, regulates energy balance and lipid homeostasis, and protects the structure and functions of the microvascular endothelium [17,18]. SIRT1 is highly expressed in pancreatic β-cells and has the ability to enhance glucose sensitivity [19]. A decrease in SIRT1 expression is facilitated by increased insulin resistance and an inadequate response of β-cells, followed by an increase in glucose concentration, which may play a key role in the development of vascular dysfunction [20,21]. The SIRT1 gene is located on chromosome 10 at locus q21.3 and comprises 11 exons. Previous studies have shown that point mutations are more common in promoters than in coding regions of a gene and can affect its expression [22]. The rs7895833 polymorphism in the SIRT1 gene promoter region is associated with type 2 diabetes and obesity, according to a few studies [23,24].
PGC-1α belongs to the family of transcription coactivators and is involved in the regulation of cellular energy metabolism, mitochondrial biogenesis in the vascular endothelium, and carbohydrate and lipid metabolism [25][26][27]. Although the expression of PGC-1α in the pancreas is low compared with tissues such as the heart, skeletal muscle, liver, brown adipose tissue, and brain, it plays an important role in glucose transport in pancreatic β-cells [28]. The PGC-1α protein is encoded by the PPARGC1A gene, which is mapped to locus 4p15.2 and consists of 24 exons. Among the numerous variations in the PPARGC1A gene, of special interest is the substitution of nucleotide G by nucleotide A at position 1444 in exon 8, which causes the substitution of glycine by serine at position 482 of the PGC-1α protein (Gly482Ser polymorphism) and is accompanied by a reduction in PGC-1α gene expression [29]. The single nucleotide polymorphism of PPARGC1A (rs8192678) is associated with type 2 diabetes and its complications, relative obesity risk and insulin resistance [30].
Deacetylation of PGC-1α through the action of SIRT1 increases the activity of PGC-1α, which leads to the coactivation of several transcription factors, including nuclear receptors such as PPARγ [31,32].
Peroxisome Proliferator-Activated Receptor-γ (PPARγ) is a ligand-dependent transcription factor and belongs to the nuclear hormone receptor superfamily. PPARγ plays an important role in lipid and glucose metabolism and controls endothelial function [33,34]. A moderate amount of PPARγ is expressed in pancreatic β-cells [35]. El Midaoui et al., (2006) showed that the infusion of insulin and glucose is a factor contributing to the decrease in the level of PPARγ protein [36]. The PPARG gene is located on the human chromosome locus 3p25 and consists of nine exons. The more common allele (C) of the single nucleotide polymorphism rs1801282 in exon B of the PPARG gene, also known as Pro12Ala, encodes the amino acid "Pro". The Pro to Ala substitution at codon 12 is associated with a decrease in PPARγ activity, which may be a risk factor for obesity, type 2 diabetes and GDM [37][38][39][40].
Identifying additional genetic risk markers for GDM may be useful in early diagnosis and would also allow for earlier prevention and treatment. In this regard, the purpose of this study was to study the relationship between GDM and the selected polymorphisms of the ACE, SIRT1, PPARG, and PPARGC1A genes, which affect the function of pancreatic beta cells and vascular endothelial cells, glucose sensitivity, carbohydrate and lipid metabolism.
Research objects
The design of the study and the use of human material were approved by the ethics committee of the Research Institute of General Pathology and Pathophysiology. The study included patients who, in the period from April 2019 to December 2022, were observed and delivered in the Maternity Department of the State Clinical Hospital No. 29 (N.E. Bauman Hospital) of the Healthcare Department of Moscow. All respondents were native Russian speakers of indeterminate ethnicity (due to the ethical standards of the local medical register) and gave written consent to participate in the study. The diagnosis of GDM was established in accordance with the IADPSG recommendations and based on the criteria of the Russian National Consensus clinical guidelines "Gestational diabetes mellitus: diagnosis, treatment, postpartum care" [41,42]. The exclusion criteria were type 1 and type 2 diabetes mellitus, acute and chronic diseases in the acute stage, and autoimmune, neuropsychiatric and oncological processes of any localization. The study did not include women with multiple pregnancies, other pregnancy complications, or disorders affecting glucose metabolism. The required sample size was estimated with the QUANTO quantification software (Version 1.2.4, https://bio.tools/QUANTO), which takes into account the frequency of SNPs in the population and the prevalence of the disease [43]. In accordance with the above parameters, a sample size of 136 case-control pairs was required to identify the association between the selected polymorphisms and the risk of GDM. Blood samples were collected from pregnant women with GDM and pregnant women with normal glucose tolerance. All blood samples were obtained by venipuncture after an overnight fast and stored at -20 ℃ until analysis.
Genomic DNA extraction and Polymerase Chain Reaction (PCR)
DNA was extracted from venous blood with the standard phenol-chloroform extraction method of Maniatis et al. [44]. The lysis of blood cell elements was conducted with the Kunkel method [45]. The high-molecular-weight DNA was desiccated at ambient temperature and dissolved in TE buffer, and the resulting DNA was stored at -20 ℃. All DNA extractions were performed by a single investigator only.
Genotyping of the polymorphisms rs7895833 in the promoter region of the SIRT1 gene, rs8192678 within the coding region of the PPARGC1A gene, rs1801282 of the PPARG gene and rs4646994 of the ACE gene was performed by real-time PCR using the technology of competing TaqMan probes, according to a method taken from the literature. All primers and TaqMan probes were synthesized by Evrogen LLC, Russia (Table 1).
Amplification was carried out in the CFX 96 programmable amplifier (Bio-Rad, USA) with the following thermocycling parameters for rs7895833, rs8192678, rs1801282 and rs4646994: initial denaturation for 5 minutes at 95℃, followed by 40 cycles of amplification. To eliminate genotyping errors, 30% of randomly selected samples were re-genotyped and the results obtained were additionally evaluated.
Statistical analysis
Statistical analysis was performed using SPSS 17.0 (SPSS, Chicago, IL, USA). Continuous data were shown as mean ± standard deviation (SD) if normally distributed. Differences in age between groups were analyzed using Student's t-test. The Hardy-Weinberg equilibrium test was performed using the chi-square test in cases and controls separately for each variant before association analysis. Differences in allele and genotype frequencies between groups were analyzed using Pearson's chi-square test. Logistic regression analysis was used to evaluate associations between SNP genotypes and alleles and GDM risk by calculating Odds Ratios (ORs) and their 95% confidence intervals (CIs). A factor was regarded as a significant risk factor for the pathology if the OR and its 95% CI were greater than 1. A p value ≤ 0.05 was considered statistically significant.
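As an illustration of the association measure described above, the sketch below computes an odds ratio and its 95% confidence interval from a 2×2 carrier table; the counts are invented and do not correspond to Table 2.

```python
# Illustrative computation (with made-up counts, not the study's data) of an
# odds ratio and 95% CI for carriers vs non-carriers of a risk allele in cases
# and controls, mirroring the logistic-regression output described above.
import math

def odds_ratio_ci(case_exposed, case_unexposed, ctrl_exposed, ctrl_unexposed, z=1.96):
    or_ = (case_exposed * ctrl_unexposed) / (case_unexposed * ctrl_exposed)
    # Woolf (log) method for the confidence interval.
    se = math.sqrt(1 / case_exposed + 1 / case_unexposed +
                   1 / ctrl_exposed + 1 / ctrl_unexposed)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical carrier counts among 136 cases and 136 controls.
print(odds_ratio_ci(74, 62, 37, 99))
```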
Clinical characteristics
The study used DNA samples from 136 women with GDM (mean age 31.72±4.95 years) and 136 pregnant women with normal glucose tolerance (mean age 31.01±4.83 years). There were no significant differences in the average age indicators (p>0.05).
Association analysis
Analysis of the polymorphic loci rs4646994 of АСЕ gene, rs1801282 of the PPARG gene, rs8192678 of the PPARGC1A gene and rs7895833 of the SIRT1 gene made it possible to estimate the frequency of occurrence of alleles and genotypes of the polymorphic loci of the studied genes ( Table 2).
The distribution of frequencies of genotypes and alleles of polymorphic loci of the studied genes in the control group corresponded to that expected under the Hardy-Weinberg equilibrium.
The distribution of the rs4646994, rs1801282, rs8192678 and rs7895833 genotypes in the GDM group differed from that expected under HWE. The probable reason for the deviation of the observed genotype frequencies for these polymorphisms in this group is not a genotyping error but the association of the loci with the disease.
Significant differences in the frequency of alleles and the distribution of genotypes between the studied groups were revealed. The prevalence of the I allele and the homozygous II genotype of rs4646994 was higher in the group of pregnant women with GDM compared to the control group (79.0% vs. 51.8%, p=0.002; 54.4% vs. 27.2%, p=0.007). The frequency of occurrence of the G allele and the heterozygous CG genotype of rs1801282 was higher in the group with GDM compared to the control group (22.4% vs. 12.5%, p=0.002; 36.0% vs. 12.5%, p=0.009). The frequency of occurrence of the G allele and the heterozygous AG genotype of rs7895833 was higher in the group of pregnant women with GDM than in the control group (31.2% vs. 21.3%, p=0.009; 55.1% vs. 33.8%, p=0.002). Differences in genotype frequencies as well as in rs8192678 allele frequency between GDM and healthy pregnancies were not statistically significant (p>0.05), so we excluded this variant from further analysis. The analysis of associations established the relationship of the polymorphisms rs4646994 of the ACE gene, rs1801282 of the PPARG gene, and rs7895833 of the SIRT1 gene with GDM (Table 3). Thus, the homozygous II genotype of rs4646994 and the heterozygous AG genotype of rs7895833 in the general model of inheritance are genetic predisposition factors for this pregnancy complication, increasing the risk of its development by more than 2 times (p=0.009 and p=0.001, respectively). The heterozygous CG genotype of rs1801282 in the general and dominant inheritance models increases the risk of developing GDM by 2.1 (p=0.007) and 2.2 (p=0.003) times, respectively. A significant GDM risk association was observed in carriers of the G allele of rs1801282 in an additive inheritance model (2.024; p=0.002). The risk of developing GDM increased by 1.6 times in carriers of the I allele of rs4646994 (p=0.002) and of the A allele of rs7895833 (p=0.009).
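For readers unfamiliar with the inheritance models referred to above, the following sketch shows one common way of coding genotypes for such models; this encoding is an assumption for illustration and is not taken from the authors' analysis.

```python
# Common coding of inheritance models for logistic regression (illustrative
# assumption, not the authors' analysis script).
def encode_genotype(n_risk_alleles: int, model: str) -> int:
    """n_risk_alleles: 0, 1 or 2 copies of the risk allele."""
    if model == "additive":
        return n_risk_alleles              # 0 / 1 / 2
    if model == "dominant":
        return int(n_risk_alleles >= 1)    # carrier vs non-carrier
    if model == "recessive":
        return int(n_risk_alleles == 2)    # risk homozygote vs others
    raise ValueError(f"unknown model: {model}")

for model in ("additive", "dominant", "recessive"):
    print(model, [encode_genotype(g, model) for g in (0, 1, 2)])
```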
A review of the world literature shows that Gestational Diabetes Mellitus (GDM) is the most frequently encountered extragenital pathology of gestation and represents a serious medical and social problem, increasing the incidence of unwanted pregnancy outcomes for both the mother and the fetus. The study of gene polymorphisms that affect the function of pancreatic beta-cells and vascular endothelial cells, insulin resistance, and carbohydrate and lipid metabolism will allow not only early diagnosis, but also earlier preventive and therapeutic measures.
The main goal of this study was to examine the relationship between the polymorphic loci rs4646994 of the ACE gene, rs7895833 of the SIRT1 gene, rs1801282 of the PPARG gene, and rs8192678 of the PPARGC1A gene and the risk of developing gestational diabetes mellitus in the Russian population. The study involved 272 pregnant women (136 women with GDM and 136 pregnant women with normal glucose tolerance). Our results showed that the ACE, PPARG, and SIRT1 gene polymorphisms were associated with the risk of developing GDM. The PPARGC1A gene polymorphism was not associated with gestational diabetes mellitus, which is consistent with the results of studies by Leipold H et al., Shaat N et al., and Franzago M et al. [46][47][48].
According to experimental and clinical studies, the I/D polymorphism of the ACE gene is associated with the risk of developing type 2 diabetes mellitus, diabetic nephropathy, and diabetic retinopathy [16]. The results of association studies of the I/D polymorphism with GDM are contradictory. For instance, Dostálová Z et al., Aggarwal P et al., and Mirfeizi M et al. did not reveal a significant difference in the distribution of genotypes or allele frequency between the control group and women with gestational diabetes mellitus [49][50][51], whereas Khan IA et al. (2014) showed in their study that the frequency of the I/D genotype significantly differs between individuals with and without GDM [52].
The Gly482Ser (rs8192678) polymorphism of the PPARGC1A gene is one of the most studied variants. Research results have shown that the minor allele of this polymorphism is associated with susceptibility to type 2 diabetes, relative risk of obesity, insulin resistance, and decreased beta-cell function [30,[53][54][55]. Even though previous studies suggested that GDM and type 2 diabetes share genetic polymorphisms with the same effect size for the same risk alleles, the findings of the studies by Leipold H et al., Shaat N et al., and Franzago M et al. did not reveal any association between rs8192678 of the PPARGC1A gene and GDM development risk [56][57][58].
The rs1801282 polymorphic variant, also known as Pro12Ala, is associated with obesity, type 2 diabetes, and GDM [37][38][39][40]. However, studies have shown conflicting results regarding the role of Pro12Ala in the development of GDM. Anghebem-Oliveira et al., found no relationship between the rs1801282 polymorphism and the risk of developing GDM in the Brazilian population [56]. The results of the study by Ustianowski et al., indicate that PPARG gene polymorphism (rs1801282) is not a significant risk factor for GDM in the Polish population [57]. Lin et al., after analyzing sixteen studies involving 3129 women with GDM and 7168 without it, found that the protective G allele of the rs1801282 polymorphism was associated with a reduced risk of GDM in Asians, especially Chinese, but not South Koreans [58]. Data from meta-analyses of studies of the genetic association of Pro12Ala polymorphism with the risk of developing GDM by Wu et al., and Wang et al., suggest a potential role for the Pro allele in the pathogenesis of GDM in Asian populations, but not in the Caucasian population [39,40].
According to previous studies, the rs7895833 polymorphism in the promoter region of the SIRT1 gene is associated with type 2 diabetes mellitus and obesity, but no research has focused on its relationship with GDM [23,24]. It should be noted that studies to identify the association of polymorphic variants of the SIRT1 gene with the risk of developing GDM have not been previously conducted. The genetic variant rs7895833 that we have chosen in association with GDM has also not been studied.
In conclusion, the results of this study suggest that the ACE (rs4646994), PPARG (rs1801282) and SIRT1 (rs7895833) gene polymorphisms are significant risk factors for the development of GDM in the Russian population. The small sample size of the study groups is the key limitation of this study. Nonetheless, the data obtained point to the need to further investigate the polymorphic loci we selected in a larger patient sample, which will enable using these genetic markers in the future as assessment criteria in the individual prognosis of pregnant women with GDM, so that efficient preventive measures can be taken in time to improve the pregnancy outcome.
Data Availability
The data used to support the findings of this study are included within the article. | 2022-09-16T16:31:44.815Z | 2022-08-01T00:00:00.000 | {
"year": 2022,
"sha1": "d8391dcdfd876a55527dac6939bbdc5176e40c92",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.24966/ggs-2485/100035",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d8391dcdfd876a55527dac6939bbdc5176e40c92",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
237197240 | pes2o/s2orc | v3-fos-license | The role of the particle size reduction and morphological changes of solid substrate in the ultrasound-aided enzymatic hydrolysis of cellulose
Graphical abstract
Introduction
Owing to the growing environmental awareness, the extraction of cellulose from the lignocellulose-based biomass has great industrial importance. The conversion of cellulose into valuable products and their utilization have attracted significant attention and generated innovations in several industrial sectors. Conversion by enzymes has huge potential since the enzymes are highly selective and act at low temperatures and near-neutral pH. This biotechnological process is essential in the fuel, food and chemical industries [1,2].
The rate and extent of the heterogeneous enzymatic hydrolysis of cellulose have been widely investigated. Despite the synergy of enzymes (i.e. endo- and exo-glucanases, β-glucosidase), the hydrolysis of cellulose is a time-consuming process. The action of enzymes on cellulose may be affected by various factors including the properties of the solid substrate (format and size, morphology, type and amount of accompanying noncellulosic components, etc.), the application and intensity of different agitating systems (magnetic stirring, horizontal shaking, vertical agitation, etc.) and the parameters of hydrolysis (substrate and enzyme loading, temperature, duration, etc.) [3][4][5]. Among the several factors that can have a significant effect on the rate of enzyme-catalyzed depolymerization, the substrate properties were found to be limiting, since the accessibility of cellulose to the enzyme depends on the physical-chemical and structural parameters of the cellulosic material [6,7].
The accessible specific surface area of cellulose is determined by the particle size and pore volume of the substrates. The reduction of particle size by various grinding and milling methods is often associated with the disruption of the original ordered structure of cellulosic materials, resulting in a decrease in crystallinity and degree of polymerization, and an increase in water sorption capacity. These changes are beneficial for the enzyme-catalysed digestibility of cellulose [8,9].
Several publications report that the synergetic action of endo- and exo-cellulases on the solid surface of cellulosic materials can be accelerated by ultrasonication with frequencies of 20-100 kHz. This low-frequency or power ultrasound is associated with acoustic cavitation. Questions addressing the potential benefits of ultrasonication in the heterogeneous enzymatic hydrolysis of pure cellulose and cellulose in biomass have become more complex since additional factors, also significantly affecting the process, are involved, including the amplitude of ultrasound, the duty cycle, and the presence/absence of a rigid reflector in the sonicated vessel. Nevertheless, the positive contribution of power ultrasound to enzyme-catalyzed heterogeneous processes is well-known and widely confirmed [4,7,10,11].
Moreover, it needs to be considered that during acoustic cavitation the extremely high local temperature, pressure, and shearing forces (attributable to the collapse of cavitation bubbles and the generation of high-velocity microjets) can also affect both the enzyme molecule and the solid substrate, modifying their chemical and physical properties. Previous studies have shown that intensive ultrasonication can change the protein molecules, resulting in a decrease in the activity of enzymes [12][13][14].
Additionally, ultrasonic irradiation can also affect the solid cellulosic substrate by disintegrating the fibrous structure and reducing the particle size. Since the properties of solid substrates have a crucial effect on the efficiency of enzyme catalyzed hydrolysis, the particle size reduction caused by cavitation is often considered as one of the possible advantageous effects of power ultrasound. Numerous studies have investigated the ultrasound-aided changes in the particle size of cellulosic and lignocellulosic materials, and the circumstances of ultrasonication, such as the applied type of reactor, intensity, frequency, duty cycle, temperature and its control, duration, solid substrate loading, etc. [15,16]. Nevertheless, the effect of particle size on the enzymatic hydrolysis of cellulose is still contradictory in the scientific literature [17][18][19], since the morphology, chemical composition, particle size of the original substrate, and the methods used for determining the size of particles (e. g. sieving, microscopy, laser diffraction, etc.) can also affect the final results [20]. Furthermore, in most of the papers, the particle size and its change are usually characterized by an averaged particle size value [21], even though a single number cannot describe the size of a non-spherical particle.
The complex effects of ultrasonication on the particle size of fibrous cellulosic materials and the behavior of disintegrated particles, with varying particle sizes both in simultaneously or subsequently applied enzymatic digestion, require a more precise description. Ideally, not only a central point of the distribution but also one or more values characterizing the width of the distribution need to be reported. Thus, our research was undertaken to obtain more accurate information (1) about the rate of ultrasound-aided particle size reduction of cellulosic powders and (2) about the mechanism by which the particle size reduction achieved in this way can improve the enzymatic hydrolysis of cellulosic materials.
For this, two low-frequency ultrasonic devices (a horn-type reactor and an ultrasonic bath) were used and their effects on the particle size of different cellulosic materials (such as bleached cotton and linen powders) were evaluated. Subsequently, the ultrasonicated powders were subjected to enzymatic hydrolysis by cellulase, and the efficiency of the process was evaluated, elucidating the importance of the particle size reduction caused by cavitation. Strictly speaking, the two ultrasonic devices, i.e. the ultrasonic horn and the ultrasonic bath, cannot be compared directly, because the focusing of energy and the position of the transducers are different. Nevertheless, the evaluation of these two widely used ultrasonic devices in the same, well-designed and controlled process is important, since it can contribute to the selection of the right equipment for the successful implementation of a specific task.
Thus, in this study, an attempt was made to separate the individual contribution of particle size reduction to the enzyme-aided hydrolysis of solid cellulosic substrates. Fig. 1 presents the key methodologies applied in this study, the powerful techniques (X-ray diffraction /XRD/, particle size analysis /PSA/, scanning electron microscopy /SEM/, UV-VIS spectroscopy) and the simple tests (water sorption capacity /WSC/, water retention value /WRV/, viscometric degree of polymerization /DPv/) used for describing the changes, evaluating the observations and supporting the main statements. The photos of the experimental set-up for ultrasonic and enzyme treatments can be seen in Fig. S1 (in Supplementary data).
Materials
All chemicals and a commercial acidic cellulase enzyme (Celluclast 1.5 L, 80 FPU/ml, produced by Trichoderma reesei) were purchased from Sigma-Aldrich and used in the experiments. The maximal activity of the enzyme is near 50 °C and pH 5. 100% bleached cotton and linen plain weave fabrics with a weight per unit area of 157 and 200 g/m², respectively, regarded as cellulose sources and obtained from Pannon-Flax Linen Weaving Co. Győr, Hungary, were selected as model substrates for the research. Approximately 1.5 g of the fabrics were ground in a ball-mill (Mixer Mill MM400, Retsch GmbH, Germany) at a frequency of 30 1/s, with 11 stainless steel balls, for 2.5 min, and the resulting cellulose powders were used in the experiments. The selection of the bleached cotton and linen as cellulose sources can be explained by their high cellulose content and the significant difference in the structure of the fibers. While cotton is a single biological cell with a compact structure resistant to disruption into fibrils, linen is a bast fiber with a multicellular fiber structure, where fiber bundles can be easily disintegrated into elementary fibers by physical, chemical and enzymatic actions [8,22].
Ultrasonication and cellulase enzymatic hydrolysis
The powder of both cellulosic fabrics was subjected to ultrasonic and enzymatic treatments. Suspensions of the cellulosic powders with a concentration of 5 g/l and with a total volume of 125 ml were prepared with a 0.05 M acetate buffer (pH 5). Next, the suspensions were thermostated to 50 ± 2 °C and ultrasonicated in a range of 0-60 min either with a horn-type reactor (Sonics & Materials, Model: Vibra-Cell VC505, driving frequency of 20 kHz, power of 500 W, replaceable tip with a diameter of 13 mm), or with an ultrasonic bath (Elmasonic S 40H, 4.5 L, driving frequency of 37 kHz, power of 370 W). The sonicated samples were labelled as US-H-X and US-B-X, respectively, where X shows the duration of sonication in min.
The horn-type reactor was used at 60% amplitude and during sonication the suspensions were also stirred with a magnetic bar at 250 rpm. Control treatments were always carried out simultaneously, with only magnetic stirring but without ultrasonication, and denoted as MS-X. The configuration of the experimental apparatus with the horn-type reactor is presented in detail elsewhere [14].
To gain information about the impact of sonication on the rate of disintegration of solid substrates, the particle size of cellulosic powders was tracked during a 60-minute period and measured at time points 0, 2, 5, 10, 15, 30, 45, and 60 min. Since the ultrasonication with a horn-type reactor was much more efficient than an ultrasonic bath in terms of disintegration of the solid cellulosic particles, in the subsequent experiments the treatment with a horn-type reactor for 15 min (US-H-15) was selected as pretreatment and applied before the enzymatic hydrolysis.
The enzymatic hydrolysis of cellulosic powders was performed under isothermal conditions at 50 °C in a thermostated shaker (Medingen SWB 20, Germany) following the US-H-15 or the control (MS-15) treatments. Cellulase enzyme with concentrations of 3.2, 6.4 and 12.8 FPU/g substrate was added to the ultrasonicated and control cellulose suspensions. Three parallel experiments were performed for 60 min with one sample taken from each flask at each time interval (0, 5, 10, 20, 30, 60 min). Next, the concentration of the reducing sugars (in glucose equivalent) released in the enzyme reaction was measured as described by Miller [23]. Dinitrosalicylic acid served as a stop reagent and the liberated reducing sugars were determined by UV-VIS spectrophotometry at 540 nm with a Unicam 500 UV-Vis spectrophotometer. The reported values are the results of three parallel measurements.
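As a minimal illustration of the quantification step described above, the sketch below converts a blank-corrected absorbance at 540 nm into a glucose-equivalent reducing sugar concentration using a linear calibration. The calibration slope, intercept and absorbance values are hypothetical placeholders, not the study's measured data; a real calibration curve would have to be prepared from glucose standards on the same instrument.

```python
# Hedged sketch: DNS-assay readout to reducing sugar concentration (glucose equivalent).
# Calibration constants and absorbances below are invented for illustration only.

def glucose_equivalent(a540, slope=1.25, intercept=0.02, dilution=1.0):
    """Return reducing sugar concentration (g/l, glucose equivalent).

    a540      : blank-corrected absorbance at 540 nm
    slope     : calibration slope (absorbance per g/l glucose), hypothetical
    intercept : calibration intercept (absorbance), hypothetical
    dilution  : dilution factor applied before measurement
    """
    return max(0.0, (a540 - intercept) / slope) * dilution

# Averaging triplicates mirrors the three parallel measurements reported in the text.
absorbances = [0.412, 0.398, 0.421]
mean_conc = sum(glucose_equivalent(a) for a in absorbances) / len(absorbances)
print(f"Reducing sugars: {mean_conc:.3f} g/l (glucose equivalent)")
```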
Particle size analysis
A Horiba Partica LA-950V2 laser diffraction particle size analyzer (Horiba, Japan) was used to define the dimensions of the cellulose particles in the untreated powders as well as the control, ultrasonicated and enzyme-treated suspensions. To adequately describe the particle size of the suspensions, the mean (average) as well as the median (Dv50, for volume distribution) values were used, where Dv50 represents the diameter below which 50% of the particles can be found. Further characteristics such as the Dv90 and Dv10 values were also given; below Dv90 and Dv10, 90% and 10% of the total amount of particles can be found, respectively. In some cases, the width of the distribution was characterized by the Span value calculated using the following equation (Eq. (1)): Span = (Dv90 - Dv10) / Dv50.
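The short sketch below shows how Dv10, Dv50 and Dv90 can be read off a volume-weighted size distribution and how the Span follows from them, using the standard width-of-distribution definition given as Eq. (1) above. The bin sizes and volume fractions are made-up illustrative values, not the measured distributions of the study.

```python
import numpy as np

# Hedged sketch: percentile diameters (Dv10/Dv50/Dv90) and Span from a
# volume-weighted particle size distribution. Data below are illustrative only.
sizes = np.array([5, 15, 50, 100, 250, 500], dtype=float)   # particle size, micrometres
vol_frac = np.array([0.05, 0.20, 0.30, 0.25, 0.15, 0.05])   # volume fractions, sum = 1

cumulative = np.cumsum(vol_frac)

def dv(fraction):
    """Interpolate the diameter below which `fraction` of the total volume lies."""
    return float(np.interp(fraction, cumulative, sizes))

dv10, dv50, dv90 = dv(0.10), dv(0.50), dv(0.90)
span = (dv90 - dv10) / dv50   # Eq. (1)
print(f"Dv10={dv10:.1f} um, Dv50={dv50:.1f} um, Dv90={dv90:.1f} um, Span={span:.2f}")
```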
Methods for characterizing the morphology and bulk properties
A Philips PW 1710/PW 1820 X-ray diffractometer was employed at 2θ = 4-40° to characterize the structure of bleached cotton and linen fabrics and their powders. The crystallinity index (CrI%) was determined by the following equation (Eq. (2)): CrI% = (I200 - IAM) / I200 × 100, where IAM and I200 refer to the intensity of diffraction at 2θ = 18° and the maximum intensity of the 200 lattice diffraction at 2θ = 22.7°, respectively [24]. Images of the original fibrous and ground samples, as well as the ground powders before and after ultrasonication, were taken with a JEOL JSM 6380 LA scanning electron microscope. To induce conductivity, all samples were sputter-coated with gold for analysis.
For estimating the less ordered regions in cellulose, the water sorption capacity of the fibrous and ground samples was determined by exposing the triplicate samples, previously dried over P2O5 for 5 days, to an atmosphere of 65% rh at 25 °C for 5 days. CUENE (cupriethylenediamine) fluidity was measured and converted to the viscometric degree of polymerization of cellulose [25]. For characterizing the accessibility of the cellulose samples to aqueous reagents, the water retention value (WRV) of fibrous and ground cellulose was determined by soaking the samples in distilled water for 24 h at room temperature and then centrifuging with a Hermle Z326 K centrifuge (Labnet, USA) at 5500 rpm for 10 min. The weight of the samples was measured right after centrifugation (Wwet) and after drying (Wdry) at 105 °C. The WRV is the weight difference expressed as a percentage of the dry weight and was calculated with the following equation (Eq. (3)): WRV (%) = (Wwet - Wdry) / Wdry × 100.
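The two simple metrics defined above can be computed directly from instrument readings; the sketch below does so with made-up example values rather than the study's measurements, assuming the Segal-type CrI expression of Eq. (2) and the WRV expression of Eq. (3).

```python
# Hedged sketch: morphology metrics from raw readings (example values only).

def crystallinity_index(i_200, i_am):
    """CrI (%) from XRD intensities: i_200 at 2-theta ~22.7 deg, i_am at ~18 deg."""
    return (i_200 - i_am) / i_200 * 100.0

def water_retention_value(w_wet, w_dry):
    """WRV (%): water retained after centrifugation relative to dry weight."""
    return (w_wet - w_dry) / w_dry * 100.0

print(f"CrI = {crystallinity_index(1200.0, 220.0):.1f} %")   # illustrative intensities
print(f"WRV = {water_retention_value(1.68, 1.00):.1f} %")    # illustrative weights, g
```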
Bulk properties and morphology of the fibrous and ground cellulosic materials
Initially, the original fabrics before and after ball-milling were investigated and characterized by selected physical properties that can have a significant role in enzyme-aided hydrolysis. The difference between the fibrous and ground forms was then quantified. Table 1 presents a summary of these results including data on the particle size, degree of polymerization, crystallinity, and morphology. The latter was characterized by scanning electron microscopy and by measuring the interaction with liquid water and water vapor. Fig. 2 shows the SEM images of the original fibers from fabrics and the particles of powders. Clearly, even a short ball-milling for 2.5 min can destroy the fibrous structure of both fibers, resulting in cellulosic powders with lower crystallinity and degree of polymerization. These changes were accompanied by a significant modification of morphology leading to a considerable increase in water sorption and water retention ability.
As the SEM images of cellulosic powders reveal, besides small particles, some longer fiber fragments remained, especially in the linen powder (Fig. 2d). The averaged particle size measured by laser diffraction was 49 µm for cotton, while it was about 123 µm for linen. Ball-milling had an effect only on the length of cotton, producing small fiber particles, but did not significantly modify the characteristic appearance, i.e. the convolutions in the ribbon-like fiber (Fig. 2c). For bast linen, however, the well-known multicellular fiber structure disappeared and the powder consists of flattened elementary fibers and fiber bundles (Fig. 2d). The new morphology and surface properties of ground cotton and linen can be characterized by increased water sorption and water retention as well as lower crystallinity (Table 1). The water sorption capacity, measuring the accessibility of cellulose to water vapor, increased significantly upon ball-milling of the bleached cotton and linen (from 6.74 to 7.93% and from 7.09 to 9.14%, respectively) and correlates well with the slightly lower crystallinity of the powders (from 83 to 81% for cotton and from 85 to 82% for linen). Since water molecules cannot penetrate the crystalline regions, both WSC data indicate an increase in the disordered regions of ground cellulose. The water retention capacity of the fibers depends on the pore structure and surface wetting properties [26]. Since bleached cotton and linen fabrics were used as cellulose sources, and their wettability characterized by the water drop test (method not detailed here) was perfect with a wetting time lower than 1 s, the water retention ability is mainly determined by the pore structure. By destroying the fibrous structure and creating particles with a rough surface by ball-milling, the water retention ability of both powders increased significantly compared to the fibrous materials, and the differences in WRV between fibrous and ground cotton and linen were 10.2% and 20.0%, respectively. As the SEM images of the ground samples reveal, the surface of the particles, especially for linen, is irregular with cracks and pores capable of retaining water, resulting in the highest WRV (68.1%) among the samples investigated (Table 1).
Ball-milling extensively degraded the cellulose chains in both bleached cotton and linen, and the loss in DP was 43.7 and 41.5%, respectively. Surprisingly, in a previous study, a significantly lower loss in DP was measured after ball-milling of the raw cotton and linen (14.5 and 30.5%, respectively), and the accompanying increase in water sorption capacity was also more moderate [8] in comparison to the data presented here for the bleached cellulose samples. This indicates that the original, compact structure of raw cotton and linen fibers is more resistant to the destructive effect of ball-milling than that of the bleached fibers with traditional process history including harsh alkaline and oxidative treatments under industrial circumstances.
Ultrasonication of bleached cotton and linen powders either with a horn-type reactor or a sonication bath
Ultrasonication was used as a pretreatment of cellulose suspensions. The experiments aimed (1) to get information about the behavior of cotton and linen particles in aqueous suspension in an ultrasonicated environment, and (2) to evaluate the effect of various types of equipment (ultrasonic horn and bath) on the properties of the cellulose powders. Laser diffraction particle size analysis was the primary test method applied in this study. Besides PSA, changes in morphology induced by ultrasonication were characterized by XRD, and the WRV and WSC values were also measured.
Comparative analysis of particle size and particle size distribution
As discussed in the previous section, the averaged particle size of initial powders used for the experiments was characterized by the mean size, with values for cotton and linen of 49 and 123 µm, respectively ( Table 1). The effect of magnetic stirring (control treatment) and ultrasonication of cellulose suspensions either with a sonication bath or a horn-type reactor was investigated. The treatments lasted for 60 min. Samples were taken in triplicates from the cellulose suspensions at various time points for the subsequent laser diffraction PSA. Results in Fig. 3 show the mean size values as a function of the time of treatment. Clearly, the disintegration of particles is markedly improved in the sonicated environment compared to the treatment with magnetic stirring (MS) alone. In the latter case, only a slight decrease of 15 % was observable for both fibers after 60 min.
Although there was a reduction in the size of particles in both powders, the ultrasonic treatments obviously led to a stronger decrease in the size of linen particles than that of cotton. The treatment with a horn-type reactor elicited extremely high shearing forces and resulted in a 49% reduction in mean size, whereas upon applying the ultrasonication bath, a 35% reduction was observed at the last time point (60 min) for linen particles. For cotton, however, the level of size reduction was more modest, with final values of 24% and 16%, respectively. Since a single value cannot describe and characterize the real changes in the size of non-spherical particles such as cellulosic powders (see especially Fig. 3b), there is a need to deepen the understanding of the particle modifying effect of ultrasonication. Results prove that magnetic stirring even for 60 min was unable to separate and disintegrate the cellulosic particles and did not substantially modify the PSD curves of the original cotton and linen powders (red and black curves in Fig. 4, respectively). For cotton, the original, stirred (MS), and bath-sonicated (US-B) samples exhibited unimodal and symmetrical PSD curves (Fig. 4 a,c) with low Span values ranging from 1.86 to 2.22 (Table S1 in Supplementary data). A more intensive ultrasonication (US-H) slightly increased the Span to 2.38 (for 15 min), shifting the peaks to smaller particle sizes. The curves remained unimodal but have a perceptible negative skew (Fig. 4a, blue and green curves), indicating the appearance of smaller particles. This is most likely due to the detachment of tiny particles from the surface of fibrous cotton particulates (Fig. 2c), since cotton has a compact structure, which is resistant to further disruption, for example by cavitation-induced shearing or other physical forces [8]. The SEM image of the ultrasonicated cotton particles in Fig. 5a confirms the statements above.
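For completeness, the percentage reductions quoted above follow from the usual relation: percent reduction = (initial mean size - final mean size) / initial mean size × 100. The short check below back-calculates final mean sizes from the stated initial means (49 µm cotton, 123 µm linen) and percentages, so the final values are illustrative rather than measured.

```python
# Hedged arithmetic check of the size-reduction percentages quoted in the text.
def percent_reduction(initial_um, final_um):
    return (initial_um - final_um) / initial_um * 100.0

print(f"Linen, US-H : {percent_reduction(123.0, 123.0 * (1 - 0.49)):.0f} % reduction")
print(f"Cotton, US-H: {percent_reduction(49.0, 49.0 * (1 - 0.24)):.0f} % reduction")
```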
For linen, however, the distribution curve of the original particles is trimodal (Fig. 4b,d) with a large width of distribution (Span: 2.46), indicating the presence of particles in a broad size range of 5 µm to 500 µm. The two main peaks center at about 100 and 250 µm. After magnetic stirring, the particle size distribution was essentially unchanged but an additional, less pronounced peak in the area of small particles near 15 µm appeared (Fig. 4b,d, black and red curves). Ultrasonication with both devices has a significant effect on the particles of linen powder, resulting in the disappearance of the peak of the largest particles (nearly 250 µm) and enrichment of the smallest particles near 15 µm. The latter was more pronounced with the horn-type reactor even after 15 min of treatment (Fig. 4b, blue and green curves). Both changes proved the efficiency of ultrasonication in the disintegration of linen particles, indicating that the shear forces generated by microjets are strong enough for separating the multifibrillar linen into elementary fibers with a possible length of 100 µm. The SEM image of linen particles (Fig. 5b) proves the effectiveness of ultrasonication (US-H) on the disintegration of fiber bundles and separation of elementary fibers from the surface.
The shift of the PSD curves to a smaller size and the unambiguous formation of a peak near 15 µm indicate that the ratio of fine particles in the US-H suspension increased significantly (Fig. 4b). The disintegration of the fibrous particles and generation of a large amount of small particles are usually accompanied by the morphological changes of the cellulosic substrate, most likely leading to improved accessibility [27], which can affect the heterogeneous enzyme-aided hydrolysis of cellulose.
Comparative analysis of the morphology of ultrasonicated cellulosic powders
Since even a 15 min-long ultrasonication had a significant particle size reducing and PSD modifying effect on the cellulosic powders as discussed previously, in the following experiments the changes in the morphology solely of samples treated for 15 min were investigated. As Fig. 6 shows the interaction of cellulosic particles with liquid water (expressed in WRV) has changed significantly, especially for samples treated with a horn-type reactor. Results proved that linen powder ultrasonicated with a horn-type reactor (US-H-15) has a markedly improved water retention value of 85.8%, whereas the WRV of cotton powder is more modest with values of around 59%. The WRVs for MS and US-B samples are significantly lower: the values for cotton are 51.6 and 53.7%, and for linen are 67.9 and 69.8%, respectively. Similarly, a significant increase in water holding capacity of okara fibers occurred by ultrasonication with increasing power density [28].
Since water retention power is a measure of the accessibility of fibers to aqueous baths during wet processing [26], it can predict the effectiveness of ultrasonication (with a horn-type reactor, US-H) as a pretreatment in the cellulase enzymatic hydrolysis of cellulose powders. In that case, the shear forces generated in the ultrasonicated suspension were much stronger than those caused by a moderate magnetic stirring or an ultrasonic bath, and they were able to separate solid particles stuck together in cotton powder or elementary fibers in fiber bundles of linen. The new surfaces contain irregularities, cavities, and external fibrils, that contribute significantly to the increased water holding capacity of the ultrasonicated substrates. It was proven for softwood kraft pulp fibers [27] that not only pores but also external fibrils formed under high shear forces and present on the surface of fibers can retain a significant amount of water. In this study, the surface fibrillation of linen particles by ultrasonication (US-H) was very pronounced (Fig. 5b) that can contribute to the water holding capacity and explain the large value of 85.5% (Fig. 6b).
The slight changes caused by magnetic stirring and ultrasonication were also reflected in the water vapor sorption of the cellulosic samples (Fig. 6). The values for cotton and linen are in a narrow range of 7.76-7.92% and 8.13-9.11%, respectively. Clearly, the applied treatments have only a negligible effect on the free hydroxyl groups in the amorphous regions and at the surface of the crystallites of cellulose that are responsible for water vapor sorption. The data correlate well with the crystallinity (Table S1 and Fig. S2 in Supplementary data), which remained almost unchanged during the processes (CrI: 79-82%). Shearing treatment of bleached cellulose with homogenization resulted in an increase of water retention, but the crystallinity of cellulose remained unchanged [29].
Furthermore, all samples were dried after the applied treatments since both the water retention and water sorption tests require dry cellulosic powders. Since drying causes an irreversible collapse of pores and formation of interfiber hydrogen bonds (Kimura et al., 2020), the test methods likely under-estimate the accessibility. Thus, a comparison of values in Fig. 6 with those of the untreated powders (Table 1, WSC: 7.93 ± 0.05% and 9.14 ± 0.08%; WRV: 55.9 ± 1.3 and 68.1 ± 0.2%, for cotton and linen, respectively) is superfluous.
It is clear that in the present experiment, low-frequency ultrasonic pretreatment with a horn-type reactor was more efficient and resulted in more favorable changes in substrate properties than the ultrasonic bath, presumably facilitating a more efficient enzymatic treatment of the cellulose powder. However, it should also be noted that in other processes, e.g. in enzyme-assisted bleaching of cotton, the most favorable results, either in terms of product properties or process parameters, can be achieved by combining and simultaneously using different ultrasonic devices, which also have different frequencies, geometry and performance [30].
Reducing sugar liberation
Ultrasonication with a horn-type reactor was proven to be the most effective pretreatment by modifying the particle size and particle size distribution as well as the structural and bulk properties of cotton and linen powders. Since the most significant changes in the particle size occurred within 15 min, and beyond that only minor variations were detected, ultrasonication with a horn-type reactor for 15 min was selected as pretreatment and applied prior to the cellulase enzymatic digestion. As a control, a pretreatment under magnetic stirring for 15 min was applied. The enzymatic hydrolysis was performed under horizontal shaking.
Consistent with the structural changes and particle size reduction, an increase in enzymatic digestion of the ultrasonicated powders was anticipated. As the results in Fig. 7a,b reveal, a significant increase can be observed in the rate of hydrolysis with respect to that of the control cellulose. Besides, the concentration of reducing sugars liberated in hydrolysis seems to be directly related to the enzyme concentration.
The comparison of reducing sugar concentration data at the 60 min time point (Table S2 in Supplementary data) reveals that the substrate modifying effect of ultrasonication, such as particle size reduction and an increase in accessibility, contributed significantly to the efficiency of the subsequently applied enzymatic hydrolysis and resulted in an increase of the hydrolysis products. The difference in reducing sugar concentrations between the ultrasonicated and control samples at different enzyme loads is in the range of 12-27% and 13-39% for cotton and linen, respectively, which can be exclusively attributed to the advantageous interaction between the solid substrate and ultrasonication. It was also proven that at smaller particle sizes (33-37 µm) of the cellulosic substrate (such as oak sawdust), a higher rate of enzymatic digestion was measured, and after 72 h of hydrolysis an approximately 50% increase in glucose concentration was detected [31]. However, a further decrease in particle size to less than 1 µm did not increase the concentration of reducing sugars or the rate of hydrolysis [20]. For microcrystalline cellulose pretreated with homogenization, the increased WRV of the substrate correlated with the enzymatic hydrolysis rate [29].
Additionally, at the 15 min time point (Fig. 7, Table S2 in Supplementary data) the difference is even more significant, as the increase has almost reached 50% and 80% for cotton and linen, respectively, with an enzyme load of 12.8 FPU/g cellulose. These data can provide an opportunity to estimate separately the pure and undoubtedly positive contribution of ultrasonication (15 min) to the heterogeneous enzymatic hydrolysis (15 min), which is based exclusively on the substrate modifying effect of ultrasonication (i.e. particle size reduction and changes in morphology). Otherwise, in ultrasound-aided enzymatic hydrolysis (e.g. for 15 min) not only the positive i.e. the substrate modifying (see above) and mass transfer accelerating effects of ultrasonication prevailed, but also the negative impact, such as a reduction in enzyme activity [4,13,14]. Notwithstanding that several advantageous and disadvantageous processes take place simultaneously in the heterogeneous enzymatic hydrolysis performed in an ultrasonicated environment, the outcome is predominantly positive.
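The benefit attributed to the ultrasonic pretreatment above is simply a relative increase of the ultrasonicated sample over the stirred control at the same time point and enzyme load. The sketch below encodes that comparison; the two concentrations are invented placeholders, not the values of Table S2.

```python
# Hedged sketch: relative increase of reducing sugars, US-H-15 vs MS-15 control.
def relative_increase(c_ultrasonicated, c_control):
    return (c_ultrasonicated - c_control) / c_control * 100.0

c_us, c_ms = 0.90, 0.50   # g/l reducing sugars at the 15 min time point, hypothetical
print(f"US-H-15 vs MS-15: +{relative_increase(c_us, c_ms):.0f} %")
```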
Changes in particle size during the enzymatic hydrolysis
Since the most effective hydrolysis occurred at the highest enzyme load (12.8 FPU/g cellulose, Fig. 7), this process was selected and further investigated to identify the effect of enzyme action on the size and size distribution of particles. Fig. 8a and b show the PSD curves of cotton and linen powders, respectively, before and after the enzymatic hydrolysis. Results prove that (1) the cellulase enzyme treatment affects the particles and changes their average size and PSD, and (2) the changes of particles during the enzymatic digestion are determined by the pretreatment (i.e. ultrasonication or magnetic stirring).
Clearly, a small shift of the PSD curves took place. Interestingly, the direction of the shift depends on the type of pretreatment: for the ultrasonicated samples, the curves were shifted to the right, toward the larger particles; for the control pretreatment, however, the curves were shifted to the left, to smaller particle sizes, by the enzymatic treatment.
The enzyme-treated cotton (Fig. 8a) has a slightly narrower particle size distribution than the initial ultrasonicated sample, indicating the removal of the smallest particles by enzymatic digestion. The shape of the PSD curves of the control sample (MS-15), however, remained unaffected by the enzyme hydrolysis.
The PSD-modifying effect of enzymatic treatment is particularly pronounced in linen powders (Fig. 8b). For the ultrasonicated sample, similar to the cotton counterpart, the number of small particles with the average size of 15 µm slightly decreased, while a less pronounced peak in the region of large particles near 250 µm appeared. This peak with a significantly higher frequency can also be found in the PSD curve of the MS-15 control and MS-15-E60 samples, but in the latter case, due to the enzyme action, the ratio of large particles decreased significantly.
A plausible explanation for the increase of average particle size and the shift of PSD curves toward the higher values in the US-H-15-E60 process can be the fast enzymatic digestion and disappearance of the smallest particles generated by the US-H-15 pretreatment. This simplified assumption is supported by the data presented in Table 2. For the ultrasonicated cotton and linen samples, the average mean values and all the Dv values of the selected portions of particles increased upon the enzymatic digestion.
Previous studies have revealed that, besides ultrasonication, the cellulase enzyme treatment also has a significant impact on particles by disintegrating and fibrillating them and modifying their size [32,33]. Concerning the changes in the size of cellulosic particles during enzyme treatment (Fig. 8), two opposing processes take place in parallel and both affect the particles. The enzyme (1) digests the smaller particles and (2) disintegrates especially the larger ones; the former process leads to the wane or disappearance of the smallest fractions of particles, while the latter enriches the smaller size fractions with fragments of the larger particles. Since the ultrasonicated suspension is rich in small particles, their disappearance or wane by enzymatic digestion outweighs the production of new small particles. This explains the shift of the PSD curves of the US-H-15-E60 suspensions for both fibers toward the larger particle sizes. The control pretreatment, however, did not significantly affect the particle size and PSD, and did not increase the ratio of small particles. Thus, in the control (MS-15-E60) treatment the disintegrating effect of cellulase is the predominant process, contributing to the shift of the PSD curves to the left, toward the smaller particles.
Conclusions
In this research bleached cotton and linen in ground form served as cellulose sources to evaluate (1) the rate of the ultrasound-aided particle size reduction and the accompanying changes in the morphology of cellulosic powders and also (2) the effect of size reduction on the efficiency of the subsequent cellulase enzymatic hydrolysis. Results clearly proved that magnetic stirring (control) only marginally affected the structural properties of the cellulosic powders, and the influence of the ultrasonication bath was also moderate. Treatment with a horn-type reactor, however, generated strong waves and microjets, leading to the significant modification of the particles. The extremely strong shear forces resulting from the treatment led to a reduction of the average particle size, a change in the particle size distribution and an increased accessibility, e.g. due to surface fibrillation. Additionally, a more severe effect of ultrasonication was observed on linen than on cotton. The presence and amount of particles with a smaller particle size (approx. 15 µm for linen) and their increased accessibility had a major effect on the rate and efficiency of the subsequent enzymatic hydrolysis. The enzyme preferentially digested the small particles generated by ultrasonication, resulting in a high concentration of reducing sugars. Furthermore, the enzyme action was able to disintegrate the larger particles. The disappearance of the smaller particles and the relative enrichment of the larger particles by the enzyme action resulted in a modified PSD, as the new size distribution curves shifted to the right towards the larger average particle size regions. In the control suspension, however, the number of small particles was negligible, and the enzyme disintegrating effect was dominant.
Fig. 8. The effect of pretreatments (US-H-15 or MS-15) on the particle size distribution of residual cotton (a) and linen (b) particles measured after the cellulase enzymatic digestion for 60 min (E60) at 12.8 FPU/g cellulase enzyme load.
Table 2. Changes in particle size of cotton and linen powders during a cellulase enzymatic hydrolysis with an enzyme load of 12.8 FPU/g cellulose and for 60 min (E60). The effect of pretreatments such as ultrasonication with a horn-type reactor (US-H-15) or magnetic stirring for 15 min (MS-15, control) on the mean size and Dv10, Dv50 and Dv90 characteristics.
Clearly, in ultrasound-aided enzymatic hydrolysis the applied ultrasonication continuously provides small and highly accessible solid particles for the enzymatic digestion, promoting the enzyme action and contributing to the high overall yield of the hydrolysis. Based on the data obtained from the separated 'ultrasonic pretreatment / enzymatic hydrolysis' system introduced above, it can be concluded that the particle size reducing and accessibility increasing effects of ultrasonication on the solid particles could even result in a 50% and 80% increase in reducing sugar concentration for cotton and linen, respectively (Table S2 in Supplementary data, at the 15 min time point). | 2021-08-19T06:16:37.912Z | 2021-08-11T00:00:00.000 | {
"year": 2021,
"sha1": "f854116583b34a534ef98044b120f0a710f4c9b1",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.ultsonch.2021.105711",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "78f98b1e1eaf8a0d31b1c630c48eadaaef71380e",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
269910692 | pes2o/s2orc | v3-fos-license | Evaluate the Outcome of Labor Induction in Cases of Isolated Oligohydramnios at Term Gestation
Background: Amniotic fluid volume is an important dynamic parameter for the evaluation of fetal well-being. The incidence of isolated oligohydramnios ranges from 0.5% to 5% depending on the target population. Isolated Oligohydramnios (IO) refers to the presence of oligohydramnios without fetal anomalies, without fetal growth restriction, without intrauterine infection, and in the absence of known maternal disease. Methods: A prospective cohort study was conducted at the Department of Obstetrics and Gynaecology, Govt Ranees Hospital Pudukottai, Pudukottai Medical College, Pudukottai over a period of one and a half years (January 2021 to June 2022) among antenatal mothers of singleton term gestation selected on the basis of inclusion and exclusion criteria. This study was done to establish the fetomaternal outcome associated with active induction of labour for isolated oligohydramnios in term gestation. Results: Regarding the mode of delivery in the study population, 120 were delivered by labour natural (59.1%), 50 by assisted delivery (24.6%) and 33 by LSCS (16.3%). Regarding the indications for LSCS, out of 33, 16 were taken for LSCS due to fetal distress, 11 due to non-reactive CTG and the remaining 6 due to failed induction. Regarding type of delivery by parity, 136 were primigravida; among them, 74 delivered by labour natural, 32 by assisted delivery and 30 by LSCS. 67 were multigravida; among them, 46 delivered by labour natural, 18 by assisted delivery and 3 by LSCS. The induction-delivery interval for primigravida was longer than for multigravida in all types of induction. The G + O group delivered faster than F + G, and the G + O group delivered faster than F + O. Differences in neonatal outcome were insignificant. Conclusion: In this study, both maternal and fetal outcomes were similar to those of pregnancies with normal amniotic fluid. Hence early detection of oligohydramnios and its management help in the reduction of perinatal morbidity and mortality and also in decreasing caesarean deliveries.
Introduction
Nature has made a floating bed for the fetus in the form of the amniotic cavity filled with liquor, serving its existence and growth in a sterile environment, regulation of temperature, avoidance of external injury and reduction of the impact of uterine contractions.
OLIGOHYDRAMNIOS is most commonly defined as an amniotic fluid index < 5 cm or the largest vertical pocket measuring < 2 cm. It is considered a risk factor for adverse fetal outcome, as well as an indicator of the possible presence of maternal and fetal comorbidities. Hence the identification of oligohydramnios usually mandates close fetal surveillance.
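For illustration only, the diagnostic cut-offs stated above can be written as a simple rule: flag oligohydramnios when the AFI is below 5 cm or the single deepest vertical pocket is below 2 cm. This is a minimal sketch of the thresholds, not a substitute for sonographic and clinical judgement.

```python
# Hedged sketch of the stated cut-offs (AFI < 5 cm or deepest vertical pocket < 2 cm).
def is_oligohydramnios(afi_cm, deepest_pocket_cm):
    return afi_cm < 5.0 or deepest_pocket_cm < 2.0

print(is_oligohydramnios(afi_cm=4.3, deepest_pocket_cm=2.5))   # True
print(is_oligohydramnios(afi_cm=9.0, deepest_pocket_cm=3.1))   # False
```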
When diagnosed at term, it is commonly considered a solid indication for induction of labour. It is well established that oligohydramnios is associated with increased adverse perinatal outcome, possibly as a result of umbilical cord compression, an associated uteroplacental insufficiency and MSAF. Therefore recommendations for labour induction at term have evolved in an attempt to reduce the incidence of these adverse outcomes.
Early detection of oligohydramnios and its management may help in the reduction of perinatal morbidity and mortality on one side and decrease caesarean deliveries on the other. Hence, oligohydramnios has a significant impact on neonatal outcome and maternal morbidity.
Aim of the study
To evaluate the outcome of active induction of labour for isolated oligohydramnios in term gestation.
Objective of the study
The main objectives of the study are: assessing the need for monitoring pregnancies with isolated oligohydramnios; assessing the maternal outcome in pregnancies with isolated oligohydramnios with respect to conditions such as mode of delivery and after-delivery effects; and assessing the fetal outcome in pregnancies with isolated oligohydramnios using the amniotic fluid index and non-stress test (cardiotocogram) during labour.
Study Design
Prospective cohort study.
Duration of Study
Period of one and a half years (January 2021 to June 2022).
Study Population
Antenatal mothers of singleton term gestation at Govt Ranees Hospital Pudukottai, Pudukottai Medical College, Pudukottai.
Inclusion Criteria: All patients were selected according to the inclusion criteria and informed about the condition, and informed written consent was obtained after explaining the procedure, their AFI, and the absence of adverse effects in the study, with the assurance that their fetus would be monitored at all times and would suffer no adverse effect. Detailed history was elicited and recorded. General examination, systemic examination and obstetric examination were carried out. Investigations including urine routine, Hb, blood grouping and typing, random blood sugar, BT and CT were done. Ultrasonogram was done and documented.
On admission, NST was done for all women. If the NST was found reactive, the patients were induced according to protocol and monitored. All cases were monitored by CTG in labour. If there were any signs of fetal distress, emergency LSCS was done.
After 3 cm dilatation of the cervical os in primigravida and 4 cm dilatation in multigravida, ARM was done and the liquor was classified as clear or meconium stained. Cases with meconium stained liquor were taken for emergency LSCS. As per FOGSI guidelines, failure to achieve regular uterine contractions (every 3 minutes) after completion of one cycle of cervical ripening was taken as failed induction.
All newborns were attended to by a pediatrician.
The various outcome measures recorded were induced vs spontaneous labour, nature of amniotic fluid, FHR tracings, mode of delivery, indication for caesarean section or instrumental delivery, APGAR score at 1 minute and 5 minutes, birth weight, admission to the neonatal ward, and perinatal morbidity and mortality.
Results
Data were entered in an Excel spreadsheet and variables were coded accordingly. The statistical analyses were performed using GraphPad Prism version 5 software. Data were presented as mean with standard deviation for normally distributed/scale data, and as frequency with proportion, n (%), for categorical data. One-way ANOVA with Tukey's post hoc test was used to compare the means of the six groups (Foley, Gel, Oxytocin, Foley + Gel, Gel + Oxytocin, Foley + Gel + Oxytocin). Fisher's exact test was used to compare the frequency distribution of parameters between the groups. p < 0.05 was considered statistically significant. Type of delivery by parity with respect to the type of induction: of the total N = 203, 136 were primigravida; among primigravida, 74 delivered by labour natural, 32 by assisted delivery and 30 by LSCS. 67 were multigravida; among multigravida, 46 delivered by labour natural, 18 by assisted delivery and 3 by LSCS. Comparison of the time interval from induction to delivery in hours between the types of parity with respect to the type of induction: with the double interventions F + G and G + O, p < 0.01, which is significant; with the single interventions gel only and oxytocin only, p < 0.02 and p < 0.045 respectively, which is significant. The induction-delivery interval for primigravida was longer than for multigravida in all types of induction. Data are expressed as n with %. F = Foley; G = Gel and O = Oxytocin for induction. Fisher's exact test was used to compare the frequencies. NS = Not significant.
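As a minimal sketch of the analysis workflow described above (not the study's data), the code below runs a one-way ANOVA across six induction groups, follows it with Tukey's post hoc test, and applies Fisher's exact test to a 2x2 categorical table. The intervals and counts are synthetic placeholders generated only to show how such tests would be called.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic induction-to-delivery intervals (hours) for six groups, illustration only.
rng = np.random.default_rng(0)
group_means = {"Foley": 14, "Gel": 12, "Oxytocin": 11, "F+G": 13, "G+O": 9, "F+G+O": 10}
samples = {name: rng.normal(loc=mean, scale=2.0, size=15) for name, mean in group_means.items()}

# One-way ANOVA across the six induction groups
f_stat, p_anova = stats.f_oneway(*samples.values())
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")

# Tukey's HSD post hoc pairwise comparisons
values = np.concatenate(list(samples.values()))
labels = np.repeat(list(samples.keys()), [len(v) for v in samples.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))

# Fisher's exact test on a 2x2 table, e.g. LSCS vs vaginal delivery by parity (synthetic counts)
odds_ratio, p_fisher = stats.fisher_exact([[30, 106], [3, 64]])
print(f"Fisher's exact: OR={odds_ratio:.2f}, p={p_fisher:.4f}")
```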
Regarding newborn admissions, 22 neonates were admitted to the NICU; the others were not admitted. In the comparison of neonatal outcome between the types of induction performed in the study population, p = 0.241, which is not significant.
Discussion
Amniotic fluid is the protective milieu that nourishes the fetus, and in the process of labour it helps the fetus to maintain the acid-base balance so that it does not go into distress.
With decreasing amniotic fluid the fetus may have increased perinatal risk.
Phelan et al studied amniotic fluid index measurements during pregnancy and concluded that the index is useful for fetal surveillance throughout pregnancy.
The amniotic fluid index is a useful screening test for detecting intrapartum fetal distress and allows early intervention to deliver an infant with a good APGAR score. Colleen and Mark et al studied the impact of amniotic fluid volume assessed intrapartum on perinatal outcome and concluded that it is good in predicting the risk.
In this study, we evaluate the outcome of labour induction in cases of isolated oligohydramnios at term gestation.
In Casey et al, the mean maternal age was 23.9 years, which is comparable to the present study, in which it was 25.2 years.
In Donald D et al., the incidence of oligohydramnios was 60% in primigravidas, which is comparable to the present study, in which it was 67%.
Manzanares S et al showed 84% vaginal deliveries in induced patients with oligohydramnios and 16% caesarean sections.
In this study, 83.7% had vaginal deliveries, of which 59.1% were labour natural and 24.6% were assisted deliveries; 16.3% were caesarean sections.
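The percentages above follow directly from the reported counts (120 labour natural, 50 assisted, 33 LSCS out of 203); the short check below reproduces that arithmetic.

```python
# Verification of the mode-of-delivery proportions reported in the text (n = 203).
deliveries = {"labour natural": 120, "assisted delivery": 50, "LSCS": 33}
total = sum(deliveries.values())
for mode, n in deliveries.items():
    print(f"{mode}: {n}/{total} = {100 * n / total:.1f} %")
# Expected output: 59.1 %, 24.6 %, 16.3 %
```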
In this study, among primigravida, 74 were delivered by labour natural, 32 by assisted delivery and 30 by LSCS. According to a study in J Matern Fetal Neonatal Med, oligohydramnios at term did not influence the outcome of induction of labour in nulliparous women with an unfavourable cervix.
Conclusion
Isolated oligohydramnios is a frequent occurrence and demands intensive fetal surveillance and proper antepartum and intrapartum care.
Since oligohydramnios is often associated with maternal and fetal comorbidities affecting the pregnancy outcome, induction of labour in term pregnancies with IO is common.
IO at term is associated with significantly higher rates of medical intervention and caesarean sections.
Amniotic fluid volume is a predictor of fetal tolerance in labour, and its decrease is associated with an increased risk of abnormal heart rate and meconium stained fluid. Due to intrapartum complications and the high rate of perinatal morbidity and mortality, rates of caesarean section are rising, but the decision between vaginal delivery and caesarean section should be well balanced so that unnecessary maternal morbidity is prevented and, on the other side, timely intervention can reduce perinatal morbidity and mortality.
Figure 1: Vertical cone chart depicting the frequency distribution of age category in years in the study population. The overall mean age was 25.2 years with SD of 3.31 years. The minimum age was 19 and the maximum age was 35 years.
Figure 2: Vertical bar diagram depicting the frequency distribution of type of gravida observed in the study population. N = 203. Data are expressed as n (frequency) with % (proportions). In this study of 203 patients, 136 were primigravida (67%) and the others were multigravida (33%).
Figure 3: Vertical bar diagram depicting the frequency distribution of gestational age in weeks observed in the study population. N = 203. Data are expressed as n (frequency) with % (proportions). The mean gestational age observed was 38.9 weeks with SD of 0.91 weeks.
Figure 4: Pie chart depicting the frequency distribution of AFI values observed in the study population. N = 203. Data are expressed as n (frequency) with % (proportions). The mean AFI value observed was 4.36 with SD of 0.48.
Figure 6: Vertical cone diagram depicting the frequency distribution of type of induction of delivery observed in the study population. N = 203.
Figure 7: Pie chart depicting the frequency distribution of type of delivery performed in the study population. N = 203.
Figure 11: Horizontal bar diagram depicting the comparison of neonatal outcome between the type of induction performed in the study population. N = 203.
Table 1: Comparison of type of delivery between the types of parity with respect to the type of induction.
Table 2: Comparison of time interval from induction to delivery in hours between the types of parity with respect to the type of induction. Data are expressed as mean with SD. Unpaired t-test was used to compare the means between the groups. * indicates p < 0.05, considered statistically significant.
Table 3: Comparison of time interval from induction to delivery (in hours) between the groups with respect to the type of induction. Data are expressed as mean with SD. One-way ANOVA with Tukey's post hoc test was used to compare the variance. * indicates p < 0.05, considered significant. F = Foley; G = Gel and O = Oxytocin for induction.
In this study, 33 mothers were taken up for LSCS; among these, 16 were due to fetal distress and 11 due to non-reactive CTG. According to Krishna Jagatia et al, the most common indications for CS were fetal distress and abnormal heart rate, which is comparable to the present study. In the Eyal Krispin study, prostaglandins were superior to Foley both in successful cervical ripening and in vaginal delivery rates. In this study, neonatal outcomes such as NICU admissions numbered only 22. In AJOG 2019, induction of labour for isolated oligohydramnios had similar perinatal outcomes to otherwise normal pregnancies. | 2024-05-20T15:06:25.079Z | 2023-02-05T00:00:00.000 | {
"year": 2023,
"sha1": "6dc2cbdccabc5b53bf16531d1a8b1103912b128d",
"oa_license": null,
"oa_url": "https://doi.org/10.21275/mr23203235000",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c9142d4317c90e69615db8d11d3647e01931082d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
111099223 | pes2o/s2orc | v3-fos-license | An Exploratory Study on Entry Level HRM Job Opportunities and Their Requirements in Sri Lanka
The objectives of this study are to explore the types, categories and requirements of entry-level HRM job opportunities available in Sri Lanka. In order to achieve the objectives, an exploratory study of all the HRM-related employment advertisements that appeared in the Sunday Observer newspaper over a period of 3 years (2006, 2007 and 2008) was conducted by using comprehensive and systematic content analysis along with descriptive statistics. The results revealed that a small number of entry-level options are available in Sri Lanka. These options can be grouped into two broad classifications: generalized and specialized ways of entry. Sri Lankan organizations have a practice of recruiting young people as Management Trainees (MT) to fill current or future job vacancies. In connection with entry-level HRM jobs, MT has been identified as one of the main entry options in Sri Lanka. Based on the needed educational qualifications, the MT option is classified into eight categories. In the case of MT, a placement process and conditions were identified. From entry-level job seekers, organizations expect twenty-nine skills and eight types of personal qualities. The implications of these findings are useful to all stakeholders of HRM, especially current and potential job seekers, organizations and HRM workforce developers in Sri Lanka.
Introduction
Entry-level jobs are the primary jobs which persons will have to hold in order to enter any field of work. Currently many universities and other educational and professional institutes are offering specialization degree, diploma and certificate programs, and professional courses in HRM in Sri Lanka. Business organizations also advertise various entry-level jobs in HRM in order to attract a young HRM workforce or HRM job seekers from the labour market. In this context, it is important to explore this phenomenon systematically in order to provide empirical knowledge to the stakeholders of HRM.
These are the specific entry level jobs (employment opportunities) available in the field of HRM in Sri Lanka (see Table 1).The above Table 1 provides the answers for what and how many entry level jobs are available in the field of HRM in Sri Lanka annually.There were 118, 247 and 275 job vacancies in 2006, 2007 and 2008 respectively showing an upward trend of increasing.Even though there were 23 entry level HRM jobs identified according to this study, not all jobs are very frequently demanded by the organizations.
Out of these 23 entry-level HRM jobs, only a very few were frequently demanded by the organizations, and they include Management Trainee, Human Resource Assistant and Personnel Clerk. The most frequently demanded entry-level HRM job is Management Trainee (the total of the four types is 292). The next most frequently demanded entry-level HRM job is Human Resource Assistant/Staff Assistant, followed by Human Resources/Personnel Clerk.
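The frequency counts reported above come from a simple content-analysis tally of job titles across the scanned advertisements. The sketch below illustrates such a tally; the advertisement list is a made-up example, not the newspaper data set used in the study.

```python
from collections import Counter

# Hedged sketch: tallying entry-level HRM job titles across advertisements (example data).
ads = [
    "Management Trainee", "Human Resource Assistant", "Management Trainee",
    "Personnel Clerk", "Management Trainee", "Human Resource Assistant",
]
counts = Counter(ads)
for title, n in counts.most_common():
    print(f"{title}: {n}")
```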
Categorization of Entry Level HRM Jobs and their Requirements
In Sri Lanka, several entry level options are available and these options can be grouped into two broad classifications such as generalized way of entry options (e.g.Management Trainee, HR Assistant and HR Clerk) and specialized way of entry options (Training Assistant, Recruitment Assistant, Recruitment Clerk etc.) (See Figure 1).
Management Trainee (MT)
In Sri Lanka, organizations have a practice of selecting young people (with or without experience) as management trainees to fill job vacancies (current or future) after giving them adequate training. After a comprehensive analysis of the collected data, it was possible to find the number of job advertisements under each type of entry-level job per industry. Table 3 gives the number of job advertisements by industry. It indicates that Manufacturing, Trading and Retailing is the industry that had the highest number of job advertisements. The second is Banking, Finance, Insurance and Leasing, and the third is Consultancy/HR/Professional Services and Education.
Table 4 gives the number of job advertisements under each type of entry-level job as per type of ownership of the firms. According to the Table, Sri Lankan Private Sector organizations had the highest number of job vacancies. The next highest number was in Foreign Companies and then Multi National Companies. One implication is that it is very important to let Sri Lankan Private Sector organizations grow with Government support so that they expand job opportunities. As far as the experience requirement is concerned, for entry-level options most of the organizations do not expect experience from the applicants. They expect to give their own training to mould the trainees into their specific requirements. However, there are some organizations that have specified 6-12 or 12-24 months of experience in their job advertisements. A reason is that those organizations are keen on attracting a trained or well-prepared young workforce which has already been developed/prepared by other firms/organizations (competitors/non-competitors to their organizations). The general implication is that it will be an advantage if the person who expects to enter the field has some training or exposure to the real world of HRM work for a period of about six months or one year. The most frequently required age is 30 or below. There were 70 job advertisements which had no specified age limit. However, it seems that organizations recruit a person who is young, not matured or old, to work in the capacity of Management Trainee, who is supposed to take the business into the future as a future manager.
People who get university admission will take four years of study (for special degrees), and if there is no system delay or personal delay they can generally complete their studies at the age of 23 or 24 years in Sri Lanka, after which they will be ready to be absorbed by the industry. For a non-graduate management trainee (common) and a non-graduate management trainee in HR/HRM, most of the organizations expect 25 or below as the age limit. For a graduate management trainee (common) and a graduate management trainee in HR/HRM, most of the organizations expect 28-30 as the age limit. An important implication is that an undergraduate has to become a graduate before the age of 30 to enter as a Management Trainee in Sri Lanka. It is possible to see a gender bias for some jobs in certain industries. As far as HRM entry-level jobs are concerned, there seems to be no gender bias. The above Table shows the gender pattern of Management Trainees in Sri Lanka. There were job advertisements exclusively for males and exclusively for females, but the numbers were small, 08 and 03 respectively, within the three years under study. 149 job advertisements were not gender biased, implying that organisations do not discriminate against candidates in terms of gender, which is a good practice in Sri Lanka where a large number of females are studying management-related educational and professional programmes.
Table 7 presents the nature of place preference and Management Trainee. Out of 160 job advertisements, 48 advertisements (about 30%) specified a place preference. Thus many organizations do not specify place preference when hiring people for MT, considering the whole of Sri Lanka as the employee market. However, there are a considerable number of organizations which expect candidates to be living permanently close to their company/factory/workplace in order to be eligible to apply for management trainee vacancies. Explicitly, they mentioned within 20 km (radius) of their workplace, in order to reduce travelling problems, late attendance and imbalance between work life and family life. For any job, pay is an important factor in attracting the best/appropriate people to fill vacancies in an organization. The above Table 8 was constructed by using the available data about pay/allowance and MT. During the training and development period organizations have a practice of rewarding trainees with reasonable or generous training allowances or payments. In Sri Lanka this may vary from Rs. 15,000 to 50,000 depending on the size, financial strength and market share of the organization and its industry norms/standards. The next section will present the placement process and conditions for MT.
6. To observe and record trainees' performance and involvement in all the areas during the transformation/training period / To observe and record trainees' performance and involvement in that specific/particular functional area (e.g. HRM) during the transformation/training period.
7. Finally (if the organization is satisfied), assign/appoint to a suitable/best-fit division/functional field (specialized) based on their performance during the T&D period and observed interest / Finally (if the organization is satisfied), assign/appoint to the specific functional field (for which they were initially selected) based on their performance during the T&D period.
Period of T & D Programme
After selection, candidates selected as management trainees in HRM or related areas have to undergo a period of one (01) or two (02) years of a comprehensive training and development programme covering all aspects, based on the organization's needs.
Training Period/Duration
As per Table 9, 104 organizations (out of 160) did not specify training and its time frame; 22 organizations specified training but did not mention its duration; and 18 organizations specified training of one year's duration.
Contents of T & D Programme
The content of this comprehensive training and development programme consists of several components; Exhibit 2 presents them.
If a candidate fails to complete his/her training and development programme successfully, his or her training and development period may be extended or terminated by the organization. Figure 2 was developed to show the placement process and conditions of management trainees in Sri Lanka. The expected personal qualities (counted by the number of times they were mentioned in the job advertisements) were grouped under eight (08) types of major qualities/attributes: leadership, performance achievement, attitude, personality, good character, physical character, team orientation and business related. Accordingly, dynamic, young, challenge-enabled/willing to take challenges, self-motivated/relentless self-motivation and team player/excellent team player are the top five key personal qualities/characteristics/attributes expected by the organizations. Regarding expected personal qualities/attributes, the highest count indicates the most demanded attribute; for example, dynamism is the most demanded trait under the type of leadership quality. According to management expert Bruce Kaufman (as in Kleiman, 1997), companies want entry-level HRM candidates to possess:
• Leadership and management skills;
• Cross-functional HRM expertise (for generalist positions);
• Technical skills (e.g., computers, management information systems);
• Knowledge of international HRM issues; and
• Knowledge of business basics (i.e., accounting, finance, marketing, management, and economics).
Therefore, based on Bruce Kaufman's account, it can be concluded that these are some of the key attributes for candidates seeking entry-level jobs in the HRM field.
Based on this study, in the Sri Lankan context, out of 29 skills, communication skills, computer literacy/IT skills, interpersonal skills, analytical skills, human relations skills, leadership skills, ability to work long hours, public relation skills, presentation skills and ability to work independently are the top ten skills companies expect from entry-level HRM candidates.
In Sri Lanka, cross-functional HRM expertise (for generalist positions), knowledge of international HRM issues and knowledge of business basics (i.e., accounting, finance, marketing, management, and economics), which were mentioned by Bruce Kaufman, are normally included in the HRM specialised degrees offered by the HRM departments of the leading Sri Lankan universities. Therefore, there were no significant differences between the results of this study and the view of Bruce Kaufman.
Apart from Management Trainees, HR Assistant and HR Clerk can also be considered generalized entry options in the field of HRM in Sri Lanka, because some organizations expect to fill these HRM job vacancies with or without experience.
HR Assistant
HR Assistant is one of the most frequent HRM job opportunities in Sri Lanka. In addition to certain HRM qualifications (such as certificate/diploma-level qualifications), G.C.E. (O/L) or G.C.E. (A/L) qualifications are also needed to secure HR Assistant job opportunities in Sri Lanka. After entry, candidates are able to move to higher levels of the HRM career ladder by expanding their HRM educational qualifications, in addition to gaining experience and showing enthusiasm in the HRM field.
HR Clerk
Like the HR Assistant, HR Clerk is also an operational-level as well as entry-level HRM job option in Sri Lanka. Depending on job performance and enthusiasm in the HRM field/career, he/she is able to climb the HRM career ladder step by step by enhancing his/her job competencies and education (see Table 19). In addition, certain job advertisements for HR Executives in Sri Lanka accept appropriate candidates without experience; they only expect the required knowledge supported by recognized HRM education. These types of opportunities can also be treated as generalized entry options in the field of HRM in Sri Lanka.
II. Specialized way of entry options
In Sri Lanka, there are a few job titles in entry-level specialization areas of HRM such as training and development, recruitment, pay management, etc. Some of these can be identified as entry-level HRM jobs in those specialized areas. Interested candidates can join those areas by substantiating their specific and formal HRM qualifications, by showing some exposure or previous volunteer work/temporary assignments, or by completing relevant academic/professional courses in those areas. Some of these specialized entry options are listed in Exhibit 4. A model is presented to answer the question: how does one enter the HRM profession? Out of the 10 possible entry options shown in the Figure, this study mainly dealt with the first one, i.e. entry due to an HRM-related educational background (e.g. a degree in HRM or a postgraduate qualification in HRM). The results of this study confirmed that an HRM-related educational background (degree or related) leads to generalized and specialised entry options into the HRM field in Sri Lanka. Within the generalized entry options, the Management Trainee option (graduate or non-graduate) is one of the leading options found in this study in the Sri Lankan context.
Figure 1 Types of Entry-Level Jobs in HRM
Figure 2 Placement Process and Conditions: Management Trainee
Figure 3 Possible Ways of Entry Options into the HRM Field
Table 5 presents the age limits required for Management Trainees. Accordingly, the minimum age for a person to apply for the post of Management Trainee seems to be 21 and the maximum age 35.
Table 18 (number of times mentioned in the job ads): From the collected data, 29 types of abilities/skills/competencies were identified, and they are indicated in Table 18. According to the Table, the most highly demanded competence is communication skills; IT skills, interpersonal skills and analytical skills are other very important competency requirements. Using content analysis, lists of competencies were developed for all types of management trainee. After studying those 29 types of abilities/skills/competencies, the top six competencies expected by organizations from a management trainee are communication skills, computer literacy/IT skills, interpersonal skills, analytical skills, human relations skills and leadership skills. Individuals who plan to enter the HRM field must develop the types of abilities/skills/competencies which are valued by organizations/employers in Sri Lanka. | 2018-12-12T20:17:23.148Z | 2013-01-04T00:00:00.000 | {
"year": 2013,
"sha1": "0b3c669aa0fe43b8409c3262e72966ac523695cf",
"oa_license": "CCBY",
"oa_url": "http://sljhrm.sljol.info/articles/10.4038/sljhrm.v3i1.5098/galley/4066/download/",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "0b3c669aa0fe43b8409c3262e72966ac523695cf",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Engineering"
]
} |
259812239 | pes2o/s2orc | v3-fos-license | Knockdown of VAMP8 attenuates atherosclerosis and enhances the effect of simvastatin in APOE-deficient mice
Purpose: To investigate the regulatory effects of vesicle-associated membrane protein 8 (VAMP8) in atherosclerosis (AS). Methods: VAMP8 expression was assessed using quantitative real time-polymerase chain reaction (qRT-PCR) and western blot, while H&E staining was used to examine the morphology of arterial glandular tissues in AS mice. Lipid accumulation in mice was determined with the aid of Oil Red O staining, whereas the apoptosis of aortic cells was evaluated by TUNEL assay. Results: VAMP8 was highly expressed in the advanced-stage AS samples, and was also elevated in AS mice (p < 0.01). VAMP8 protein level rose in AS mice (p < 0.01). Moreover, the aorta showed atherosclerotic lesions with intima thickening and atherosclerotic plaques in AS group (p < 0.01). However, these changes were alleviated in VAMP8-silenced group (p < 0.01). VAMP8 silencing decreased lipid accumulation and alleviated inflammation and oxidative stress in AS mice (p < 0.01). It was observed that depletion of VAMP8 reduced aortic cell apoptosis in AS mice (p < 0.05). Furthermore, VAMP8 knockdown enhanced the effect of simvastatin on atherosclerosis (p < 0.01). Conclusion: Knockdown of VAMP8 alleviates AS in ApoE-deficient mice. This finding suggests that this might be a potential strategy for the prevention and treatment of AS.
INTRODUCTION
As one of the major causes of various cardiovascular diseases, atherosclerosis (AS) exhibits high morbidity and mortality all over the world [1]. Previously, the pathogenesis of AS has been widely studied. Some researchers have proposed that the progression of AS might be due to the disordered proliferation of vascular smooth muscle cells (VSMCs) and endothelial cells [2]. Other studies have demonstrated that increased levels of blood lipids, including cholesterol and low-density lipoprotein (LDL), as well as the accumulation of oxidized low-density lipoprotein (ox-LDL), are involved in the aggravation of AS [3,4]. Due to the increased levels of plasma LDL, oxygen free radicals are produced, which then oxidize LDL to ox-LDL [5]. Although some studies have investigated the pathogenesis of AS, the detailed pathological process of AS, including ox-LDL accumulation, inflammation, oxidative stress and apoptosis of aortic cells, remains unclear. Identifying more effective biomarkers for AS is of essential importance in the management and prevention of AS. Vesicle-associated membrane protein 8 (VAMP8) is one of the soluble N-ethylmaleimide-sensitive factor attachment protein receptor (SNARE) proteins, and is a vital component responsible for the release of secretory granules in exocrine glands [6]. Importantly, VAMP8 has been identified as the first SNARE to participate in the secretion of platelet granules [7]. Additionally, VAMP8 is upregulated in the phenotypes of hyper-reactive platelets [8].
In previous studies, VAMP8 was shown to be involved in a variety of biological processes and diseases. Although growing numbers of studies have focused on investigating the function of VAMP8 in different diseases, it is not clear whether it is implicated in the development of AS.
In this study, the roles of VAMP8 in AS were investigated, and the effect of knockdown of VAMP8 on AS in ApoE-deficient mice was assessed.
ApoE mice model of AS
The ApoE (−/−) male mice (6-8 weeks old, n = 12) were purchased from Beijing Weitong Lihua Laboratory Animal Technology Company (Beijing, China). All mice were fed for 7 days. Six mice were then fed a high-lipid chow (1.25% cholesterol and 15% fat) to induce the AS model, and the other six mice were fed regular chow. Serum lipid levels were measured each month from orbital blood samples drawn from the mice. Three months later, the mice were painlessly sacrificed and their aortae isolated, frozen in liquid nitrogen or fixed in 4% formalin and embedded in paraffin prior to use. This work was approved by the Second Affiliated Hospital of Guizhou University of Traditional Chinese Medicine (approval no. 20210310) and was conducted in accordance with the U.S. National Institutes of Health Guidelines for the Care and Use of Laboratory Animals [9].
Terminal deoxynucleotidyl transferase dUTP Nick end labeling (TUNEL) assay
The aortic tissue sections were incubated with TUNEL reaction mixture at 37 °C for 60 min in a humidified dark room. The frozen sections were then subjected to DAPI staining. The DAPI- and TUNEL-positive nuclei were photographed using an Olympus DP50-CU digital camera (Tokyo, Japan), and the images were examined. The number of TUNEL-positive nuclei was expressed as a percentage of the total cell number in the lesion for each mouse.
Hematoxylin and eosin (H&E) staining
H&E staining was performed to explore the pathological characteristics of AS lesions. The specimens were cut into 3-μm serial sections and then subjected to hematoxylin and eosin staining. A light microscope was used to examine the H&E-stained specimens to find lesions and to evaluate lesion location and pathological features.
Quantitative real-time polymerase chain reaction analysis (qRT-PCR)
TRIzol reagent (Invitrogen, USA) was utilized to isolate RNA from the arterial tissues of mice, and the PrimeScript RT Reagent Kit (Takara, Dalian, China) was used to reverse-transcribe RNA into cDNA. The cDNA served as template, and RT-qPCR analysis was conducted using the SYBR Premix Ex Taq™ II commercial kit (Takara). The relative expression of VAMP8 was calculated using the 2^−ΔΔCt method, with GAPDH serving as an internal reference.
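As a worked illustration of the 2^−ΔΔCt calculation mentioned above, the following minimal sketch computes a relative expression value from hypothetical Ct values; the gene names and numbers are placeholders, not data from this study.

```python
# Sketch of the 2^-(ΔΔCt) relative-expression calculation; all Ct values below are
# hypothetical placeholders, not measurements reported in the study.

def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Fold change of a target gene (e.g. VAMP8) in a sample vs. control,
    normalised to a reference gene (e.g. GAPDH), using 2^-(ΔΔCt)."""
    delta_ct_sample = ct_target_sample - ct_ref_sample      # ΔCt in the AS sample
    delta_ct_control = ct_target_control - ct_ref_control   # ΔCt in the control sample
    delta_delta_ct = delta_ct_sample - delta_ct_control     # ΔΔCt
    return 2 ** (-delta_delta_ct)

# Example: the target amplifies two cycles earlier (relative to GAPDH) in the AS sample.
print(relative_expression(24.0, 18.0, 26.0, 18.0))  # -> 4.0, i.e. ~4-fold up-regulation
```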
Oil Red O staining
Intact aortas and adventitial fat were extracted from the AS mice and treated with paraformaldehyde solution for over 24 h. After washing with PBS, the tissues were unfolded along the longitudinal axis with ophthalmic dissection scissors, stained with saturated Oil Red O for 15 min, and differentiated in 75% alcohol three times. An optical microscope (Nikon Eclipse E100) was used to examine the images.
Statistical analysis
SPSS 21.0 software (SPSS Inc., Chicago, IL) was employed for statistical analysis. All data are presented as mean ± standard deviation (SD, n = 3). Student's t test was used for comparisons between two groups, while one-way ANOVA was used for comparisons among multiple groups. P < 0.05 was considered statistically significant.
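As a hedged illustration of the comparisons described (Student's t test for two groups, one-way ANOVA for more than two), the sketch below uses SciPy with made-up values; the arrays do not correspond to any data reported in the study.

```python
# Illustrative two-group and multi-group comparisons; the numbers are invented.
from scipy import stats

control    = [1.0, 1.1, 0.9]   # e.g. relative VAMP8 expression, control group
as_group   = [2.4, 2.1, 2.6]   # e.g. AS group
as_shvamp8 = [1.4, 1.5, 1.3]   # e.g. AS + sh-VAMP8 group

t_stat, p_two   = stats.ttest_ind(control, as_group)             # Student's t test
f_stat, p_multi = stats.f_oneway(control, as_group, as_shvamp8)  # one-way ANOVA

print(f"t test p = {p_two:.4f}; ANOVA p = {p_multi:.4f}")        # significant if p < 0.05
```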
VAMP8 was highly expressed in advanced-stage AS
To identify genes that might be associated with the development of AS, the R language was used to analyze the differentially expressed genes between early-stage AS and advanced-stage AS in the dataset. It was found that 23 genes were down-regulated, while 154 genes were up-regulated (Figure 1 a). A heat map was drawn for the top 30 differential genes (GSM714070-73 were the advanced-stage AS samples, while GSM714086-89 were the early-stage AS samples), and it was also found that VAMP8 was highly expressed in the advanced-stage AS samples (Figure 1 b).
VAMP8 was highly expressed in AS mice
The Gene Set Enrichment Analysis data revealed that all differential genes were mainly enriched in inflammation, TYROBP causal network in microglia, regulation of cell activation, and innate immunity (Figures 2 a -c). Quantitative RT-PCR was applied to measure the expression of VAMP8 in AS mice, and the results revealed that VAMP8 expression was elevated in AS mice (Figure 2 d). The protein level of VAMP8 was also enhanced in AS mice (Figure 2 e). On the whole, VAMP8 was highly expressed in AS.
VAMP8 knockdown improved pathological conditions in AS mice
The data from western blot analysis revealed that the transfection of sh-VAMP8 in AS mice decreased VAMP8 expression (Figure 3 a). Pathological analysis of arterial glandular tissues in AS mice also revealed that the intima of the aorta was not thick and there was no stenosis in the lumen in the control group, while the aorta had obvious atherosclerotic lesions with intima thickening and atherosclerotic plaques in the AS group, but these changes were alleviated in the AS+sh-VAMP8 silencing group (Figure 3 b). VAMP8 knockdown mitigated the pathological conditions of AS mice.
VAMP8 silencing decreased lipid accumulation in AS mice
The results of Oil Red O staining revealed that the atherosclerotic plaques were localized in the aortic arch and around the branched regions of the aorta in the AS mice, while VAMP8 downregulation reduced the atherosclerotic plaque area (Figure 4 a). In addition, the elevated TC, TG and LDL levels and the reduced HDL level in AS mice, measured by ELISA, were reversed after VAMP8 knockdown (Figure 4 b; **p < 0.01). Thus, inhibition of VAMP8 reduced lipid accumulation in AS mice.
VAMP8 deficiency alleviated inflammation and oxidative stress in AS mice
The results of ELISA revealed that inflammatory indices (IL-6, TNF-α and IL-1β concentrations) were elevated in AS mice, but VAMP8 silencing ameliorated these effects (Figure 5 a). In addition, the increased concentration of MDA and the reduced concentrations of SOD and GSH-Px were largely offset by VAMP8 downregulation in AS (Figure 5 b). Overall, VAMP8 deficiency alleviated inflammation and oxidative stress in AS mice.
Depletion of VAMP8 reduced aortic cell apoptosis in AS mice
In the AS mice, it was observed that aortic cell apoptosis was elevated, but this effect was counteracted by VAMP8 attenuation (Figure 6 a). In addition, VAMP8 suppression decreased the elevated levels of Bax, cleaved caspase-3 and cleaved PARP proteins (Figure 6 b). These results indicated that depletion of VAMP8 reduced aortic cell apoptosis in AS mice.
VAMP8 knockdown enhanced the effect of simvastatin on atherosclerosis
VAMP8 knockdown decreased TC, TG and LDL levels and increased the HDL level in the AS + Simva group, and this effect was more pronounced in the AS + sh-VAMP8 + Simva group (Figure 7 a). Moreover, the reduced concentrations of IL-6, TNF-α and IL-1β in the AS + Simva group were further decreased after VAMP8 knockdown (Figure 7 b). Thus, VAMP8 knockdown enhanced the effect of simvastatin on atherosclerosis.
DISCUSSION
Atherosclerosis (AS) is a serious and prevalent arterial inflammatory disease with high morbidity and mortality [10], and has become the second major cause of death, as well as the pathophysiological basis for cardiovascular diseases [11]. The main lesion of AS is lipid deposition in parts of the artery, together with the proliferation of VSMCs and fibrous matrix, gradually developing into atherosclerotic plaque formation. At present, the pathogenesis of AS is not fully understood. A growing number of studies have established AS animal models using apolipoprotein E (ApoE) (−/−) mice [12]. In addition, multiple studies have been performed using this mouse model. For example, in high-fat diet-fed ApoE (−/−) mice, butter-derived ruminant trans fatty acids had no impact on the alleviation of AS lesions [13]. In ApoE (−/−)-deficient mice, proteins extracted from housefly (Musca domestica) maggots suppressed the development of AS via the production of bile acids and the consumption of cholesterol. In the spleen of ApoE-deficient mice, a high-fat diet increases the level of splenic NOD1, the recruitment of neutrophils and the release of neutrophil extracellular traps [14]. Therefore, the ApoE-deficient mouse model is a reliable animal model for AS. Earlier studies found that numerous proteins are involved in the progression of AS. For instance, MFN2 suppression promotes vascular calcification via modulation of the RAS-RAF-ERK1/2 pathway and participates in the development of AS [15]. Downregulation of methyltransferase-like 14 inhibits AS development by regulating the m6A modification of p65 [16]. KLF7 modulates the HDAC4/miR-148b-3p/NCOR1 pathway to reduce AS lesions and relieve glucose metabolic reprogramming in macrophages [17]. Although VAMP8 has been reported to be implicated in the type I interferon antiviral response, venous thrombosis and tumor progression, whether VAMP8 is involved in AS required further investigation. In the current study, bioinformatic analysis revealed that VAMP8 expression was enhanced in AS samples. In addition, VAMP8 was found to be highly expressed in the tissues of the AS mouse model. Furthermore, silencing of VAMP8 alleviated the pathological conditions of the AS mice, and suppression of VAMP8 reduced lipid accumulation in AS mice. VAMP8 knockdown was also found to decrease inflammation and oxidative stress in AS mice, and VAMP8 deficiency reduced aortic cell apoptosis in AS mice. Finally, VAMP8 knockdown enhanced the effect of simvastatin on the progression of AS.
CONCLUSION
The findings of this study show that VAMP8 silencing alleviates AS in ApoE-deficient mice, and this highlights the role and function of VAMP8 in AS. However, investigations on VAMP8 in AS progression are required for deeper understanding of the processes. | 2023-07-12T16:05:41.162Z | 2023-06-06T00:00:00.000 | {
"year": 2023,
"sha1": "9be83009812af0b9dc0793624c858c524158b393",
"oa_license": "CCBY",
"oa_url": "https://www.ajol.info/index.php/tjpr/article/download/248932/235397",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8284c7b0bc42defcf0f4b5bd0cacb95980f1b78a",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
1993994 | pes2o/s2orc | v3-fos-license | The use of counting beads to improve the classification of fast breathing in low-resource settings: a multi-country review
To decrease child mortality due to common but life-threatening illnesses, community health workers (CHWs) are trained to assess, classify and treat sick children. For pneumonia, CHWs are trained to count the respiratory rate of a child with cough and/or difficulty breathing, and determine whether the child has fast breathing or not based on how the child’s breath count relates to age-specific respiratory rate cut-off points. International organizations training CHWs to classify fast breathing realized that many of them faced challenges counting and determining how the respiratory rate relates to age-specific cut-off points. Counting beads were designed to overcome these challenges. This article presents findings from different studies on the utility of these beads, in conjunction with a timer, as a tool to improve classification of fast breathing. Studies conducted by the International Rescue Committee and Save the Children among illiterate CHWs assessed the effectiveness of counting beads to improve both counting and classifying respiratory rate against age-specific cut-off points. These studies found that the use of counting beads enabled and improved the assessment and classification of fast breathing. However, a Malaria Consortium study found that the use of counting beads decreased the accuracy of counting breaths among literate CHWs. Qualitative findings from these studies and two additional studies by UNICEF suggest that the design of the beads is crucial: beads should move comfortably, and a separate bead string, with colour coding, is required for the age groups with different cut-off thresholds—eliminating more complicated calculations. Further research, using standardized protocols and gold standard comparisons, is needed to understand the accuracy of beads in comparison to other tools used for classifying pneumonia, which CHWs benefit most from each different tool (i.e. disaggregating data by levels of literacy and numeracy) and what the impact is on improving appropriate treatment for pneumonia.
Introduction
Because of the overall complexity of diagnosis, the still staggering mortality, lack of diagnostic aids and the growing problem of antibiotic resistance for pneumonia, there is an urgent need for more robust data on tools for pneumonia diagnosis. Pneumonia is the leading cause of childhood mortality, accounting for 17% of global deaths of children under five (UNICEF 2013). Timely recognition of pneumonia signs and symptoms, appropriate care seeking and access to antibiotic treatment can prevent many of these deaths. The Integrated Management for Childhood Illness (IMCI) protocol for 'Caring for Newborns and Children in the Community' guides community health workers (CHWs) to assess fast breathing as an indicator of non-severe pneumonia in children with cough and/or difficulty in breathing (WHO and UNICEF 2011). There are two steps in detecting whether a child has fast breathing: (1) a CHW needs to visually count a child's breath for 1 min and (2) the CHW has to determine how the child's breath count relates to age-specific respiratory cut-off points (WHO and UNICEF 2011). International organizations training CHWs with various degrees of literacy and numeracy realized that many of them faced challenges in counting and relating the breath count to age-specific cut-off points. However, accurate assessment of fast breathing is crucial as selected children [children 2-11 months of age with a respiratory rate (RR) of 50 or more breaths/min and children 12-59 months of age with a RR of 40 or more] are classified as having pneumonia based on their breathing rate and require immediate treatment with antibiotics (Pio 2003; WHO and UNICEF 2005, 2011).
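The age-specific decision rule described above is simple enough to express directly in code. The following is a minimal illustrative sketch (not part of any of the reviewed studies) of how the IMCI fast-breathing classification maps an age in months and a 1-min breath count to a classification.

```python
# Minimal sketch of the IMCI fast-breathing rule described above:
# 2-11 months  -> fast breathing if >= 50 breaths/min;
# 12-59 months -> fast breathing if >= 40 breaths/min.

def has_fast_breathing(age_months: int, breaths_per_minute: int) -> bool:
    if 2 <= age_months <= 11:
        return breaths_per_minute >= 50
    if 12 <= age_months <= 59:
        return breaths_per_minute >= 40
    raise ValueError("This rule applies only to children aged 2-59 months")

print(has_fast_breathing(8, 52))   # True  -> classify as fast breathing
print(has_fast_breathing(24, 38))  # False -> no fast breathing
```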
Watches and timers have been used as timing aids to facilitate 1-min RR counting. Box 1 shows an example of an acute respiratory infection (ARI) timer that is distributed by the United Nations Children Fund (UNICEF). Until now, there is limited evidence on counting devices and other affordable tools to help CHWs in resource-poor settings improve classification of fast breathing. One study evaluating the effectiveness of an abacus with a built-in sandglass concluded that CHWs were better able to correctly classify fast breathing with the breath counter (Bang and Bang 1992).
The potential of the use of counting devices, such as beads, is unknown due to lack of information regarding their effectiveness and utility. However, several organizations [including International Rescue Committee (IRC), Save the Children, Malaria Consortium, UNICEF and Population Services International (PSI)] have conducted small-scale studies among CHWs with differing levels of literacy and numeracy, using counting beads, which contribute to the knowledge base.
In this review, we compile the current evidence base of the effectiveness of counting beads to assess and classify breathing rates to guide pneumonia diagnosis. These findings are essential to further guide integrated Community Case Management (iCCM) programming aiming to decrease pneumonia deaths in young children.
Methods
The findings presented are based on studies conducted by IRC, Save the Children, Malaria Consortium and UNICEF to improve iCCM programming in South Sudan, Uganda and Ghana. All CHWs in these studies were trained in iCCM using the World Health Organization (WHO) and UNICEF IMCI 'Caring for Newborns and Children in the Community' protocol. CHWs are named differently in various countries, e.g. in South Sudan they are called community-based distributors; however, in this article we refer to them as CHWs.
Box 1 The ARI timer: The ARI timer makes a ticking sound every second and has an alarm after 30 s as well as a final alarm to inform the user that 1 min has passed. The user must press the button to start the 1-min timing, during which a child's breath is counted.
We compiled all research findings on the use of counting beads within these programmes. Although most findings presented are part of larger research initiatives, in this review we focused on the following questions: What is known regarding primarily illiterate CHWs' ability to assess and classify fast breathing without the use of counting beads? Does the use of beads improve the ability of CHWs, particularly those with limited or no literacy and numeracy, to correctly classify fast breathing (hence, including tracking the breaths and classifying the breath count based on IMCI age-specific RR cut-off points)? Does the use of beads improve the ability of literate CHWs to correctly assess the breath count? What are CHWs' perceptions of these tools, and does this differ by literacy level?
A brief description of the studies is provided below and a summary of the key methodological elements of the different studies can be found in Table 1. All studies used the ARI timer explained in Box 1, and the design of beads used by the various organizations can be found in Box 2. The results are separated by qualitative and quantitative research methods, as well as by organization. In addition, anecdotal evidence from two rapid assessments, from PSI in Democratic Republic of Congo and from IRC in Sierra Leone, is not included in the methods or findings, but is used to strengthen the discussion.
IRC study in Uganda and South Sudan
As part of a larger assessment evaluating the quality of care provided by CHWs, IRC compared the CHWs' ability to assess fast breathing using the ARI timer alone vs using the timer and beads. The geographical areas (payams) were selected on the basis of their geographical accessibility to the evaluation team and recent start-up of the programme.
First, CHWs were asked to describe the cut-off points for fast breathing for the two age groups (2-11 and 12-59 months) and to count the child's RR using the ARI timer. A trained clinician who was part of the evaluation team counted simultaneously with the CHW. The RR from the clinician was written down and the CHW was asked to say his/her count. Afterwards, CHWs were given the right counting bead string for the age of the child being assessed and were shown how to move their fingers along the beads and to stop when the timer beeped. Then CHWs were asked to repeat the assessment with the same child using the timer and the beads. The clinician used the counting beads simultaneously with, but out of sight of, the CHW. The beads were counted back and recorded for both the CHW and the trained clinician. Microsoft Excel was used to analyse the data.
Save the Children study in South Sudan
The use of beads by CHWs with limited numeracy was part of an operational research lead by Save the Children. The research was conducted to assess the effectiveness of simulation-based training of CHWs using video technology.
During the training, CHWs were shown video clips of cases with danger signs (including case scenarios of malaria, pneumonia and diarrhoea) and the interactions between CHWs and caretakers. The video also showed the assessment, classification, treatment and advice on home care. CHWs were given one set (two strings) of beads to count and classify the breath count along with the ARI timer acting as a stop watch. CHWs were taken to a hospital outpatient department for post-training skills assessment, where they assessed sick children (40 out of the 69 cases had a history of cough). Each CHW managed (assessing, classifying and deciding to treat) two to three sick children aged 2-59 months. Each CHW also completed a knowledge questionnaire. As these particular CHWs were illiterate, one clinician trained in IMCI observed the CHWs' management of sick children and recorded the assessment, classification and treatment findings on a study form. A senior evaluator who was also an IMCI trainer independently assessed the children at the same time and recorded his findings on a similar form. The RR was measured for all 69 children by the CHWs and the evaluator. To assess fast breathing the CHW had to choose the correct bead string for the age group. The observer marked the case as having fast breathing if the CHW reached the red beads (fast breathing for 2-to 11-month and 12-to 59-month age groups) within the minute. The evaluator used the ARI timer to count child breaths and noted the actual breath count in the form. The data were entered in CSPro and imported and analysed in Microsoft Excel.
Malaria Consortium study in Uganda
Malaria Consortium assessed whether there was a difference in accuracy in counting RR among CHWs using (1) the ARI timer alone vs (2) using the timer and beads and (3) using a mobile phone application, where the centre button on the phone was pressed for every breath observed, and which beeped after 1 min, after which the count was displayed. The sub-county was selected on the basis of its geographical location and mixture of CHWs with different age, sex and literacy levels [varying from being able to read and write well in any of the local languages (78%) to fairly well (22%)].
First, each CHW received a detailed explanation of how to count the RR, while simultaneously using the ARI timer, the same timer with beads as well as the mobile phone application. The CHWs then had the opportunity to practise three different methods of counting RR on videos of children with different breath counts. After familiarizing themselves with the three methods, each CHW was observed counting using the three options, respectively, on children with different breath counts. The video case scenarios were displayed on laptop screens, enabling one CHW to assess a child at a time. The final RR was recorded for the three tools as follows: (1) when using the timer alone, the CHW was asked for the final RR count, (2) when using both the timer and beads, the beads were counted back by a research assistant who provided the final RR count and (3) for the mobile phone application, the result was read from the screen. The breathing rate of the children in the video was known and was used as the gold standard. STATA 12 was used to analyse the data. The research only assessed the impact of these tools on counting the RR and did not assess their impact on the classification of fast breathing.
UNICEF studies in Uganda and Ghana
Other qualitative research was initiated by UNICEF, as part of a broader effort to identify CHWs' unmet needs regarding tools to support the assessment and classification of RRs. In Uganda the focus of the research was on CHWs' experiences assessing RR using the ARI timer and their ideas regarding tools that might be useful to help improve their assessment and the classification of RRs. The CHWs were not familiar with the use of beads. Building on these findings, a subsequent study was conducted in Northern Ghana, where the CHWs had been trained to use both the ARI timer and beads. The main objective of this study was to help UNICEF improve the design of diagnostic aids based on the challenges CHWs indicated they face while assessing, classifying and identifying treatment needs for children under 5 with rapid breathing. Table 2 summarizes the key findings of the various studies. In addition, an overview of the two different types of counting beads (a non-age-specific type and an age-specific type with colour-coded beads) can be found in Box 3.
Findings
Primarily illiterate CHWs' ability to count breaths and their knowledge of age-specific cut-off rates
The IRC assessed what the key challenges were for primarily illiterate CHWs while classifying fast breathing. Findings indicated that 46% (21/46) of CHWs in Uganda were not able to apply the age-specific cut-off points and 33% (15/46) made an incorrect count using the ARI timer. In South Sudan, 59% (19/32) of CHWs were not able to apply the age-specific cut-off points and 72% (23/32) of CHWs made an incorrect count using the ARI timer.
Box 2 Designs of the counting beads (description of the device and the organization(s) using it):
- Save the Children: One set of two 'age-specific and colour-coded' strands of beads, with one bead counted per breath. The two strands can be distinguished because the beads of one age group have different colours and sizes than the beads of the other strand. Several tiny beads are used to create space between the beads, and the strand is tightly tied to hold the beads in place. At the end of the age cut-offs for each strand, there are 10 differently coloured (e.g. red/pink) beads which, if counted, indicate fast breathing. Strands have a clasp so they can be used open (straight) or closed (like a necklace). Beads are eclipse shaped and made from newspapers and glue.
- Ministry of Health in Ghana and UNICEF: One strand of beads, non-specific for children aged 0-5 years and colour-coded per 10 beads to ease counting. The strand is necklace shaped and has a protruding start/end bead. One bead is counted per breath. Beads are made from plastic. CHWs use these beads in conjunction with the ARI timer. Some versions of the beads have an additional separating bead between every bead.
- IRC and Malaria Consortium: One set of two 'age-specific and colour-coded' strands of beads, with one bead counted per breath. The two strands can be distinguished because the 10 fast-breathing beads match the age-specific amoxicillin packaging used in Uganda. There are no separator beads, and they are tied so that there is space for moving the beads across the string. The beads are made from plastic and round shaped. Strands have a clasp so they can be used open (straight) or closed (like a necklace). The white beads are 16 mm in diameter, whereas the coloured beads are 12 mm in diameter.
- PSI: One set of two 'age-specific and colour-coded' strands of beads, with one bead counted per breath. The two strands can be distinguished because the fast-breathing beads of one age group are different colours. There are no separator beads. The beads are made from plastic and round shaped. The strand is straight, with a small orange bead at the beginning and end.
Primarily illiterate CHWs' ability to classify fast breathing using counting beads
In South Sudan 13% (4 out of 32) of the CHWs were able to classify fast breathing using only the ARI timer, whereas this number increased to 63% (20/32) (odds ratio (OR) = 11.7, P = 0.002) when they used the beads together with the timer. Findings were similar in Uganda, where the ability to classify fast breathing increased from 37% (17/46), using only the timer, to 73% (24/33) (OR = 4.4, P < 0.005), using both tools. Combining the data from both countries, the ability to classify fast breathing increased from 27% (21/78) to 68% (44/65) (OR = 5.7, P < 0.005), see Figure 1. Save the Children's research did not have a component of assessing a sick child without the use of beads, as they identified that CHWs had limited numeracy, and were not able to count beyond 10. A total of 69 sick children were assessed by the 27 CHWs. Using the 'age-specified and colour-coded' beads and ARI timer, the CHWs classified 25 cases as suffering from fast breathing. The senior evaluator who only used the ARI timer classified 23 as fast breathers. Of the 25 CHW-classified fast-breathing cases, 15 (60%) matched with the evaluator-classified fast-breathing cases.
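As a check on how the odds ratios reported for the IRC comparisons follow from the published counts, the sketch below treats each comparison as an unpaired 2x2 table (correct vs incorrect classification, with beads plus timer vs timer alone); this reproduces the South Sudan and combined figures and comes close to the Uganda figure, with small differences attributable to rounding or the exact method used in the original analysis.

```python
# Reproduce the reported odds ratios from the counts given in the text,
# treating each comparison as an unpaired 2x2 table.

def odds_ratio(correct_beads, total_beads, correct_timer, total_timer):
    odds_beads = correct_beads / (total_beads - correct_beads)
    odds_timer = correct_timer / (total_timer - correct_timer)
    return odds_beads / odds_timer

print(round(odds_ratio(20, 32, 4, 32), 1))   # South Sudan: 11.7 (reported 11.7)
print(round(odds_ratio(24, 33, 17, 46), 1))  # Uganda: 4.5 (reported 4.4)
print(round(odds_ratio(44, 65, 21, 78), 1))  # Combined: 5.7 (reported 5.7)
```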
CHWs were also administered a knowledge questionnaire that included questions on the use of beads appropriate for age. All 27 CHWs picked the beads appropriate for the two age groups. With respect to classifying a child as having fast breathing, 26 out of 27 picked the right colour (red) for the 2-to 11-month group and 27 out of 27 for the 12-to 59-month age group.
Literate CHWs' ability to correctly count breaths using counting beads
In Uganda, Malaria Consortium assessed whether the use of tools, including counting beads, improved the ability of literate CHWs to count RR. CHWs were 5.6 times more likely to count (not classify) RR correctly (i.e. ±2 breaths) using the timer alone compared with when it was combined with beads (OR = 5.6, P < 0.001). There was no significant difference between the ARI timer and mobile phone application (OR = 1.1, P = 0.08), implying that CHWs have a similar capacity to correctly count RR using either of the assessment methods (i.e. counting themselves vs pressing a button for every breath observed).
Overall, the median difference between the 'true rate' and the rate observed with the three methods was −1 (interquartile range (IQR) −5 to 2) for the UNICEF timer, −1 (IQR −7 to 2) for the mobile phone application and −5 (−12 to 2) for the ARI timer with beads. Using the sign test for non-parametric data on matched pairs, the difference in rates observed using the ARI timer compared with the true rate was not significantly different from 0 (P = 0.01), whereas for the mobile phone application and counting beads, the difference between the rates observed and the true rate was significantly different from 0 (P = 0.001 and P < 0.0001, respectively). Using the same test, the median difference observed between the ARI timer and mobile phone application was not significantly different (P = 0.179), whereas the difference was significantly different between the ARI timer and the counting beads (P < 0.001) as well as between the mobile phone timer and the counting beads and timer (P = 0.001).
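The sign test used above for the matched-pair comparisons simply asks whether counts fall above or below the comparison value more often than chance would allow. The sketch below illustrates the idea with made-up differences (not the study data), using a binomial test with p = 0.5 after discarding zero differences.

```python
# Illustrative sign test on matched-pair differences (observed - true rate);
# the list of differences is invented, not taken from the study.
from scipy.stats import binomtest

differences = [-1, -5, 2, -7, 0, -3, 4, -2, -6, 1]
n_pos = sum(d > 0 for d in differences)
n_neg = sum(d < 0 for d in differences)          # zero differences are dropped

result = binomtest(n_pos, n_pos + n_neg, p=0.5)
print(f"{n_pos} positive vs {n_neg} negative differences, p = {result.pvalue:.3f}")
```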
When analysing the accuracy of the three different methods by the characteristic of the rate of the child in the video (i.e. normal or fast), it was demonstrated that all three methods performed much better on the slower breathing rates (i.e. 40 breaths/min) than the fast rates (i.e. 65 and 66 breaths/min). All three methods tended to overestimate the rate in the slowbreathing scenario, whereas they underestimated the rate in the fast-breathing scenarios.
Box 3 Overview of the two types of counting beads intended for use in conjunction with an ARI timer
Non-age-specific counting beads: These counting beads are designed to help the CHW keep track of the number of breaths taken. The CHW moves one bead for each breath. When 1 min has passed, the CHW counts back the beads to determine the RR. Because of the colour coding of the beads (e.g. every set of 10), the CHW can count back the beads per colour: e.g. 1 colour = 10 breaths, 2 colours = 20 breaths, etc. On the basis of the RR, the CHW compares the result against the IMCI guideline, correctly remembering the age-specific cut-off rate.
Age-specific beads with colour coding: These counting beads support the CHW not only with counting but also with interpreting the RR against the IMCI guidelines. These beads remove the need for the CHW to count in order to assess pneumonia (unless the actual RR is required for reporting purposes) because they consist of a set of two strands or rows of beads that are colour-coded to match the thresholds for the two different age groups. Depending on the age of the child, the CHW selects the matching bead strand and moves the beads for 1 min. Once the minute has passed, the CHW can identify whether the child has pneumonia, depending on the colour of the bead she/he is holding between his/her fingers.
Figure 1 Percentage of CHWs able to classify fast breathing correctly, based on findings from IRC (groups: South Sudan, Uganda, Total; comparing use of only the ARI timer vs use of both the ARI timer and 'age-specific and colour-coded' counting beads)
CHWs' perceptions on the use and design of age-specific and colour-coded beads
The colour-coded beads designed by IRC in Uganda match the colour of the locally available amoxicillin packages for specific age groups, and according to (mainly illiterate) CHWs, this helped them to identify the correct string needed for the child, eliminating the need to recall the cut-off points for the different age groups. Regarding the use of counting beads, CHWs interviewed by IRC were most likely to say that: (1) the use of beads eliminated the need to count or worry about forgetting the number or making mistakes while counting; (2) it was easy to move hands along the beads; (3) it was easy to know when to give the treatment and (4) it was easy to explain to the mother that her child did not need medicine. The need for more training and practice regarding the use of beads in combination with the ARI timer was mentioned by CHWs in South Sudan and Uganda.
In the study amongst literate CHWs by Malaria Consortium in Uganda, CHWs expressed their support for alternative methods to count the RR other than the timer which they were familiar with. Using the beads, in addition to the timer, was perceived as being advantageous because each breath was being represented by movement of a bead and then counting of the beads could be done afterwards, thus giving more accurate results.
What I have liked about this method is that I don't have to count the beads as I move them, counting usually comes last and this gives me more concentration on observing the child and moving the beads.
The method was acknowledged as one that could give a quick picture of the diagnosis by identifying the colour of the bead where the hand has stopped after the alarm has gone off (i.e. whether a white or a red/green bead); the next steps could follow later: From what I have learnt today, the combination of both the timer and beads is very helpful because it helps me to immediately know whether the child has fast breathing by looking at the colour of beads where I have stopped and then, I count the beads to confirm what I have seen, after which I write in my book.
The use of the beads also made it easier to communicate the results to the caretaker, as the colours visually flagged if the child had a high breath rate (indicating pneumonia) or not.
However, the beads were also perceived by some as disadvantageous because combining the timer and beads was considered to be more tasking, with a slow bead movement process and therefore reduced accuracy. During observations, it was noted that CHWs often found it difficult to start moving the beads immediately after starting the timer.
For me, I find this method very challenging because I have to observe three things at the same time: I have to look at the child, start the timer and also move the beads at the same time, which is a bit tasking. That is why you have seen that I have been forgetting to put on the timer as I move the beads.
Related to the design, CHWs often mentioned that the space between individual beads can affect the outcome of the count.
The space between the beads should be small, as the bigger the distance was between the beads, the more difficult it was to move them. The CHWs also suggested that the beads should be light and unattractive in order not to be mistaken for accessories and have a strong string, which would not break easily.
Data from the assessment in Uganda by UNICEF, where mainly illiterate CHWs only used the ARI timer, revealed that CHWs would find it useful to have a device that supports them with the classification of fast breathing and that would help them communicate the findings to the caregivers. When the CHWs in Uganda were shown counting beads they mentioned that beads might be confused with a toy; however, it would help them with the assessment and the classification of fast breathing.
I would have preferred the beads because each time I count, I hold a bead so I will be able to know how many beads I left behind in case I forget where I was when counting.
Green is OK. Red is for danger.
CHWs' perceptions on the use and design of non-age-specific beads
The main concern among mainly illiterate CHWs regarding the non-age-specific beads used in the Northern region of Ghana was that these beads still required counting and remembering the cut-off rate. The beads also have a 'separator bead', which the CHWs thought creates confusion because 'we move the beads without looking at them'. Nevertheless, the use of prayer beads in the region is common; CHWs are used to counting with beads and are at ease with this. However, the CHWs reported that this similarity could reduce acceptance by caretakers, as the beads are perceived as a tool for prayer and not as a healthcare-related tool.
When shown the 'age-specific and colour-coded' beads used by IRC and Save the Children, the CHWs and their supervisors in Ghana showed a strong preference for this design as it eliminated the need to count and remember the cut-off rate.
This one has only two colours and the colours easily tell if the child has pneumonia or not. I think that this one will be more convenient to use than the current one we have.
Discussion
This review of studies on the utility of counting beads as a tool to improve classification of fast breathing in children with cough and/or difficulty breathing to guide pneumonia diagnosis shows that the introduction of 'age-specific and colour-coded' counting beads, in addition to an accurate timer, can help CHWs with limited numeracy and literacy to more accurately assess fast breathing. Although CHWs also expressed concerns on the task intensity of the method, in general, it was acceptable and applicable across different settings.
There are several limitations associated with the studies included in this review. First, the studies used several research approaches, methods (including gold standards) and research questions. Moreover, programmatic settings as well as the levels of literacy and numeracy of the CHWs differed. Second, the case scenarios used by the various organizations assessed children in different settings (at home, in hospital or on a video screen), which resulted in different breathing patterns; e.g. the children assessed at home were unlikely to have fast breathing, although a few of them happened to have it. This review suggests that these factors influence the utility of beads.
Data from IRC and Save the Children, and anecdotal evidence from PSI and IRC, suggest that 'age-specific and colour-coded' beads enable and improve accuracy in the classification of fast breathing by CHWs with limited literacy and numeracy. However, if literacy and numeracy are not an issue, findings from Malaria Consortium show that the use of beads complicates the assessment, as it resulted in more inaccurate counts (i.e. CHWs were 5.6 times less likely to count correctly using the beads and timer). A reason for the breathing count inaccuracy among literate CHWs was that some perceived the use of beads as more task intensive. For CHWs with limited literacy and numeracy (e.g. those not able to count beyond 10), however, the use of beads enabled them to track the breathing rates as well as classify the count against the age-specific cut-off points, which would not be possible without these beads.
It was also found that the rate of breathing influences the accuracy of the RR count. The research conducted by Malaria Consortium shows that, regardless of the tool CHWs used, they tended to overestimate the rate in the slow-breathing scenario, whereas they underestimated the rate in the fast-breathing scenarios.
Recommendations
Regarding the use of counting beads, these data show that it is premature to conclude to which degree the beads, in addition to a timer, would improve the ability of trained CHWs to correctly classify fast breathing. A more conclusive assessment is needed amongst sick children, disaggregating data by intensity of training and supervision, levels of literacy, numeracy and comparing the final breath count to a gold standard.
Previously conducted studies on pneumonia diagnosis by CHWs indicate that even if CHWs are good at counting, they still often make mistakes in classifying the breath count (Kallander et al. 2006; Mukanga et al. 2011). Here, the use of beads could help in classification, and it should become clearer how, for literate CHWs, this potential positive effect is affected by the inaccuracy in counting when using beads. Is this because of a lack of familiarity in using the beads, or is the use of beads actually more complex and task intensive?
Concurrent to the need of more evidence regarding the utility of beads, there is a need for more research assessing the effectiveness of other devices, such as automated RR counters.
Conclusion
Given the overall paucity of data, this review of recent studies provides insights on a range of issues to consider when implementing counting beads in iCCM programmes.
This emerging evidence suggests that the introduction of welldesigned 'age-specific and colour-coded' beads in addition to an accurate timer can help CHWs who have difficulty counting breaths and remembering age-specific cut-off rates to more accurately assess and classify fast breathing. It also has the potential to improve communication with the child's caretakers-particularly regarding appropriate treatment options. However, more research is needed on these and other devices to decrease the inaccuracy in pneumonia diagnosis. | 2016-05-04T20:20:58.661Z | 2014-06-27T00:00:00.000 | {
"year": 2014,
"sha1": "dc5ff4397eda2651b197e6ca0ac2ae21a6c50260",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/heapol/article-pdf/30/6/696/2184421/czu047.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8b88b7b04d1315ee6a869e665cb2b09a69474682",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254871142 | pes2o/s2orc | v3-fos-license | The Early Diagnosis of Alzheimer’s Disease: A Patient-Centred Conversation with the Care Team
Alzheimer’s disease (AD) is a neurodegenerative disorder which accounts for 60–80% of dementia cases, affecting approximately 10 million people in Europe. Neuroimaging techniques and cerebrospinal fluid biomarkers used in combination with cognitive assessment tools open the door to early diagnosis of AD. However, these tools present some challenges that need to be overcome, such as low sensitivity or specificity, high cost, limited availability or invasiveness. Thus, low-cost and non-invasive alternatives, such as plasma biomarkers, have the potential to drive changes in AD screening and diagnosis. In addition to the technical aspects, organisational challenges as well as ethical concerns need to be addressed. In many countries, there is an insufficient number of specialists to recognise, evaluate and diagnose dementia and the waiting times to see a specialist are long. Given that there is currently no cure for AD, it is important to consider the potential psychological impact of an early diagnosis. In addition, counselling before biomarker sampling and during diagnosis disclosure is vital to guarantee that the patients have all the information necessary and their queries are addressed in a sensitive manner. Here, we illustrate (using a clinical vignette) current challenges of diagnosis and discuss some of the benefits and challenges of early diagnosis in AD including the value of biomarkers in combination with clinical evaluation. Lastly, some guidelines for disclosing early diagnosis of AD are provided based on our experiences.
INTRODUCTION
Alzheimer's disease (AD) is a neurodegenerative disorder characterised by progressive decline in cognitive domains such as memory, language, visuospatial and executive functions, as well as behavioural changes, affecting the ability to perform daily life activities. This results in patients struggling to live independently, feeling less confident, and lonely [1][2][3]. AD also affects the family caregivers of people suffering from dementia, who often suffer from emotional stress and depression [4].
AD accounts for approximately 60-80% of dementia cases [5], currently affecting an estimated 10 million people in Europe, and by 2050, the number of patients will likely almost double to 18 million people [6]. The costs associated with AD and other dementia aetiologies are projected to increase by approximately 43% between 2008 and 2030, with a forecasted total cost in excess of €250 billion [7]. Thus, AD presents a considerable challenge not only for individuals and families affected by this disease but for society as a whole. It has been highlighted that healthcare professionals should work as part of an integrated patient-centred care team to manage the growing and diverse population with AD, starting with diagnosis [8]. In support of this, the importance of establishing an early AD diagnosis, together with guidance and tools that may be used throughout the diagnostic journey, has been covered in a recent comprehensive review [9].
The neuropathological hallmarks of AD include features such as the presence of abnormal amyloid-β and tau biomarkers and cerebral amyloid angiopathy, neurofibrillary tangles, and glial responses, as well as neuronal and synaptic loss [10][11][12]. Significant evidence has been collected that supports a key role for amyloid-β dyshomeostasis in initiating AD. Thus, the amyloid (or Aβ) hypothesis has become the dominant model of AD pathogenesis and is informing approaches to and the development of potential treatments [13]. However, it has been proposed that progress in drug development is likely to improve if a probabilistic model of AD is adopted where the condition is driven by genetic factors of decreasing penetrance and stochastic factors whose weight is inversely related to penetrance [14]. On the basis of research, AD can be understood as a clinical continuum covering the preclinical and clinical phases [15]. Although these hallmarks were initially characterised in post-mortem studies, biomarkers allow investigation into some of these neuropathological changes in vivo, having reached a high level of validity.
Studies have shown that alterations in both Aβ fluid and neuroimaging biomarkers, which link to the Aβ hypothesis for AD pathogenesis, precede clinical symptoms [16]. The preclinical phase refers to the stage at which AD pathology is present but cognitive performance (measured with standardised cognitive tests) still lies within the normal range [17]. During the clinical phase, mild cognitive impairment is characterised by the onset of the earliest cognitive symptoms, while in the final phase of dementia the cognitive impairments are severe enough to produce loss of function [17].
Adequate early diagnosis of AD before symptom onset is particularly important, as it would enable the development of secondary preventive and disease-modifying therapies [18]. However, early diagnosis of AD is not always straightforward. This was discussed in a satellite symposium, 'The early diagnosis of Alzheimer's disease: a patient-centred conversation with the care team', presented at the 7th EAN Congress in 2021, the content of which forms the basis of this manuscript. By using a clinical vignette, we set out to illustrate some of the challenges that currently exist with early diagnosis and how neuroimaging and cerebrospinal fluid (CSF) biomarkers can help to correctly identify AD in its early stages.
CLINICAL VIGNETTE
An otherwise healthy man, aged 62 years, who was still fully employed in a managerial position sought a consultation as he had experienced cognitive difficulties at work, including memory lapses. For example, these took the form of confusion and some instances of absent-mindedness. Most of these episodes had occurred in the working environment, away from the home. He had otherwise a stable personality without anxiety or depressive episodes in the past, and was in good physical health except for minor hypertension and hypercholesterolaemia. His current medications included aspirin, atorvastatin, and losartan, and there was no family history of dementia.
The initial neurological examination performed by a dementia specialist was unremarkable. Psychiatric assessment showed high levels of anxiety according to the Hospital Anxiety and Depression Scale (HADS) and the State-Trait Anxiety Inventory (STAI).
The magnetic resonance imaging (MRI) report read 'MRI is within the normal range apart from very discrete white matter signal abnormalities at the supratentorial level of ischaemic origin'. This consultation suggested that acute anxiety and reduced self-confidence had resulted in cognitive impairment. Nevertheless, as this patient exhibited cognitive impairments further tests were carried out in order to reach a comprehensive diagnosis.
An 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) scan showed hypometabolism in the parietal lobes and the retrosplenial cortex; the CSF analysis showed elevated tau and phospho-tau (p-tau) and a reduced level of the 42-amino-acid form of Aβ (Aβ42); the amyloid PET scan showed signal in the cerebral cortex, mainly in frontal and temporal regions; the tau PET scan showed abnormal uptake, mainly in the temporal and parietal cortices, corresponding to Braak stage V-VI. Thus, the additional biomarker analysis indicated that the mild cognitive impairment (MCI) was actually due to AD in this patient. It is important to stress that there are aspects in this clinical vignette that highlight the appropriate use of biomarkers. For example, the use of amyloid PET is based on the requirement to have objectively confirmed cognitive impairments with progressive dementia at an atypically early age of onset (usually defined as age 65 years or less). The use of biomarkers should also result in an increase in diagnostic certainty and potentially alter management. The use of biomarkers in the current case facilitated the accurate diagnosis, which otherwise may have resulted in an assumed psychiatric syndrome and associated treatment based on the reported increase in anxiety. The final diagnosis of MCI due to AD made sense to the family of the patient and, although they were saddened by the diagnosis, they expressed relief in knowing the full picture. In summary, this clinical vignette illustrates the value of using clinical assessment tools in combination with biomarker analysis to help reach an accurate diagnosis.
BENEFITS OF TIMELY DIAGNOSIS: INCREASED OPTIONS FOR PATIENTS, FAMILIES AND CAREGIVERS
Given the challenges of early diagnosis of AD, it is worth pointing out the benefits that a timely diagnosis can provide. The ''Imaging Dementia-Evidence for Amyloid Scanning (IDEAS)'' study, with 11,409 participants from 343 imaging centres, highlighted the importance of timely diagnoses [19]. Following an amyloid PET scan, there was a change in clinical management for approximately 60% of patients within 90 days, which consisted of one or more of AD drug therapy, other drug therapy or counselling about safety and future planning.
Receiving a timely diagnosis of dementia allows access to information, resources, support and available therapies that may improve cognition. An early diagnosis can help patients to plan for the future, making adjustments for safety and quality of life, as well as seeking financial guidance or legal support [8,9]. It also gives access to research, giving the option to patients to participate in clinical studies and registries [9]. Psychological support can also be very beneficial for families receiving a diagnosis of AD [20].
Finally, an early diagnosis can empower patients to make healthy lifestyle changes that can help to delay the onset of symptoms. A number of studies showed that an active lifestyle can help to build cognitive and brain reserve, which can confer resistance and resilience against neurodegeneration [21,22]. Mounting evidence suggests that physical activity, diet, tobacco and alcohol use, hypertension and diabetes may influence the risk of cognitive impairment and AD [23][24][25]; however, the supporting evidence is limited by regional bias and lack of long-term controlled studies [26]. A large recent study concluded that the risk of cognitive impairment can be reduced by intensive blood pressure control (systolic blood pressure < 120 mmHg) in individuals with hypertension, but not by standard blood pressure control, suggesting that the protective effect of lifestyle changes may depend on the baseline condition and on the specific therapeutic target [24]. Together, these findings provide support and information to patients on the lifestyle changes that may help lower the risk of dementia.
CLINICAL ASSESSMENT TOOLS TO EVALUATE A MEMORY COMPLAINT
There are a number of clinical assessments tools that can be used to evaluate cognitive deficits.
Here, we will focus on the main brief detection measures that are often used in primary care. Some of the most common scales are the Mini-Cog [27,28], the Mini-Mental State Examination (MMSE) [28] and the MoCA [28,29], with different degrees of sensitivity [30] (Table 1).
Other more extensive scales are often used in clinical trials as endpoints. They include Clinical Dementia Rating-Sum of Boxes (CDR-SB) [31] and Alzheimer's Disease Cooperative Study Scale for Activities of Daily Living Inventory (Mild Cognitive Impairment Version) (ADCS-ADL-MCI) [32], both of which represent composite measures assessing cognitive symptoms [28,31]. Other scales that focus on symptoms alone include Alzheimer's Disease Assessment Scale-Cognitive 13-item subscale (ADAS-Cog) [33], MMSE [34] and Neuropsychiatric Inventory-10 item version (NPI-10) [35]. Together, these validated scales cover a large range of clinical symptom and disease measures with a minimal overlap between scales. Most are used alongside self-reports by patients, allowing one to capture subjective measures of disease and providing important support to physicians in diagnosis and development of management plans [35]. However, the administration time may take up to 1 h, making their use impractical for screening purposes in memory clinics or by first-line general/family practitioners. From the brief scales mentioned above, the MoCA was specifically developed as a screening tool for MCI. It covers multiple cognitive domains, including executive functions. The initially proposed cut-off (25/26 points) showed good sensitivity for MCI (≥ 83%) but poor specificity (≤ 66%). Thus, new cut-offs have been proposed to enhance diagnostic accuracy [36].
In addition, the MoCA Memory Index Score (MIS) was devised to predict conversion from MCI to AD. The MoCA-MIS is calculated by adding the number of words remembered in free delayed recall, category-cued recall, and multiple choice-cued recall multiplied by 3, 2 and 1, respectively, with a score ranging from 0 to 15. Individuals with MCI with a low MoCA-TS (total score less than 20/30) and a low MoCA-MIS (Memory Index Score less than 7/15) are at greater risk of short-term conversion to AD [37].
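As a worked illustration of the weighting just described, the sketch below restates the MoCA-MIS arithmetic and the published cut-offs (total score below 20/30 together with MIS below 7/15); the function and variable names are illustrative assumptions, and the three recall counts together cover the five study words, so the score ranges from 0 to 15.

def moca_memory_index_score(free_recall, category_cued, multiple_choice_cued):
    # Words recalled freely count x3, after a category cue x2, and after a
    # multiple-choice cue x1; each of the 5 study words is credited once.
    return 3 * free_recall + 2 * category_cued + 1 * multiple_choice_cued

def high_short_term_conversion_risk(moca_total_score, memory_index_score):
    # Published cut-offs: MoCA-TS < 20/30 together with MoCA-MIS < 7/15.
    return moca_total_score < 20 and memory_index_score < 7

print(moca_memory_index_score(2, 2, 1))          # -> 11
print(high_short_term_conversion_risk(19, 6))    # -> True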
Although cognitive assessment tools can accurately predict future development of AD in patients with MCI, combining them with biomarkers is recommended to provide greater accuracy [38,39]. In addition, biomarkers may be useful to diagnose patients that present borderline clinical manifestations of disease.
CONFIRMING ALZHEIMER'S DISEASE PATHOLOGY USING IMAGING AND FLUID BIOMARKERS
AD was initially defined as a clinical-pathologic entity, which was diagnosed definitively at autopsy [40]. In the 1980s, amyloid-β peptide plaques assumed a cornerstone position in the diagnostic framework; later, they also gave the name to the so-called amyloid hypothesis, which suggested the accumulation of Aβ peptides into senile plaques as the main cause of neurotoxicity, neuronal cell death and subsequent neurodegeneration [41,42]. The hypothesis is supported by the genetic evidence in familial cases of AD, e.g. mutations in amyloid precursor protein (APP) or presenilin-1/2, a component of γ-secretase involved in Aβ production [41]. However, the term AD is often used to also describe prototypical clinical syndromes without neuropathologic verification [43]. It is estimated that between 10% and 30% of individuals with a clinical diagnosis of AD do not display neuropathological features of AD at autopsy [43].
The amyloid hypothesis has been challenged with recent advances in imaging that showed that amyloid deposits in the brain can reflect ageing in general, and are not always associated with dementia [41]. Accumulating evidence suggests that the pathophysiological disease progression in the brain may be only in part mediated by amyloid burden; numerous other genetic and environmental factors may impact cellular response and resilience and the lag between plaque accumulation and development of clinical symptoms [26].
In 2018, the National Institute on Aging and Alzheimer's Association (NIA-AA) proposed a purely biological definition of Alzheimer's disease, given that amnestic multidomain dementia is neither sensitive nor specific for AD [43]. The NIA-AA suggested a research framework, grouping biomarkers in living persons into those of Aβ deposition, pathologic tau, and neurodegeneration [43].
Although the authors emphasised that this research framework should not be used in general medical practice, it generated debate and challenges in everyday clinical practice. One of the concerns raised was that cognitively unimpaired individuals can have biomarker evidence of both Aβ and tau pathology without developing clinical manifestations in their lifetime. Recently, the International Working Group (IWG) proposed that confirmation of AD diagnosis requires biomarker evidence of AD pathology together with specific AD clinical phenotypes [18]. According to the IWG, cognitively unimpaired individuals with a biomarker-positive result should be considered only at risk of progression to Alzheimer's disease. In addition, the IWG does not recommend biomarker testing for cognitively unimpaired individuals.
Taking these guidelines into consideration, different biomarkers can be used to assess the risk of developing AD in individuals exhibiting cognitive impairments.
Neuroimaging Biomarkers
A variety of neuroimaging techniques can be used as biomarkers in AD, offering complementary information, overall with a high level of analytical and clinical validity [44][45][46]. MRI allows the identification of atrophy when this is marked enough, with medial temporal atrophy being a possible sign of AD. However, atrophy patterns overlap with other diseases and unusual forms of AD can present atypical patterns of atrophy [47].
PET is another neuroimaging technique used for AD diagnosis that offers different possibilities depending on the tracer used. The most common PET tracer for AD in clinical practice is FDG, which measures the cerebral metabolic rates of glucose, a proxy for neuronal activity, although the use of Aβ and tau tracers is growing rapidly [47,48]. Studies have shown a FDG-PET endophenotype for AD characterised by hypometabolism in regions of the default-mode network, with metabolism deficits gradually worsening as the disease progresses [47]. However, hypometabolism is less severe or consistent in the early stages of AD [47].
Amyloid PET predicts presence of fibrillary aggregates of Aβ, offering higher specificity for AD than MRI or FDG-PET [47]. It can help in the differential diagnosis of AD and frontotemporal lobar degeneration [49]. However, a positive result does not definitively diagnose AD and amyloid imaging may not be sensitive enough in the early stage of some patients.
While amyloid PET has been used for over a decade, tau PET ligands have only become available recently, showing robust differences between healthy controls and patients with MCI [50]. Tau PET holds particular promise as a biomarker for AD, as studies suggest that tau deposits mediate the association between Aβ and cognitive impairment [51]. Tau PET has been found to be more sensitive than Aβ PET, being associated with worse performance on a variety of neuropsychological tests in both prodromal AD and advanced stages of the disease [52]. Despite being approved by the US Food and Drug Administration (FDA) [53], tau PET is still not reimbursed, limiting access.
Cerebrospinal Fluid Biomarkers
Given the direct contact of the CSF with the extracellular space of the brain, biochemical changes in the CSF can offer valuable information. Thus, CSF has been the focus of much research on diagnostic biomarkers for AD. Three biomarkers, total tau (t-tau), p-tau, and Aβ42, have been evaluated in numerous studies, showing a high ability to differentiate AD from normal aging, depression and Parkinson's disease, but lower specificity against vascular dementia and Lewy body dementia [54]. It is hypothesised that decreases in soluble Aβ1-42 in CSF signal its aggregation into plaques. A recent review highlighted the usefulness of using the Aβ1-42/Aβ1-40 ratio in CSF for the diagnosis of AD in people with dementia, and its potential utility in identifying early signs of AD [55]. Indeed, the FDA recently approved the first Aβ1-42/Aβ1-40 CSF test for the early detection of amyloid plaques associated with AD [56]. In addition, ratios of p-tau/Aβ1-42, Aβ1-42/Aβ1-40 and t-tau/Aβ in CSF have demonstrated good concordance with amyloid PET [57]; furthermore, these biomarker assays are associated with lower costs than PET [18]. Consequently, CSF investigations for Aβ and tau are included in the IWG recommendations for clinical diagnosis of Alzheimer's disease, with PET investigations providing an alternative in cases where lumbar puncture is contraindicated [18].
In order to improve the predictive value of CSF biomarkers and allow for risk stratification, the Erlangen Score Algorithm [58] and other scales [59] that combine measures of Aβ42, t-tau, and p-tau have been developed. For the Erlangen Score Algorithm, the CSF results of a patient are scored depending on the pattern of Aβ42, t-tau, and p-tau alterations. Scores range from 0, where there is no evidence of AD, to 4, where AD is probable.
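A simplified sketch of this scoring logic is given below; it paraphrases the published algorithm from memory, so the handling of borderline values and the exact biomarker thresholds must be taken from the original report [58] rather than from this illustration, and the function and argument names are purely illustrative.

def erlangen_score(amyloid_group, tau_group):
    # Each biomarker group (amyloid: Abeta42 or the Abeta42/Abeta40 ratio;
    # tau: t-tau and/or p-tau) is summarised as "normal", "borderline" or
    # "pathological"; the two contributions are added, giving 0 (no evidence
    # of AD) up to 4 (AD probable).
    levels = {"normal": 0, "borderline": 1, "pathological": 2}
    return levels[amyloid_group] + levels[tau_group]

print(erlangen_score("pathological", "borderline"))  # -> 3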
Current Challenges and Future of Biomarkers
Despite the clinical utility of both neuroimaging and CSF biomarkers, there are still some obstacles to overcome. Firstly, availability of different techniques varies significantly between centres, with tau PET being available only in research contexts. In addition, the high costs of PET scanning and the lack of reimbursement for these examinations in some countries hinder accessibility. On the other hand, the invasiveness of CSF testing often elicits reluctance towards lumbar puncture among clinicians and patients, despite the overall risk of complications being relatively low [60].
Thus, low-cost and non-invasive alternatives such as plasma biomarkers have the potential to drive changes in AD screening and diagnosis. This demand has motivated the development of ultra-sensitive assays, able to measure very low levels of AD-related biomarkers in blood samples. However, validation in multiple independent cohorts and across platforms as well as comparison with existing validated biomarkers is still lacking [61,62].
Other promising non-invasive biomarkers for AD are neurophysiological markers such as cortical excitability measured with transcranial magnetic stimulation (TMS) [63] as well as retinal changes, including structural, vascular and electrophysiological biomarkers [64]. Digital biomarkers using mobile and wearable device-derived data may also aid early detection of AD [65]. Lastly, combining biomarkers with genotyping can provide greater accuracy in the early diagnosis of AD.
Major System Constraints
In order to achieve early diagnosis in AD, it is important to also consider whether countries are prepared to incorporate innovation. The "Dementia Innovation Readiness Index 2017" investigated this in the G7 countries and found that, in most cases, there was an insufficient number of specialists to recognise, evaluate and diagnose dementia [66]. In addition, in some European countries such as France and the UK, the waiting time to see a specialist exceeds 12 months [67].
Psychological Impact of Early Diagnosis
With the currently limited availability of disease-modifying treatment for AD, it is important to consider the psychological impact of early diagnosis. Studies have begun assessing the psychological effects of disclosing both AD-related genetic and biomarker information to cognitively unimpaired adults. In one study, participants who learned they had elevated amyloid levels did not experience short-term negative psychological sequelae [68].
A recent literature review [69] suggests that the potential benefits of disclosure, such as increased autonomy, outweigh the low risk of psychological distress or self-harm in cognitively unimpaired research participants. However, other factors such as potential discrimination in the workplace or insurance need to be considered.
It is also important to point out that pre-and post-biomarker counselling varies across centres [70], which calls for better biomarker counselling and better training to improve communication skills. In this regard, Huntington's disease, for which genetic testing has been available for longer than any other adult-onset genetic disorder, can serve as an example, as genetic testing is offered as part of comprehensive specialist counselling [71].
Considerations to Disclose an Early Diagnosis
As disclosing early diagnosis is a sensitive topic, we suggest some points to consider across the diagnosis process.
Firstly, during pre-biomarker sampling counselling, it is important to provide basic information and establish that the patient understands the diagnostic process and the information that will be disclosed. The four components of clinical competency need to be considered: understanding, appreciation, reasoning, and expression of choice. It is also important to discuss the purpose, limitations and any possible benefits and disadvantages [72]. In addition to providing information, it is also vital to ask whether there are any questions before continuing with next steps.
During the diagnosis disclosure, it is advisable that the patient is accompanied by a caregiver to offer support and to be 'an extra pair of ears'. Communication should be adjusted to the individual patient and information should preferably also be provided in writing for future reference. Patient wishes in terms of level of information should be respected.
The physician should be open about the uncertainty in biomarker interpretation and avoid a deterministic interpretation, adopting a probabilistic interpretation. If the diagnosis is uncertain, a discussion of further diagnostic evaluation or offering the option of a second opinion evaluation may be relevant.
The patient should always be given information about follow-up and post-diagnostic care. Lastly, advice on brain-healthy behaviour and attention to modifiable risk factors can help to empower the patient to do something themselves to influence the disease course.
CONCLUSIONS
Current diagnosis of AD usually relies on clinical signs of cognitive impairment. Cognitive screening using the MoCA can help detect MCI and stratify risk of conversion to AD dementia. In addition, biomarkers, used in combination with clinical assessment in people with specific AD phenotypes, allow for early and more accurate diagnosis in AD [18]. However, use of biomarkers also presents some challenges, such as limited availability and high cost in the case of PET, and invasiveness in the case of CSF. Biomarkers are also critical to the evaluation of disease-modifying therapies in clinical studies in AD through the identification of suitable participants, proof of target concept, determining disease progression and monitoring safety outcomes [73].
Future biomarkers such as plasma and neurophysiological measures could be an attractive advance both in terms of low cost and non-invasiveness. In addition, specific work should address comparability between biomarkers to develop adequate diagnostic algorithms combining them. However, technical advances in biomarkers need to happen together with a change in national health systems to overcome any constraints or barriers. This is of paramount importance, given the increasing prevalence of AD and the burden of this disease on individuals, families and society as a whole.
Biomarker assessment is generally viewed favourably by patients, as the information provided can help individuals to plan and make informed decisions. However, debate continues over how and when to return biomarker information to the patients. Pre-test counselling and diagnostic disclosure whilst respecting the wishes of the patient is as important as correct and accurate diagnostic evaluation.
Funding. This Commentary Article, including the journal's Rapid Service fee, is supported by Biogen, Cambridge, MA, USA.
Medical Writing Assistance. Medical writing assistance was provided by Clara Garcia-Gorro and Andrew Thomson of inScience Communications, Springer Healthcare Ltd, UK, and funded by Biogen Cambridge, MA, USA.
Authorship. All named authors meet the International Committee of Medical Journal Editors (ICMJE) criteria for authorship for this article, take responsibility for the integrity of the work as a whole, and have given their approval for this version to be published.
Contributions. Ziad Nasreddine, Valentina
Garibotto, Simon Kyaga and Alessandro Padovani all made substantial contributions to the drafting and revision of this article for publication.
Disclosures. Ziad Nasreddine was involved in clinical studies with Biogen, Eisai, Eli Lilly and Roche, and is a MoCA test Copyright Owner. Valentina Garibotto reports grants from the Swiss National Science Foundation (projects 320030_169876, 320030_185028 and IZSEZ0_188355), the Velux foundation (project 1123), the Fondation Privée des HUG and the Schmidheiny foundation. Valentina Garibotto received financial support for research and/or speaker fees through her institution from Siemens Healthineers, GE Healthcare, Life Molecular Imaging, Cerveau Technologies, Roche, Merck. Alessandro Padovani was involved in clinical studies for Biogen, Eisai, Roche and Zambon, and advisory boards for Biogen, Eli Lilly, GE Health, Nutricia, PIAM, Roche and Zambon. Simon Kyaga is an employee of Biogen.
Prior Publication. This Commentary Article is based on the proceedings of a Satellite Symposium, 'The early diagnosis of Alzheimer's disease: a patient-centred conversation with the care team' presented on 19 June 2021 at the 7th European Academy of Neurology (EAN) Congress.
Compliance with Ethics Guidelines. This article is based on previously conducted studies and does not contain any new studies with human participants or animals performed by any of the authors.
Data Availability. Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.
Open Access. This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/bync/4.0/. | 2022-12-20T16:02:59.483Z | 2022-12-18T00:00:00.000 | {
"year": 2022,
"sha1": "f37c93c5f2b66a8f2ca8b0b16a4ca91b09f4dce8",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40120-022-00428-7.pdf",
"oa_status": "GOLD",
"pdf_src": "Springer",
"pdf_hash": "b410f745889e7ca8dfb6abeaf7282af5115fbef0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
211563362 | pes2o/s2orc | v3-fos-license | Core–Double-Shell Fe2O3@SiO2@Jarosite Hybrid Nanoparticles Synthesized by Laser Ablation of Turquoise in Ethanol
This work highlights a facile green route for the one-step synthesis of iron oxide core–double-shell nanoparticles (NPs) and aluminum phosphide (AlP) nanosheets by pulsed laser ablation of the mineral turquoise target from Nishapur in the presence of an ethanol solvent. High-resolution transmission electron microscopy, selected-area electron diffraction pattern, and field emission scanning electron microscopy (FESEM) in combination with energy-dispersive X-ray mapping revealed the formation of NPs with a typical core@double-shell structure in which crystalline α-Fe2O3 (iron oxide) formed the core, while SiO2 (quartz) and (K, H3O)Fe3(SO4)2(OH6) (jarosite) participated as the inner and outer shell, respectively. However, the application of laser ablation on the turquoise phase of the target led to the formation of AlP nanosheets which was confirmed by the X-ray diffraction patterns and FESEM images. Strong absorption of the vein-ablated species in the UV region (250–360 nm) was the characteristic feature of α-Fe2O3 and jarosite phases, while the absorption band at 250–300 nm for the turquoise-ablated species was related to the presence of Cu compound species and also the α-Fe2O3 phase in the sample. Photoluminescence emission spectra for the vein-ablated species depicted a peak centered at 370 nm, while a peak located at 364 nm was ascribed to the turquoise-ablated species. In particular, these hybrid NPs with high purity and stability may offer new opportunities for bio-applications such as anticancer agents and water/wastewater applications.
■ INTRODUCTION
Recently, multifunctional materials have attracted considerable attention because of their wide range of applications. 1 In hybrid nanostructures, the different physical and chemical properties of multiple components are integrated, yielding higher device performance than their characteristic single components. 2 Different hybrid materials have been synthesized with versatile structures and various components such as core−shell and double-shell structures (Fe2O3@SiO2, Cux@Auy, Fe@Fe2O3@Si−C−O), 3 doped metal oxides, 4 noble metal-supported oxides, 5 hierarchical oxide heterostructures (TiO2/Fe2O3, ZnO/Fe2O3), 6 and so forth. Among the hybrid materials, iron oxide core−shell or double-shell nanostructures have motivated fundamental research because of their potential applications in different areas, including heavy metal removal by adsorption or reduction, 3c,7 drug or biomolecule transport, 8 pigment synthesis, 9 hyperthermia, 10 catalysis, 11 bio- and chemosensors, 12 fine ceramics, 13 and magnetic resonance imaging. 14 There have been various synthetic strategies to fabricate iron oxide core−shell or double-shell nanostructures, such as the selective etching strategy, 15 microemulsion, 16 Stober methods, 17 and the ship-in-bottle approach. 18 However, the methods listed above suffer from poor purity of the nanostructure surface, and this impurity partly blocks the benefit of bioconjugation at their surface. These approaches are usually time-consuming and require multiple high-temperature experimental steps. Additionally, most of the procedures are not green processes because of the use of toxic preservatives or chemical precursors; therefore, a well-established, direct, and environmentally friendly process for the synthesis of iron oxide hybrid nanoparticles (NPs) is still absent.
Liquid phase-pulsed laser ablation (LP-PLA) is a unique and efficient technique for the synthesis of complex nanostructures in a clean manner. 19 The LP-PLA method has received considerable attention in recent years as a simple and easily operable synthesis technique for preparing the various nanomaterials free of contaminants. 20 Ultrapure elements, 21 oxide and non-oxide synthetic compositions, 20a, 22 and polymers and alloys 23 are usually used as the targets for the synthesis of different nanostructures through the LP-PLA method. Minerals are not commonly used as targets in the LP-PLA approach because of the presence of multiphase structures in the mineral samples. Turquoise is a precious mineral which belongs to a group of inorganic materials with at least six recognized members. 24 The general formula of the turquoise group was written as AB 6 (PO 4 ) n (PO 3 )(OH) 2n−1 (OH) 8 ·4H 2 O, wherein the A position was filled with Cu 2+ or Fe 2+ as the most common constituent, while Fe 3+ and Al 3+ are situated at the B position. 25 Turquoise might also have veins running through it. The wide veins on the surface of turquoise are undesired for high-quality designing of jewelry pieces. The vein of Persian turquoise as our mineral target encompasses three main phases of pyrite, quartz, and jarosite. 26 Inspired by this, with the hypothesis of copossessing of iron oxide and silica, synthesis of crystalline core−double-shell NPs through one-step LP-PLA approach, low-cost Persian turquoise with wide veins, inappropriate for jewelry applications, is proposed as a candidate for the ablation target without any manipulation.
In this work, to the best of our knowledge, for the first time, as-prepared nanostructures from the laser ablation of Nishapur turquoise in ethanol medium performed using Ce:Nd:YAG laser in the fundamental wavelength (1064 nm) were studied. The hematite@quartz@jarosite core−double-shell NPs, also produced for the first time through the laser ablation of the vein phase of turquoise, consist of crystalline iron oxide (α-Fe 2 O 3 ) core, silicon oxide (SiO 2 ) inner shell, and jarosite ((K,H 3 O)Fe 3 (SO 4 ) 2 (OH 6 )) outer shell. Laser ablation of the turquoise phase of the target also led to the synthesis of AlP nanosheets. X-ray diffraction (XRD) pattern analysis, field emission scanning electron microscopy (FESEM) in combination with energy-dispersive X-ray (EDX) elemental mapping, high-resolution transmission electron microscopy (HRTEM), and selected-area electron diffraction pattern (SAED) were utilized for phase and morphological studies, while UV−vis−NIR and photoluminescence (PL) spectroscopy used for characterization of the optical properties of the synthesized nanostructures from laser ablation of Nishapur turquoise in ethanol.
■ RESULTS AND DISCUSSION
A schematic of the setup applied for the laser ablation experiments is shown in Figure 1a. The laser beam was irradiated on the surface of the target through the 0.01 m height of the ethanol solvent. The composition of the vein and turquoise phases of the target before and after laser ablation was characterized by XRD and is shown in Figure 1b. The vein of the target before laser ablation was mostly composed of jarosite (JCPDS file no. 00-036-0427), hematite (JCPDS file no. 00-033-0664), and SiO2 (JCPDS file no. 00-046-1045), which was consistent with the previously reported results of studies on Iranian turquoise. 26 After laser ablation of the vein, the identified phase was jarosite; however, the four main peaks ascribed to the (101), (012), (021), and (113) planes showed lower intensity relative to the intact vein target. This decreased intensity of the jarosite phase after laser ablation of the vein was due to the lower crystallinity of the as-synthesized product, formed during the ablation process, relative to the vein target. The full width at half maximum (fwhm) of the peaks related to the (012) and (113) planes for the vein before and after laser ablation was not meaningfully changed. A weak peak related to the main peak ((104) planes) of the Fe2O3 phase, as well as two weak peaks corresponding to the (101) and (110) planes of the SiO2 phase, were seen in the XRD pattern of the dried solution produced after laser ablation. The presence of iron in the EDX analysis (Figure S1) and of iron-containing NPs in the FESEM images (Figure 2h) confirmed the presence of the iron-containing phase in the vein-ablated species. In the XRD of the turquoise phase of the sample before laser ablation, the turquoise crystalline phase (JCPDS file no. 96-500-0102) besides the Fe2O3 (JCPDS file no. 00-033-0664) and AlP (00-012-0470) phases were characterized. After laser ablation of the turquoise phase, the most intensive peaks of the turquoise phase, that is, the ones related to the (111), (11̅1), and (123) planes, completely disappeared, while the three most intensive peaks of the AlP phase, that is, the ones related to the (111), (220), and (311) planes, were observed. In addition, the two most intensive peaks of the Fe2O3 phase, that is, the ones corresponding to (104) and (110), were identified.
(Figure 2 caption, partial: (b,f) surface of the vein phase before (b) and after (f) laser ablation; (c,g) surface of the turquoise phase before (c) and after (g) laser ablation; (d,h) back-scattered electron mode images of the turquoise phase before (d) and after (h) laser ablation; the inset in (h) shows the EDX analysis confirming iron-containing synthesized NPs on the surface of the turquoise phase after laser ablation; for all irradiated samples, morphological studies were done after 150 min of noncontinuous laser irradiation in the ethanol environment with a Ce:Nd:YAG laser at 1064 nm.)
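As a cross-check of the phase assignments above, Bragg's law can be used to relate the Co Kα diffraction angles to interplanar spacings; the snippet below is a generic illustration, and the 2θ value used in the example is an assumed number, not one reported in this work.

import math

wavelength_nm = 0.179  # Co K-alpha, as used for the XRD patterns

def d_spacing(two_theta_deg, lam=wavelength_nm):
    # Bragg's law (n = 1): lambda = 2 d sin(theta)  ->  d = lambda / (2 sin(theta))
    theta = math.radians(two_theta_deg / 2.0)
    return lam / (2.0 * math.sin(theta))

# Example: a reflection near 2-theta = 38.7 deg corresponds to d = 0.27 nm,
# consistent with the (104) spacing of alpha-Fe2O3 quoted later from HRTEM.
print(round(d_spacing(38.7), 3))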
The morphology of the surface of the target before and after 150 min of laser ablation (15 runs of 10 min laser irradiation) was studied by FESEM, as depicted in Figure 2. The effect of the laser pulses irradiated on the surface of the turquoise target in the ethanol medium can be seen by comparing Figure 2a,e for the untreated and irradiated targets, respectively. These overall views of the surface of the target demonstrate the formation of holes with different depths on the surface of the target (Figure 2e); the deep hole is related to the intense ablation for 150 min (9 × 10⁴ shots) in the vein phase, versus the shallow cavity, which indicated mild ablation on the surface of the turquoise phase. The higher resolution FESEM images of the vein and turquoise phases of the target before laser ablation are shown in Figure 2b,c, respectively. The morphology of fragmented particles of SiO2, Fe2O3, and jarosite constituting the vein phase is shown in Figure 2b, while an integrated surface structure of the turquoise phase is observed in Figure 2c. The surface structures of the vein and turquoise phases after laser ablation are also depicted in Figure 2f,g, respectively. Considering the FESEM images of the surface of the target after laser ablation (Figure 2f,g), the ablation dynamics of these two phases (vein and turquoise phases) during the laser ablation were different. Regarding the ablation of the vein phase by the laser, microreliefs were formed on the surface of the vein phase through the formation of new phases. The microreliefs were then modified by repeating the ablation procedure, and craters were observed in the formed septums and valleys of the vein phase of the target. These reformed irregular microreliefs on the vein phase of the stone, after 150 min of noncontinuous irradiation, formed irregular microchannels and are shown in Figure 2f. It can be understood from the FESEM image of the vein of the target after laser ablation that the deformation of the surface and the formation of the pore-like structure and incomplete pores arose from laser-induced heating of the ablation target, subsequent melting, and resolidification of the surface (Figure S2a).
Figure 3. FESEM images of (a) iron-containing particles (bright particles) prepared from laser ablation of the vein phase of the target; (b,c) AlP nanosheets besides Cu compound-ablated species produced from laser ablation of the turquoise phase of the target at low and high magnifications; (e) FESEM of a bright particle along with a line scan of Si and Fe overlaid; (d,f) EDX mapping images of oxygen and iron–silicon of the particle shown in (e), respectively; EDX mapping images for iron oxide–silicon oxide LP-PLA-generated core−double-shell particles produced in ethanol, showing the oxidized iron core and also the silicon oxide shell.
EDX analyses of the turquoise phase, carried out before and after laser ablation, respectively (Figures S1d and S3), confirmed the presence of Cu, Al, P, Fe, and O as the major components of the turquoise phase. However, additional EDX analyses showed that no quantities of Si were present in this region within the accuracy range of the EDX apparatus. These data were in good agreement with mineralogical analysis using laser-induced breakdown spectroscopy (LIBS) images from Nishapur turquoise. 27 The micro-fragmented particles of SiO2, Fe2O3, and jarosite of the vein phase were more exposed to phase transformation during the ablation compared to the integrated structure of the turquoise. Basically, because of the observed NIR optical band of the turquoise phase at 9300 cm⁻¹ (∼1075 nm), resulting from the Fe(II) band, 24 the reflectance of the turquoise phase in the NIR region (1064 nm laser wavelength) is higher than that of the vein phase. This lower absorption in the turquoise phase led to a lower ablation efficiency of the turquoise phase (shown in Figure 2e) and to a change of the ablation dynamics in this phase during laser ablation. The surfaces of the turquoise phase before and after laser ablation (150 min of noncontinuous irradiation) are illustrated in the back-scattered electron mode FESEM images (Figure 2d,h). After laser ablation of the target, ablated products in the form of nano- and microparticles transferred into the ethanol environment, while some of the as-prepared particles remained on the surface of the stone target (Figure 2h). The laser ablation product study was conducted by selective laser irradiation on the surface of the target. After 10 min of irradiation on the vein phase, the liquid environment (ethanol) changed from colorless to a dark brown liquid. The color change was attributed to the formation of nanostructures containing iron oxide species. The turquoise phase of the target was also exposed for 10 min to the laser pulses. FESEM micrographs of the iron-containing NPs, prepared from the laser ablation of the vein, as well as AlP nanosheets produced from the laser ablation of the turquoise phase of the target, are depicted in Figure 3a,b,c, respectively. The presence of Cu compound-ablated species in the suspension produced from laser ablation of the turquoise phase is seen in Figures 3c and S5a. Regarding the size distribution of the AlP nanosheets, at the applied laser fluence of 178.3 J/cm², a broad-peak Gaussian extrapolation was fitted (inset in Figure S4b). The mean lateral size of the AlP nanosheets is measured as 30 nm, while the lateral size of these nanosheets present in the solution was in the range of 30−210 nm. Iron-containing NPs were the bright spheres observed in Figure 3a. For the iron-containing NPs (inset in Figure S4a), the same trend was also seen. Most of the iron-containing NPs were synthesized with relatively small sizes, with a mean size around 20−30 nm; however, substantial numbers of 20−550 nm size particles still existed in the suspension. These broad-peak Gaussian distributions were also reported for other laser-synthesized nanostructures. 28 The size of the NPs depended on the fluence received at the surface, which in turn depended on the temperature distribution produced on the target. Subsequent pulses produced different fluences on the surface; in fact, the energy per pulse sensed by the surface depended on the changing pulsed laser modifications of the solid surface as the ablation continued.
29 When the target was irradiated initially on the nonprocessed surface, the temperature distribution on the surface of the target was different from the same distribution produced upon the laser pulses on the processed target. For the initially stable pulses, the temperature profile was smooth. As a result, the energy density (the laser fluence) on the surface of the target was stable, which led to the narrow size distribution of NPs. During 10 min laser irradiation, the solid surface of the target absorbed the short laser pulses and melted. Because of the heat transfer from this molten layer of the target to the surrounding liquid layer, this liquid layer heated up. As a result of the high pressure of the adjacent layer that contacted the surface of the target, the microchannels were formed. After the formation of these channels during 150 min of noncontinuous irradiation on the surface (Figure 2f), numerous hot spots located at the channels altered the temperature profile and changed the fluence distribution, thus finally leading to the synthesis of NPs with more broad size distribution. Furthermore, in nanosecond laser ablation, multiple size distribution occurred because of the radiation-related target heating effect. In the nanosecond laser ablation, because of the electron−phonon relaxation process, 30 the lattice temperature started to increase. When the absorbed energy on the target was sufficiently high, the nanoclusters and atoms were formed. These ablated nanoclusters and atoms tended to agglomerate during or after the laser pulse irradiation. In nanosecond laser ablation, the target itself was heated by the laser-produced plasma and also by the radiation as a direct consequence of the heat diffusion time being much shorter than the laser pulse. This later condition resulted in slow cooling of particles and thus led to the agglomeration of these particles. 31 The radiation-related target heating effect was strong in the nanosecond and microsecond radiations, wherein the ablated crater was surrounded by the heat-affect zone, leading to the release of large droplets of the materials. 32 The FESEM image in combination with the EDX line scan on a single particle is shown in Figure 3d−f. EDX mappings of the same particle regarding the presence of O and Fe−Si were also shown in Figure 3d,f, respectively. The FeO x @SiO x core− shell structure is clearly distinguishable by the defined phase boundary formed with a decrease of silicon with a simultaneous increase of iron in the center. This trend of iron and silicon variations correspondent with the presence of oxygen, depicted in FESEM and EDX (line and mapping) results, confirms the iron oxide enrichment in the core and silicon oxide in the shell. The diameter of the core and the thickness of the shell for some core−shell structure of these particles have been shown in Figure S2c,d. Considering the FESEM images, the core size of these particles changed from 60 to 410 nm and the shell size (inner and outer shell together) changed from 20 to 150 nm.
For a more detailed analysis of the particles, HRTEM analysis was done on the prepared particles. While FESEM images could not recognize more than two layers (distinguishing the first and second shell aside), HRTEM images disclosed the presence of two shells and a core. In fact, the outer shell was composed of two different compositions revealed by electron diffraction of the particles, showing the formation of core−double-shell particles. Figure 4a shows a bright-field HRTEM image, confirming the formation of multiple size distributions of particles with a core−double-shell structure by 1064 nm laser ablation in ethanol. The measurement on TEM images indicated that the typical core and shell (inner and outer together) sizes were in the range of 5−205 and 2−145 nm, respectively. The multiple size distributions of particles illustrated using FESEM images were discussed (Figure S4a) in the previous section. A particle with a core−double-shell structure is shown in Figure 4b, in which the outer shell was formed with a uniform thickness, although the inner shell was not formed with an equal thickness. The dark core was FeOx with about 202 nm diameter and the mean thickness of the inner SiOx shell was 82 nm; however, the thinner bright outer shell showed a thickness of 3 nm. To identify the specific compositions of the as-synthesized core−double-shell particles, and also to evaluate the crystallinity, the inverse fast Fourier transform (FFT) was redrawn (Figure 4c) and SAED characterization from different areas of the sample was done (Figures 4d and S6a,b). The radial profile (circular integration of intensity in reciprocal space) of the diffraction patterns is shown in Figures 4e and S6c,d. For the core, a lattice fringe of 0.27 nm corresponding to the (104) planes of α-Fe2O3 (bottom-left inset in Figure 4c) was characterized, while for the inner shell, the measured lattice fringe was 0.25 nm (marked at the upper right of the inset in Figure 4c), which is related to the (110) planes of SiO2. The lattice fringe of 0.57 nm in the top-left inset in Figure 4c illustrated a characteristic interlayer spacing of 0.57 nm for the jarosite phase (JCPDS file no. 00-036-0427) corresponding to the (003) planes. The SAED patterns (Figures 4d and S6a,b) showed the crystalline planes of the α-Fe2O3, SiO2, and jarosite phases.
Optical Properties of Laser-Synthesized Particles and Nanosheets. The optical absorption properties of laser-synthesized particles and turquoise phase-ablated species at room temperature were studied by UV−vis−NIR spectroscopy (Figure 5a). As shown in Figure 5a, the spectrum of the suspension containing AlP nanosheets and Cu compounds showed broad absorption in the range of 250−300 nm with the maximum peak at 270 nm. Because of the presence of Cu-ablated species in the turquoise phase of the target after the laser ablation of the turquoise phase (Figures S5a and 3c), the absorption spectrum for this sample was intensified in the UV region.
Figure 5. (a) Normalized UV−vis−NIR absorption spectra of laser-synthesized nanostructures produced from laser ablation of the turquoise stone target in the ethanol environment, from bottom to top: ethanol as the baseline, turquoise phase-ablated species containing α-Fe2O3 and Cu compounds besides AlP nanosheets, and vein phase-ablated species containing α-Fe2O3@SiO2@jarosite core−double-shell NPs; (b) PL spectra, from bottom to top: the vein phase of the target before laser ablation, vein phase-ablated species after 10 min laser ablation of the vein phase in ethanol, the turquoise phase of the target before laser ablation, and turquoise phase-ablated species containing α-Fe2O3 and Cu compounds besides AlP nanosheets produced from 10 min laser ablation of the turquoise phase of the target.
The UV absorption of Cu compounds usually lies in the visible region and shifts to the UV region with quantum confinement effect enhancement. 33 In addition, UV absorption of the sample was observed because of the direct charge-transfer transition O2− 2p → Fe3+ 3d of α-Fe2O3 in the turquoise phase-ablated species. However, in Figure 5a, the corresponding spectrum recorded for the vein phase-ablated species is also seen; for the vein phase-ablated species containing the core−double-shell particles, there is a strong absorption region spanning 250−360 nm corresponding to the direct charge-transfer transition O2− 2p → Fe3+ 3d of α-Fe2O3. Another strong absorption in the visible region, due to an indirect Fe3+ 3d → 3d transition, was seen. 34 Furthermore, because of the presence of the jarosite phase, the UV absorption of the vein phase-ablated species sample was strong and broad. 35 Figure 5b depicts the PL spectra of both laser-generated nanostructures from the 10 min laser ablation of the turquoise mineral in ethanol (vein and turquoise) and the target (vein and turquoise phases) before ablation. All samples were excited at 270 nm at room temperature. The emission band centered at 370 nm (3.35 eV) is observed in the vein phase-ablated species prepared from laser ablation of the vein phase of the turquoise stone. This luminescence is due to electron transitions within α-Fe2O3 and can be observed even with the bare eye at room temperature. The emission band centered at 364 nm (3.41 eV) is observed in the turquoise phase-ablated species, which indicated copper compound emission in these ablated species. The presence of these compounds was previously illustrated using the FESEM image in combination with EDX analysis (Figures 3c and S5a). The presence of α-Fe2O3 can intensify the emission band in the UV region. The UV emission peak at 364 nm for the turquoise phase-ablated species was related to the recombination of electron−hole pairs in free excitons. 36 These PL properties of the suspension were in agreement with the PL emission of Cu compounds and also the α-Fe2O3 emission bands in the UV region. 36,37 The blue emission band of the sample (responsible for the blue color of these compounds) is seen, as shown in Figure S4, when this sample was excited at 430 nm at room temperature. The PL spectrum of the turquoise phase-ablated species in Figure S4 shows the emission band located at 462 nm (leading to the blue color emission) and at 492 nm, related to cyan color (a deep greenish-blue color) emission, and also a weak peak at 504 nm corresponding to the green color emission of these turquoise phase-ablated species. Intriguingly, the fwhm of the emission spectra for both nanostructures synthesized from laser ablation of the vein and the turquoise phase of the target (i.e., vein and turquoise phases after LP-PLA) is narrower than that of the non-ablated samples (i.e., vein and turquoise phases before LP-PLA). Basically, when the size of the nanostructures is less than 30 nm, the PL phenomenon is observed because of the quantum confinement effect. 38 For the bulk material, the PL spectra are not observable because of the forbidden optical transition. 39 In the current study, the effect of the size of the turquoise phase-ablated species and the core−double-shell NPs needs to be considered.
Furthermore, the narrower spectrum in the turquoise phase-ablated species and core− double-shell NPs indicates higher uniformity of these nanostructure emission energies and thus higher size uniformity of these nanostructures compared to the untreated turquoise and vein-phase structures.
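The photon-energy values quoted for the PL bands follow directly from the emission wavelengths; the minimal conversion check below uses the standard hc constant (≈1239.84 eV·nm) and is purely illustrative.

def wavelength_nm_to_ev(wavelength_nm):
    # E [eV] = h*c / lambda, with h*c ~ 1239.84 eV*nm
    return 1239.84 / wavelength_nm

print(round(wavelength_nm_to_ev(370), 2))  # -> 3.35 eV (vein phase-ablated species)
print(round(wavelength_nm_to_ev(364), 2))  # -> 3.41 eV (turquoise phase-ablated species)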
■ CONCLUSIONS
In conclusion, the presented work describes a suitable one-pot, one-step, "green" synthesis route for the selective laser synthesis of high-purity core−double-shell magnetic NPs and AlP nanosheets from the selective laser ablation of the mineral turquoise target in the ethanol environment. HRTEM and SAED analyses show that these core−double-shell NPs are composed of the hematite core and quartz inner shell and jarosite outer shell. Regarding the TEM images, the mean diameter of the hematite core is measured as 27 nm, while the mean quartz shell thickness is 9 nm. However, the nanosheet formation has been confirmed using FESEM images with a lateral size of 30−210 ± 6 nm. UV−vis−NIR absorption for core−double-shell NPs shows strong absorption in the UV region (250−360 nm) because of α-Fe 2 O 3 and jarosite and phases, while turquoise phase-ablated species have a peak located at 270 nm related to the presence of Cu compounds in the turquoise phase. The PL emission of core−double-shell NPs at 370 nm is due to the electronic transition within the iron oxide (α-Fe 2 O 3 ), while for the turquoise phase-ablated species, it is located at 364 nm and related to the presence of Cu compounds and α-Fe 2 O 3 phases in the turquoise phase.
LP-PLA Conditions and Sample Preparation.
A Ce:Nd:YAG laser with a fundamental wavelength of 1064 nm, the repetition rate of 10 Hz, and pulse duration of 10 ns with Gaussian beam distribution, pulse energy of 350 mJ/pulse (178.3 J/cm 2 ), with the irradiation time of 10 min were used for laser synthesis experiments. The turquoise target used in our experiment was obtained from the Nishapur (Iran). This target is composed of two phases of different colors on the surface, and the light brown veins were surrounded by the turquoise phase surface. The laser was made to irradiate from the top of the cylindrical container to the target at the bottom of the container to avoid reflections through the glass walls. Pure ethanol (Merck, 99%) filling up to 0.01 m height on the top of the target surface was used as a liquid environment for laser ablation experiments. The laser ablation experiments were performed in two steps. In the first step, the laser beam was directed on the vein of the turquoise surface and the related suspension was collected and dripped to Si wafer followed by drying for morphological and structural characterization. In the second step, the laser beam was conducted on the turquoise surface, and as performed in the first stage, the relevant solution was collected and dripped to the Si wafer followed by drying for morphological and structural characterization.
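The quoted fluence is consistent with the stated pulse energy for a sub-millimetre focal spot; the arithmetic below makes this explicit, noting that the spot size is inferred here rather than stated in the text.

import math

pulse_energy_J = 0.350      # 350 mJ per pulse
fluence_J_cm2 = 178.3       # reported fluence

spot_area_cm2 = pulse_energy_J / fluence_J_cm2                   # ~1.96e-3 cm^2
spot_diameter_mm = 2 * math.sqrt(spot_area_cm2 / math.pi) * 10   # cm -> mm
print(round(spot_diameter_mm, 2))  # ~0.5 mm implied focal spot diameter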
Instrumentation. UV−vis−NIR spectroscopy was carried out on the separated supernatant of the ablation product with a T80+ model spectrometer manufactured by PG Instruments Ltd. (United Kingdom). PL emission was recorded using PL spectroscopy (Gilden Photonics, Scotland), and XRD patterns were obtained using a Philips X'pert MPD powder diffractometer with Co Kα radiation (wavelength = 0.179 nm; the Netherlands). The morphology of the nanostructures was characterized using a field-emission scanning electron microscope (FESEM, MIRA3 TESCAN-XMU, Czech Republic). HRTEM images and SAED patterns were taken using an FEI Titan 80−300 LB (200 kV, USA).
EDX analysis of the turquoise and vein phases before laser ablation, EDX mapping of the turquoise phase after laser ablation, further FESEM images of the vein phase after laser ablation, synthesized products with size distribution histograms, SAED of core−double-shell NPs, and PL property of turquoise-ablated species (PDF) | 2020-02-20T09:09:03.165Z | 2020-02-14T00:00:00.000 | {
"year": 2020,
"sha1": "2ec343f754cac5d2695b7c3be3653b58c0041e0f",
"oa_license": "acs-specific: authorchoice/editors choice usage agreement",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.9b03882",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "855d42fa7533f56e8b0a7276444e7e2075b1db17",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
264933735 | pes2o/s2orc | v3-fos-license | Fecal short-chain fatty acids in non-constipated irritable bowel syndrome: a potential clinically relevant stratification factor based on catabotyping analysis
ABSTRACT The gut microbiota is believed to be a critical factor in the pathogenesis of IBS, and its metabolic byproducts, such as short-chain fatty acids (SCFAs), are known to influence gut function and host health. Despite this, the precise role of SCFAs in IBS remains a topic of debate. In this study, we examined the bacterial community structure by 16S rRNA gene profiling and SCFA levels by UPLC-MS/MS in fecal samples from healthy controls (HC; n = 100) and non-constipated patients (IBS-D and IBS-M; NC-IBS; n = 240) enrolled in 19 hospitals in Italy. Our findings suggest a significant difference between the fecal microbiomes of NC-IBS patients and HC subjects, with HC exhibiting higher intra-sample biodiversity. Furthermore, we were able to classify non-constipated patients into two distinct subgroups based on their fecal SCFA levels (fecal catabotype “high” and “low”), each characterized by unique taxonomic bacterial signatures. Our results suggest that the fecal catabotype with higher SCFA levels may represent a distinct clinical phenotype of IBS that could have implications for its diagnosis and treatment. This study provides a new perspective on the intricate relationship between the gut microbiome and bowel symptoms in IBS, underscoring the importance of personalized strategies for its management.
Introduction
Irritable bowel syndrome (IBS) is a common functional gastrointestinal disorder that affects a large proportion of the population worldwide, with an estimated prevalence of 7-21%, depending on the geographical location and diagnostic criteria used 1 . IBS is characterized by recurrent abdominal pain associated with altered bowel habits, including diarrhea (IBS-D), constipation (IBS-C), or a mixture of both (IBS-M), which can significantly impair the quality of life of affected individuals. Despite its high prevalence and impact, the pathophysiology of IBS still needs to be better understood, and the available treatments are effective only for a limited proportion of patients 2,3 .
A growing body of evidence suggests that alterations in the gut microbiota and its metabolic products may play a crucial role in the pathogenesis of IBS 4 . The gut microbiota is a complex community of microorganisms (prevalently bacteria) that colonize the human gastrointestinal tract, interacting with the host immune system, modulating gut motility and secretion, and participating in the digestion and absorption of nutrients 5 . The main metabolic products of the gut microbiota are short-chain fatty acids (SCFAs), a group of organic acids that are mainly produced by the bacterial fermentation of dietary fiber and other carbohydrates in the colon. In particular, the most abundant SCFAs in the human bowel are the bacterial catabolites acetate (C2), propionate (C3), and butyrate (C4), which are reported to reach up to hundreds of millimoles per liter in the proximal colon and cecum 6 . In addition, other organic acids deriving from the bacterial catabolism detected in the human gut are valerate (C5), lactate, and succinate 7 .
SCFAs are essential for maintaining gut health and function since they provide energy to the colonic epithelium, stimulate the production of mucus and antimicrobial peptides, and regulate gut motility and inflammation 8 . Moreover, SCFAs have been shown to have systemic effects beyond the gut, including regulating glucose and lipid metabolism and the modulation of immune function and gut-brain communication 9,10 . Despite the demonstrated beneficial effects of intestinal SCFAs on human physiology and metabolism, the role of these microbial metabolites in IBS remains controversial. Some studies have reported lower levels of SCFAs in IBS patients compared to healthy controls, while others have reported higher levels or no significant differences 11 . This heterogeneity in results may be due to the different subtypes of IBS, as well as the differences in dietary habits, medication use, and other confounding factors that can influence gut microbiota and SCFA production. Recently, we reported chronically elevated SCFA levels in the feces of a small cohort of IBS patients with diarrhea (n = 11; IBS-D) compared to IBS patients with constipation (n = 12; IBS-C) and healthy controls (n = 25) over a period of 16 weeks 12 . Similarly, a meta-analysis by Sun et al. evidenced a significant increase of fecal butyrate in IBS-D patients compared to healthy controls 13 . Nonetheless, contradicting results are also reported, such as reduced SCFAs 14 , and improved gastrointestinal symptoms upon oral intake of butyrate in IBS-D patients 15,16 .
To investigate the role of the intestinal microbiome in patients with IBS, we studied fecal short-chain fatty acid (SCFA) levels and bacterial community structure in a cohort of 240 non-constipated patients (IBS-D and IBS-M; NC-IBS) recruited from 19 hospitals in Italy in comparison to healthy control (HC) subjects. Our findings suggest that NC-IBS patients can be stratified into distinct subgroups based on fecal SCFA levels (catabotypes), which are characterized by specific taxonomical bacterial signatures and clinical profiles.
The fecal microbiome of NC-IBS patients and HC subjects differs significantly
Metataxonomic analysis evidenced several significant differences between HC (n = 100) and NC-IBS patients (n = 235; Figure 1). Intra-sample biodiversity (α-diversity) was found to be significantly higher in HC as measured with four different indexes: observed features (measure of richness), Shannon's entropy (considering both evenness and richness), Pielou's (measure of evenness), and Faith's phylogenetic diversity (PD) (Figure 2). Conversely, we did not find significant differences in β-diversity between IBS and HC as determined through both unweighted and weighted UniFrac (not shown). Nonetheless, it was possible to separate NC-IBS from HC samples by performing a PLS discriminant analysis (PLSDA) based on the CLR-transformed abundance of bacterial taxa in fecal samples (outcome HC vs. NC-IBS: 0.96; accuracy: 0.67; Figure 3; Supplementary Figure S1).
The analysis of metataxonomic data was also conducted while keeping the IBS-D and IBS-M patients separate. The results revealed no significant differences between these two subgroups, both in alpha-diversity and in taxonomic community structure (see Supplementary Figure S2), thus supporting the decision to analyze these two subgroups together. Several taxa were found to be differently represented between NC-IBS and HC samples. Specifically, the phylum Bacteroidetes, the Firmicutes families Ruminococcaceae and Christensenellaceae, and the order Erysipelotrichales were underrepresented in NC-IBS samples, as revealed by LEfSe analysis (Figure 4a; Supplementary Figure S3). Conversely, we observed an overrepresentation of the Actinobacteria genera Rothia and Collinsella (in particular, the species C. aerofaciens), and several Firmicutes taxa, including the genera Clostridium (family Clostridiaceae) and [Ruminococcus] (now reclassified as Mediterraneibacter; family Lachnospiraceae), and the species [Ruminococcus] gnavus in NC-IBS samples (Figure 4a; Supplementary Figure S3).
Due to the compositional nature of 16S rRNA gene profiling data [17], we inferred the bacterial taxa that were differently represented between NC-IBS and HC samples after CLR-transformation of taxonomic abundances.The obtained results confirmed several findings obtained with the LEfSe analysis, including the overrepresentation of the species C. aerofaciens and [R.] gnavus in NC-IBS samples (Figure 4b).Overall, at the higher taxonomic levels, the analysis of CLR-transformed data showed that NC-IBS patients had increased Actinobacteria and Firmicutes (order Clostridiales), and decreased Bacteroidales compared to HC.At the family level, the main bacteria overrepresented in the IBS patients were Actinomycetaceae, Clostridiaceae, Lachnospiraceae, Brucellaceae, and Enterobacteriaceae.Additionally, within Bacteroidetes, we found that the species Bacteroides uniformis was more abundant in HC, whereas Bacteroides plebeius was more abundant in NC-IBS (Figure 4b).
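Because 16S rRNA gene profiles are compositional, the differential-abundance testing described above was run on CLR-transformed counts. The short Python sketch below illustrates the transformation on a made-up count table; the pseudocount handling is an assumption for illustration, since zero counts cannot be log-transformed directly, and it is not the study's exact pipeline.

import numpy as np

def clr_transform(counts, pseudocount=0.5):
    """Centered log-ratio transform of a samples x taxa count matrix."""
    x = np.asarray(counts, dtype=float) + pseudocount      # avoid log(0)
    log_x = np.log(x)
    return log_x - log_x.mean(axis=1, keepdims=True)       # subtract per-sample mean log (geometric mean in log space)

counts = np.array([[120, 30, 0, 850],
                   [ 40, 10, 5, 945]])
print(clr_transform(counts).round(2))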
Subsequently, we quantified the concentration of the primary organic acids derived from bacterial catabolism in the human gut, including acetate, butyrate, propionate, valerate, lactate, and succinate, in fecal samples collected from 100 HC and 240 NC-IBS individuals (Figure 1). Acetate and butyrate were the most abundant organic acids detected in most samples, with propionate being the most abundant in ten NC-IBS patients and one HC, and succinate being the most abundant in four NC-IBS patients and one HC. Valerate and lactate were the most abundant organic acids in four and one NC-IBS patients, respectively. NC-IBS patients had a significantly lower abundance of butyrate (median concentrations of 3.9 vs. 3.1 mmol/100 g of feces; P = 0.007 according to the Mann-Whitney test), succinate (0.04 vs. 0.00 mmol/100 g of feces; P = 3.8 × 10 -7 ), and valerate (1.5 vs. 1.2 mmol/100 g of feces; P = 4.3 × 10 -4 ) compared to HC subjects (Table 1; Figure 5). The propionate/butyrate ratio, which was proposed as a potential biomarker for IBS 17,18 , was not significantly different between IBS and HC samples. Furthermore, we found no significant differences in fecal organic acids between IBS-D and IBS-M subtypes (Supplementary Figure S2).
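The group comparisons reported in Table 1 and Figure 5 rely on the two-sided Mann-Whitney U test. A minimal Python sketch of such a comparison is given below; the simulated concentrations are placeholders with the same medians as the reported ones, not the study's measurements.

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
butyrate_hc = rng.lognormal(mean=np.log(3.9), sigma=0.5, size=100)    # mmol/100 g feces, simulated
butyrate_ibs = rng.lognormal(mean=np.log(3.1), sigma=0.7, size=240)

stat, p = mannwhitneyu(butyrate_hc, butyrate_ibs, alternative="two-sided")
print(f"U = {stat:.0f}, P = {p:.3g}, median HC = {np.median(butyrate_hc):.2f}, "
      f"median NC-IBS = {np.median(butyrate_ibs):.2f}")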
Overall, our results indicate that NC-IBS patients exhibit significant alterations in their fecal microbiome compared to healthy individuals.Specifically, our data suggest a general expansion of Actinobacteria, particularly the species C. aerofaciens, as well as a reduction in Bacteroidetes, butyrate, valerate, and succinate in the fecal microbiome of NC-IBS patients.
Stratification based on the fecal levels of SCFAs revealed two distinct microbiome clusters in NC-IBS
We performed an unsupervised clustering analysis of the 240 NC-IBS patients under investigation using principal coordinate analysis (PCoA) based on the concentrations of six microbial catabolites in their fecal samples: acetate, butyrate, propionate, lactate, succinate, and valerate (catabotyping analysis).Based on the highest Silhouette coefficient (SI = 0.27) and variance ratio criterion (CH index = 77), we were able to classify the IBS samples into two distinct clusters, which we named fecal catabotype "high" (FC-H; n = 128) and "low" (FC-L; n = 112) (Figure 6a).The FC-H group was characterized by significantly higher concentrations of all six metabolites, except for succinate, compared to FC-L.Additionally, the fecal levels of acetate, butyrate, and propionate were lower in HC than in FC-H, while succinate was higher.Notably, HC subjects had significantly higher fecal concentrations of all metabolites compared to FC-L NC-IBS patients (Table 1).Moreover, we observed that the propionate/butyrate ratio was higher in FC-L (median value of 0.48) compared to HC (0.34; P = 0.011) and FC-H (0.38; P = 0.066).These findings suggest that NC-IBS patients can be categorized into two clusters characterized by higher and lower fecal SCFA levels than HC, as evidenced by the spider chart based on the relative abundances of bacterial catabolites (Figure 6b).Furthermore, the identified metabotypes do not overlap with IBS subtypes, as it emerged that IBS-D samples distributed in 60% and 40% between FC-H and FC-L, respectively, while IBS-M samples distributed in 45% and 55% between FC-H and FC-L, respectively.Since intestinal SCFA levels directly result from microbial metabolism in the gut, we compared the fecal bacterial community structure of the IBS subgroups FC-H and FC-L.Statistical analysis of CLR-transformed abundances revealed 73 bacterial taxa with different representations, including 54 OTUs (Figure 7).Only 12 taxa were more abundant in FC-L than FC-H, including OTUs ascribed to the species Bifidobacterium longum (phylum Actinobacteria), Lentihominibacter faecis, Anaerostipes hadrus, and an undefined Oscillospira species (order Clostridiales), and Akkermansia muciniphila (Figure 7).Conversely, among the most abundant taxa overrepresented in FC-H, we found several bacteria known to ferment fiber to produce SCFAs, such as Anaerobutyricum soehngenii, Fusicatenibacter saccharivorans, Dorea phocaeensis/formicigenerans, Faecalibacterium duncaniae, and Faecalibacterium prausnitzii (Figure 7).
Comparison with HC showed that NC-IBS catabotypes had lower α-diversity (Supplementary Figure S4).In addition, we found that 102 bacterial taxa (including 62 OTUs) significantly distinguished HC from FC-L samples.In contrast, the number of differently represented bacterial taxa between HC and FC-H was 276 (including 206 OTUs) (Supplementary Figure S5).In general, most of the significantly different bacterial taxa were overrepresented in the IBS catabotypes compared to HC samples (86 and 263 significantly different taxa respectively increased in FC-L and FC-H compared to HC).Specifically, both FC-L and FC-H had significantly more Firmicutes than HC, mostly due to the overrepresentation of the order Clostridiales and the family Lachnospiraceae within it.Also, Actinobacteria (particularly the species Collinsella aerofaciens) showed higher abundance in both IBS bacterial catabotypes than HC.On the contrary, dominant members of the phylum Bacteroidetes were depleted in the IBS groups compared to HC, such as the order Bacteroidales and the species Bacteroides uniformis reduced in FC-L, and the family Rikenellaceae depleted in FC-H.Notably, the butyrate producer F. prausnitzii was higher in FC-H and lower in FC-L compared to HC (Supplementary Figure S5).
Overall, these data show that IBS patients can be stratified according to the fecal level of SCFAs into two groups (catabotypes), which the diverse representation of several bacterial taxa can distinguish.In particular, the catabotype FC-H, which had significantly higher concentrations of SCFAs, was characterized by the overrepresentation of many taxonomic units (including numerous fiber degraders and SCFA producers) compared to both HC and the NC-IBS group FC-L.Higher and lower abundances for each taxon are reported with a red and cyan background, respectively.The taxonomic lineage of each taxon is shown: p, phylum; c, class; o, order; f, family; g, genus; s, species.Taxonomic names written in blue were determined through a manual BLASTN search in GenBank using the sequence of the corresponding reads.
Fecal catabotypes may be clinically relevant
Subsequently, we compared the IBS subgroups FC-H and FC-L in relation to the clinical parameters registered two weeks before fecal collection. We considered abdominal pain (Numeric Pain Rating Scale, NRS scale), fecal type, and stool frequency. Notably, patients in the catabotype FC-H (n = 120) had higher levels of abdominal pain (normal distribution: mean value of 3.8 vs. 3.2; P < .01) and fecal type (non-normal distribution: median value of 4.9 vs. 4.4; P < .0001) than those in the FC-L catabotype (n = 99) (Table 2a).
Subsequently, we performed a Kendall's correlation analysis, which showed a significant positive association of abdominal pain toward valerate and propionate, and succinate toward evacuation number, but only in the FC-H catabotype (Figure 8).In addition, fecal type and butyrate correlated positively in both catabotypes, whereas fecal type and valerate correlated only in the catabotype FC-L (Figure 8).
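The correlation analysis summarized in Figure 8 is based on Kendall's τ. A hedged Python sketch of one such correlation is shown below; the values are simulated for illustration only and the strength of the association is arbitrary, not taken from the study.

import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(1)
valerate = rng.gamma(shape=2.0, scale=0.8, size=120)                   # mmol/100 g feces, simulated
pain_nrs = np.clip(np.round(2 + 0.8 * valerate + rng.normal(0, 1.5, 120)), 0, 10)

tau, p = kendalltau(valerate, pain_nrs)
print(f"Kendall tau = {tau:.2f}, P = {p:.3g}")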
Finally, we analyzed the clinical data of the PROBE-IBS/2 cohort of IBS patients for 16 weeks during which they followed a conventional therapeutic protocol for IBS based on dietary and lifestyle recommendations (n = 202).Interestingly, the patients in the FC-H group responded better than those in FC-L.In fact, over a period of 16 weeks, FC-H patients reported a significant reduction of abdominal pain and fecal type, whereas only the stool frequency decreased in the FC-L group (Table 2b).
In particular, the initial difference between FC-L and FC-H in abdominal pain was no longer significant after 16 weeks due to a better responsiveness of the FC-H patients during the first 4 weeks (Figure 9).
Overall, these data showed that the fecal catabotypes FC-H and FC-L are characterized by different IBS symptom severity and suggest that the direct association between fecal SCFA and IBS symptoms can be stronger in the FC-H catabotype.
Discussion
Numerous studies investigated the role of the intestinal microbial ecosystem in IBS, providing several different hypotheses on the potential microbial signatures for this condition 19 . The fact that IBS is a remarkably heterogeneous condition, both at the etiological level and in its symptomatic manifestations, plausibly justifies the contradicting findings and suggests the need to distinguish between potentially different subtypes within IBS. In this context, bowel habits represent the most commonly used criterion to categorize IBS patients. In support of this discrimination strategy, several studies highlighted significant differences between IBS patients with diarrhea compared to those with constipation, which seem to be particularly clear at the level of the intestinal microbiome in terms of both specific taxa 20 and microbial metabolites such as SCFAs 21 . Therefore, in order to reduce the existing variability, in this study we restricted our analysis to IBS patients with diarrhea (IBS-D and IBS-M), excluding constipation-predominant IBS 22,23 . Two factors substantiate the inclusion of mixed-type IBS within a group with IBS-D: (i) existing scientific literature demonstrating similarities in characteristics and responses to clinical treatments among patients with IBS lacking constipation 22,23 ; (ii) the absence of significant differences in the taxonomic community structure of the fecal microbiome and fecal levels of SCFAs between IBS-D and IBS-M, as revealed in this study.
The comparison of the fecal bacterial communities of a cohort of 235 NC-IBS patients with 100 age-and sex-matched healthy controls highlighted some significant differences that already emerged in the literature, such as a significantly lower alphadiversity 20 and an expansion of the Actinobacteria phylum 24,25 .Furthermore, here we observed the increase in Firmicutes (particularly the order Clostridiales) and the depletion of Bacteroidetes (particularly the order Bacteroidales) in NC-IBS compared to HC, as reported in IBS also by two systematic reviews 26,27 , supporting the notion that these taxa may represent valid microbial signatures in the fecal microbiota of patients with IBS.
Available research data suggest that the primary bacterial metabolites of the human gut, i.e., the SCFAs, may play a crucial role in IBS pathogenesis. In particular, several mechanistic studies suggest the causal involvement of intestinal SCFAs in the pathophysiology of diarrhea-predominant IBS as a result of the direct effect of these organic acids on serotonin synthesis, colon motility, and visceral hypersensitivity 14,[28][29][30][31] . As a result, it has been suggested that regulating colonic SCFA levels could serve as a potential therapeutic target for IBS 11,17,32 . With this in mind, we quantified the main bacterial catabolites in the feces of the NC-IBS patients. The analysis of the obtained data revealed a lower concentration of butyrate, succinate, and valerate in their fecal samples compared to the HC subjects. Elevated succinate levels in the gut were associated with inflammation 33 , and were reported in IBS-D patients with symptom exacerbation 34 and in IBS-D compared to IBS-C 35 ; nonetheless, little is known about succinate concentration in IBS compared to the healthy population. Valerate, shown to reduce autoimmunity by enhancing IL-10 and suppressing Th17 cells 36 , was reported to be positively associated with fecal type and significantly higher in patients with IBS-D than in those with IBS-C 12,13 . Fecal valerate levels were also shown to be correlated with visceral sensation (abdominal pain or visceral hypersensitivity) in IBS-D patients, but a significant difference compared to healthy control subjects was not found 17,37 . A much broader, nonetheless partly contradicting, literature is available for butyrate. In contrast to the data presented here, in a previous publication we reported that fecal butyrate levels, calculated as the median values of five measurements conducted over 16 weeks, were significantly higher in a smaller group of IBS-D patients (n = 11) than in IBS-C subjects (n = 12) and healthy controls (n = 23) 12 . Also, in the meta-analysis by Sun et al., fecal butyrate was shown to be higher in patients with IBS-D than in HCs 13 . Nonetheless, in other studies, fecal butyrate levels in IBS-D were reported to be not dissimilar from HC 18,38 .
The available literature suggests that the subtype, particularly IBS-D versus IBS-C, may partly explain the ambiguous results regarding the differences in colonic levels of butyrate and other SCFAs between IBS patients and the healthy population.However, our findings demonstrate a wider variability in fecal SCFA levels in NC-IBS compared to HC (for instance, a σ 2 variance of 20.5 in NC-IBS versus 6.7 in HC samples for butyrate; Figure 5).This let us speculate about the possibility of different subgroups within the NC-IBS patient population, for which SCFAs may have variable clinical relevance.To test this hypothesis, we analyzed the data of fecal SCFA levels with an algorithm already adopted for the unsupervised stratification of human individuals based on bacterial metataxonomic data of their fecal microbiota, known as "enterotyping" 39 .The enterotype concept is a controversial theory 40 suggesting that the gut microbiome can be classified into a small number of distinct community types based on the abundance of specific bacterial groups.These enterotypes have been proposed to be relatively stable and to play a role in the host's health and disease risk, therefore potentially relevant in clinical practice 41 .In analogy with the enterotype concept, here we aimed to assess the hypothesis that IBS patients could be stratified in catabotypes, i.e., distinct groups of subjects based on the fecal levels of the main organic acid produced by the microbial catabolism in the human colon.The application of catabotyping permitted the stratification of the 240 IBS patients here investigated into two groups with sharply different levels of fecal SCFAs, which for one group were significantly higher (catabotype FC-H) and for the other (catabotype FC-L) significantly lower than those in HC subjects.Notably, we found that the catabotype FC-H showed higher levels of abdominal pain and fecal type compared to FC-L suggesting that an association between IBS symptoms and catabotypes may exist.Nonetheless, we cannot conclusively state the direction of the observed association between SCFAs and IBS symptoms since it can either be possible that alterations in gut motility and sensitivity lead to changes in SCFA production or vice versa.According to available literature, discrepancies are evident regarding the possible association between SCFAs and bowel symptoms.Ringel-Kulka et al. reported that total fecal SCFA levels correlated negatively with colon transit time and positively with stool frequency 18 .According to Tana et al., high acetate and propionate levels are associated with significantly worse abdominal symptoms 42 .Nonetheless, Wang et al. reported that SCFA signatures were not consistently associated with IBS severity over time 43 .Furthermore, administering SCFAs in the ileum increased ileal motility and abdominal pain in humans 44 and promoted visceral hypersensitivity in rats 45,46 .However, in the distal colon, butyrate was shown to increase and propionate to decrease the rate of colonic propulsion in guinea pigs 47 .Exogenous administration of butyrate was also shown to increase colon motility in a mouse model of IBS 28 .Nonetheless, recently, a prospective multicenter clinical trial involving 3000 nonhospitalized IBS patients showed that the oral administration of sodium butyrate may effectively relieve the IBS symptoms 16 .
In this study, we demonstrated that the fecal microbiota of individuals with the FC-H catabotype, who have a greater abundance of SCFAs in their feces, differed significantly (more than FC-L) from those of HC, precisely due to the overrepresentation of multiple bacterial taxa.This finding suggests that the FC-H catabotype may represent a subset of IBS patients with a higher degree of dysbiosis.Recently, Vervier et al. used shotgun metagenomics to analyze fecal samples from 56 IBS-D and IBS-M patients before and after a low-FODMAP diet.Through unsupervised clustering, they identified two fecal microbiota subtypes with different clinical responses to the dietary therapy.In particular, they reported a significantly enhanced responsiveness in the "pathogenic-like" subtype (IBS P ), enriched in Firmicutes and genes for carbohydrate metabolism 23 .Our data suggest that the FC-H catabotype may share similarities with the IBS P subtype proposed by Vervier et al.
Therefore, we hypothesize that the FC-H catabotype may represent a subgroup of NC-IBS patients who could benefit the most from a low-FODMAP diet or other dietary interventions targeting the fiber-fermenting microbial components of the gut microbiota.In substantiation of this hypothesis, our study findings indicate that adherence to the dietary guidelines provided to patients in the PROBE-IBS/2 trial, which included the avoidance of fibrous, large-leaf vegetables, legumes, fruits with high fiber content, and whole grains (further information on the recommendations on lifestyle and dietary habits are available in the Supplementary Material file), yielded a more substantial amelioration in abdominal pain and fecal consistency within the FC-H subgroup.Contextually, Eetemadi and Tagkopoulos 48 analyzed metataxonomic data from six different low-FODMAP intervention trials and concluded that IBS patients who are likely to respond better to a low-FODMAP diet are characterized by a higher abundance of putative genes related to SCFA metabolism pathways 48 .Likewise, Chumpitazi and colleagues documented that in children with IBS who responded positively to the FODMAP diet, there were elevated levels of particular microbial groups, such as Ruminococcaceae and Faecalibacterium prausnitzii, which have been linked to higher rates of saccharolytic metabolism 49 .Furthermore, our correlation analysis revealed a significant positive association between butyrate and fecal type, and between propionate and valerate levels and abdominal pain exclusively in the FC-H catabotype.This observation supports the hypothesis that only in patients with higher concentration, fecal SCFAs may play a role in IBS pathogenesis and could represent a valid target to enhance diagnosis and guide toward potentially more effective treatment options.Specifically, we speculate that higher levels of fecal organic acids in NC-IBS patients may indicate a greater likelihood of success for therapeutic interventions targeting the microbiome.In addition to the low-FODMAP diet, which has significant limitations in long-term sustainability, this subset of IBS patients may represent the population most likely to derive substantial benefits from treatments such as antibiotics (e.g., as rifaximin), interventions involving probiotics or live biotherapeutics (e.g., Blautia hydrogenotrophica, 50 , and fecal microbiota transplantation (FMT), the efficacy of which in IBS is controversial 51 .Concerning FMT, we can postulate that in IBS patients where the colonic microbial ecosystem is a major contributor to symptoms (as we hypothesized here, particularly in FC-H patients), FMT may offer the most significant and potentially enduring advantages.
In conclusion, this study investigated the role of SCFAs in IBS, and specifically explored the presence of a subgroup of non-constipated patients characterized by higher levels of SCFAs (that we named catabotype FC-H).We hypothesized that the FC-H subgroup may represent a distinct clinical phenotype of IBS with specific clinical features.In order to understand if the identification of this catabotype may have implications for the diagnosis and treatment of IBS, further research efforts should be first directed toward the development of a standardized analytical protocol and the identification of a threshold for fecal organic acids that can help establish whether a non-constipated IBS patient falls into this category of patients.
Overall, our study offers novel perspectives on the intricate relationship between the gut microbiome and bowel symptoms in IBS, emphasizing the importance of tailored strategies for its treatment.
Study participants and clinical assessment
The present study included a total of 240 patients diagnosed with non-constipated IBS [i.e., IBS with predominant diarrhea (IBS-D) or IBS with mixed bowel habits (IBS-M); NC-IBS] according to Rome IV criteria, recruited in 19 Italian hospitals. These subjects were from the cohort of the PROBE-IBS/2 trial (ClinicalTrials.gov Identifier NCT03449628). In addition, we included 100 healthy controls matching the IBS patients of the PROBE-IBS/2 cohort in terms of age and sex. The characteristics of the study populations are summarized in Table 3. The standard 11-point numeric rating scale (ranging from 0, indicating no pain, to 10, indicating the worst possible pain) was used to measure abdominal pain in IBS patients at two-week intervals. Finally, we assessed stool frequency and form using the Bristol stool chart. The patients' abdominal pain and bowel habits were monitored over 16 weeks. At recruitment, patients received the recommendations on lifestyle and eating habits described in the supplementary material file (Supplementary Methods).
Differently abundant taxa between groups were identified using linear discriminant analysis (LDA) combined with effect size (LEfSe) algorithm 52; a cutoff value of LDA score (log10) above 2.0 was chosen.In addition, significantly different taxa were also identified by Mann-Whitney test on read abundances that underwent centered log ratio (CLR) transformation.Stratification based on catabotyping analysis was carried out based on the fecal concentrations of acetate, butyrate, propionate, lactate, succinate, and valerate using JSD distance and the Partitioning around Medoids (PAM) algorithm.Principal Coordinate Analysis with centroid was used to represent the samples graphically.For paired/unpaired matches with all the other variables considered in the study, paired/unpaired Student's t-test or Mann-Whitney U and Wilcoxon Signed-Ranks tests were adopted depending on normal distribution assessed through the Shapiro-Francia test.Correlation analyses were carried out calculating Kendall's τ rank correlation coefficient.
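The catabotyping step described above (Jensen-Shannon distances between per-sample organic-acid profiles, PAM clustering, silhouette-based choice of the cluster number) can be sketched in Python as follows. This is an illustrative reimplementation with simulated data, and the use of scikit-learn-extra's KMedoids is an assumption; any PAM implementation would serve equally well.

import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.metrics import silhouette_score
from sklearn_extra.cluster import KMedoids   # assumed dependency providing PAM

rng = np.random.default_rng(2)
scfa = rng.gamma(shape=2.0, scale=1.0, size=(240, 6))        # acetate..valerate, simulated
profiles = scfa / scfa.sum(axis=1, keepdims=True)            # relative profiles for JSD

n = profiles.shape[0]
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = jensenshannon(profiles[i], profiles[j])

for k in (2, 3, 4):
    labels = KMedoids(n_clusters=k, metric="precomputed", method="pam", random_state=0).fit_predict(dist)
    print(k, round(silhouette_score(dist, labels, metric="precomputed"), 3))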
Figure 1 .
Figure 1.Diagram reporting the number of patients for which specific indicated data are available.(a), quantification of organic acids in feces; (b), metataxonomics of fecal samples through 16S rRNA gene profiling; (c), abdominal pain and bowel habits data.
Figure 3 .
Figure 3. PLS discriminant analysis (PLSDA) with prediction background for non-constipated IBS patients (NC-IBS) and healthy control subjects (HC).The Receiver Operating Characteristic (ROC) curve of the PLSDA model is shown in supplementary Figure S1.
Figure 4 .
Figure 4. Bacterial taxa in fecal samples exhibiting a significantly different abundance between individuals with non-constipated irritable bowel syndrome (NC-IBS) and healthy subjects (HC).(a) cladogram based on LEfSe analysis (LDA scores for the single taxa are reported in Supplementary FigureS3).(b) significantly different taxa determined through the Mann-Whitney test carried out with CLRtransformed bacterial abundances.Higher and lower abundances for each taxon are reported with a red and cyan background, respectively.The black and yellow heatmap represents the mean CLR-transformed abundances of the reported taxonomic units.The taxonomic lineage of each taxon is shown: p, phylum; c, class; o, order; f, family; g, genus; s, species.
Figure 5 .
Figure 5. Organic acids in fecal samples exhibiting a significantly different abundance between individuals with non-constipated irritable bowel syndrome (NC-IBS) and healthy subjects (HC).The statistical analysis was performed using the Mann-Whitney test and indicated a statistically significant difference compared to HC samples.**, P < .01;***, P < .001;****, P < .0001.Red numbers refer to the σ 2 variance of the corresponding data.
Figure 6 .
Figure 6.Stratification analysis of non-constipated IBS patients (NC-IBS) based on the fecal levels of the organic acids.(a) principalcoordinate analysis (PCoA) generated through catabotyping based on the fecal concentrations of acetate, butyrate, propionate, lactate, succinate, and valerate.(b) spider chart based on the abundances of bacterial catabolites expressed as relative abundance.
Figure 7 .
Figure 7. Bacterial taxa in fecal samples of NC-IBS patients exhibiting a significantly different abundance between catabotypes FC-H and FC-L.Significantly different taxa have been determined through the Mann-Whitney test carried out with CLR-transformed bacterial abundances.The black-yellow heatmap represents the mean CLR-transformed abundances of the reported taxonomic units.Higher and lower abundances for each taxon are reported with a red and cyan background, respectively.The taxonomic lineage of each taxon is shown: p, phylum; c, class; o, order; f, family; g, genus; s, species.Taxonomic names written in blue were determined through a manual BLASTN search in GenBank using the sequence of the corresponding reads.
Figure 9 .
Figure 9. Progression of abdominal pain over 16 weeks in the PROBE-IBS/2 cohort of non-constipated IBS patients stratified in the fecal catabotype FC-L and FC-H.Each point refers to the median value of the numeric rating scale (NRS) for pain assessment.Statistics is according to the Mann-Whitney test between FC-L and FC-H at each specific time point.The numbers in gray close to each point refer to the number of patients.**, P < .01;*, P < .05;n.s., not significant.
Table 1 .
Concentration of organic acids in fecal samples collected from healthy subjects (HC) and non-constipated patients (NC-IBS). Data for NC-IBS samples are also shown after stratification in catabotypes FC-H and FC-L. All data are reported in mmol/100 g of feces. Prp/but, ratio between the concentrations of propionate and butyrate. Statistics is according to the Mann-Whitney test and indicates a statistically significant difference compared to HC samples. *, P < .05; **, P < .01; ***, P < .001; ****, P < .0001.
Table 2 .
Clinical symptoms in the non-constipated IBS patients in the PROBE-IBS/2 cohort of the fecal catabotypes FC-L and FC-H.(a) comparison of the FC-H and FC-L catabotypes at baseline.Statistics is according to the Mann-Whitney U test (fecal type and evacuation number) or the unpaired t-test (abdominal pain), depending on the normal distribution of data.(b) comparison between the initial (baseline) and final (after 16 weeks) visit.Statistics is according to the Wilcoxon signed-rank test (fecal type and evacuation number) or the unpaired t-test (abdominal pain) depending on the normal distribution of data.*, P < .05;**, P < .01.
| 2023-11-03T06:18:45.135Z | 2023-11-01T00:00:00.000 | {
"year": 2023,
"sha1": "cf8b4976ecb52cbcb5415b450c13588660359d43",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/19490976.2023.2274128?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "c7b312549c7b7849b130835dff93d023defe8e29",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233797509 | pes2o/s2orc | v3-fos-license | EMC of Wideband Power Sources
In this paper, the results of EMC tests of two wideband power sources, a PWM-based power source and an audio power amplifier, are discussed. They are intended to supply the measuring system developed for evaluation of the wideband transformation accuracy of instrument transformers. Therefore, it is necessary to detect possible interference caused by the power supply that may disturb the operation of this sensitive system and decrease its accuracy. The tests concern the conducted emission in the frequency range from 150 kHz to 30 MHz and the radiated emission in the frequency range from 30 MHz to 1 GHz. Moreover, the level of conducted disturbances generated into the supplying current in the frequency range from 100 Hz to 5 kHz is measured, and the immunity of both wideband power sources to low-frequency conducted disturbances in the supplying voltage and current is tested. Then, the voltage gain error and phase shift of the output voltage are measured. The EMC tests of both power sources show a lack of compliance with the requirements of the standard IEC 61326-1. However, in-system application of the audio power amplifier is possible if the required increased immunity of the measuring system to conducted emission is ensured.
Introduction
Wideband power sources are required in many applications, but one of the most demanding of them is supplying the measuring systems designed for testing of the inductive instrument transformers [1][2][3][4][5][6][7][8][9][10][11][12][13][14]. This is due to the fact that, to meet all the electromagnetic compatibility (EMC) requirements, the power source must not interfere with the operation of the sensitive measuring system, and simultaneously a high output power is required. This results from the nonlinearity of the magnetization characteristics of the magnetic core of the inductive instrument transformers, and the necessity to test them in their rated operating conditions [12,13]. Therefore, during testing of the wideband accuracy of voltage instrument transformers and dividers, the exposure to the magnetic field from high voltage increases [1][2][3][4][5][6][7][8]. Additionally, the supplying of the measuring system used for testing current instrument transformers requires high immunity to the magnetic field from the high current track [6][7][8][9][10][11][12][13][14]. Wideband testing procedures of transformation accuracy of inductive instrument transformers at harmonics of distorted current or voltage and requirements for the measuring and supplying systems are still under development [5,[8][9][10]14,15]. The demand for such tests results from the low power quality of current and voltage in the power networks [1,11]. According to the EU directive 2014/30/EU, electrical equipment should meet the requirements of electromagnetic compatibility [16]. This ensures that electrical and electronic equipment does not generate, and is not affected by, electromagnetic disturbance. The wideband power sources are subject to the standard IEC 61326-1, which specifies requirements for immunity and emission, including the limits of radiated and conducted disturbances, for electrical equipment operating from the circuit being measured [17]. In the field of conducted and radiated emission of disturbances, power sources are subject to the standard EN 55011, which complies with the general requirements of the standard IEC 61000-6-4 for emission in industrial environments [18,19].
In the paper [20], the development of a programmable three-phase AC power source is presented. It is capable of providing a stable voltage with variable amplitude and variable frequency over a wide range. Moreover, it can generate low-harmonic-distortion arbitrary waveforms. In the paper [21], a response to the need for calibrators of different electrical equipment to generate non-sinusoidal voltages and currents is presented. Closed control loops for the magnitude and phase angle of the voltage harmonics enable accurate and precise adjustment of the waveform parameters. The aim of this work is the development of a wideband power supply specified for laboratory testing with distorted current and voltage, where the level and phase of each higher harmonic are individually selectable. One of the tested power sources is self-made from an audio power amplifier and an arbitrary waveform generator, and its behavior is compared with an off-the-shelf source, type Chroma 61504. The application of the power source is to supply the step-up current or voltage transformer to test the wideband transformation accuracy of inductive instrument transformers [9,[11][12][13][14][15]. The results of the EMC test of the audio power amplifier are compared with the results obtained for the off-the-shelf source. Moreover, the different modes of operation tested to find the worst condition concern the different loads and also the output frequency of the source. Furthermore, the voltage gain error and phase shift between input and output of the tested power sources at different frequencies are measured to determine if an input waveform composed of many harmonics is accurately reproduced.
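The gain error and phase shift mentioned above can be estimated per harmonic by comparing the complex spectra of the sampled input and output voltages. The Python sketch below uses simulated waveforms and an assumed nominal gain of 10; it is not the article's measurement procedure, only an illustration of the principle.

import numpy as np

fs, f0, n = 100_000, 50.0, 100_000            # sampling rate [Hz], fundamental [Hz], samples (1 s)
t = np.arange(n) / fs
vin = 1.00 * np.sin(2 * np.pi * f0 * t) + 0.10 * np.sin(2 * np.pi * 5 * f0 * t)
vout = 9.98 * np.sin(2 * np.pi * f0 * t - 0.001) + 0.99 * np.sin(2 * np.pi * 5 * f0 * t - 0.004)

def spectral_bin(signal, freq):
    """Complex FFT bin of the signal at the given frequency (integer number of periods assumed)."""
    return np.fft.rfft(signal)[int(round(freq * n / fs))]

nominal_gain = 10.0
for h in (1, 5):                               # fundamental and 5th harmonic
    g = spectral_bin(vout, h * f0) / spectral_bin(vin, h * f0)
    gain_error_pct = (abs(g) - nominal_gain) / nominal_gain * 100.0
    phase_shift_deg = np.degrees(np.angle(g))
    print(f"h = {h}: gain error {gain_error_pct:+.2f} %, phase shift {phase_shift_deg:+.3f} deg")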
Objects of the Research
The EMC tests were performed on two wideband power sources. Device A is a PWM-based power source, model Chroma 61504, and device B is an audio power amplifier. The PWM-based power source has four main elements: an input stage with active power factor correction, a DC rail, a PWM control circuit, and an output-stage inverter. The power factor correction is provided by controlling the current of the inductor and the output capacitor through the MOSFET drive, in order to ensure that the input current and the AC line voltage are in phase with each other. The PWM-controlled inverter is formed by two transistors, each one used for the lower or upper part of the waveform, which is generated as a series of various-width rectangles [15,22]. The rated power of the tested PWM-based power source is equal to 2 kVA. A total of two voltage ranges are available: 0 to 150 and 0 to 300 V, with the maximum rms values of the output current equal to 16 or 8 A, respectively. Its frequency band of operation with the rated apparent power is equal to 1 kHz.
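To make the operating principle of the PWM-controlled inverter concrete, the sketch below generates the "series of various-width rectangles" by naturally sampled PWM, i.e., by comparing a sinusoidal reference with a triangular carrier. The carrier frequency and modulation index are illustrative assumptions, not parameters of the Chroma 61504.

import numpy as np

fs, f_ref, f_carrier = 1_000_000, 50.0, 20_000                        # Hz; carrier value is an assumption
t = np.arange(int(fs / f_ref)) / fs                                   # one 50 Hz period
reference = 0.9 * np.sin(2 * np.pi * f_ref * t)                       # modulation index 0.9 (assumed)
carrier = 2.0 * np.abs(2.0 * ((f_carrier * t) % 1.0) - 1.0) - 1.0     # triangle wave in [-1, 1]
pwm = np.where(reference >= carrier, 1.0, -1.0)                       # two-level rectangular output

print(f"samples per period: {pwm.size}, mean duty cycle: {(pwm > 0).mean():.3f}")  # ~0.5 for a sine reference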
The tested audio power amplifier is a linear amplifier of class H with the output-stage transistors operating in class AB. They work in the push-pull topology. The crossover distortion is reduced as each transistor is conducting for more than half of the period. The class H amplifiers are more power efficient as they are designed to draw power from the higher voltage rail only when the output voltage is higher than the low voltage rail. It has two channels that can either work separately or be bridged to double the output voltage. Common control of gain will automatically track each channel identically, and the input voltage of the first channel will be amplified in the counter-phase by each channel. The rated output power is equal to 2 kW into a 2 Ω rated output resistance per channel, or 4 kW into 4 Ω in the bridge mode. The rms output voltage is equal to 70 V for each channel separately or 140 V bridged. The rated rms value of the output currents is equal to 28 A. The application of a linear amplifier to provide a wideband power source requires an arbitrary waveform generator to control the input signal. The wideband power source composed of the audio power amplifier with an arbitrary generator provides, for about a quarter of the price, twice the active output power of the PWM-based power source. The main advantage of the off-the-shelf source is its ability for autonomous operation. However, its frequency band of operation with its rated output power is only 1 kHz. Moreover, during operation with a high inductive impedance, the output voltage of the audio power amplifier is roughly halved [15]. It is a typical load when the step-up current and AC test transformers are supplied [1-4,8,9,13,14].
Conducted and Radiated Emission Tests
The tests in the field of electromagnetic compatibility concern measurements of the conducted emission according to the IEC 61326-1 standard in the frequencies range from 150 kHz to 30 MHz and the tests of radiated emission in the frequencies range from 30 MHz to 1 GHz [16]. The conducted disturbance emission was tested with the use of the SMR 4503 measuring receiver, the LISN artificial network NNB-51 and a computer with COMPLIANCE-3 software version 3.90 (Figure 1a). The emission of radiated disturbances was tested with the use of the GTEM-1000 chamber, the SMR 4503 measuring receiver and a computer with COMPLIANCE-3 software version 3.90 (Figure 1b).
In the case of both power sources, the tests were carried out to detect the influence of the frequency of the generated output voltage on the conducted emission. Measurements were made for two selected frequencies: 50 and 1000 Hz. Additionally, the influence of the output power of the device on conducted emissions was examined. Measurements were made for three selected power levels: 1500, 770, and 440 W. The results of the measurement are shown in Figures 2-5. In Figure 2, the frequency spectrum of conducted emission of the PWM-based power source (device A) is presented. The frequencies of the generated voltage by the device under test are equal to 50 and 1000 Hz with the value of the active load power equal to 1500 W.
The limiting values for conducted emission resulting from the standard IEC 61326-1 for the electrical equipment operating from the circuit being measured were not exceeded. Moreover, for device A, no influence of the generated frequency on the level of disturbances was observed.
In Figure 3 the frequency spectrum of conducted emission of PWM-based power source (device A) is presented. The frequencies of generated voltage by the device under test are equal to 50 Hz and the values of the active load power are equal to 1500, 770, and 440 W.
A decrease in the value of the output power causes a slight reduction in the level of generated conducted disturbances in the range from 400 kHz to 3 MHz by the PWM based power source.
In Figure 4, the frequency spectrum of conducted emission of the audio power amplifier (device B) is presented. The frequencies of the generated output voltage by the device under test are equal to 50 and 1000 Hz with output load active power equal to 1500 W.
A decrease in the value of the output power causes a slight reduction in the level of generated conducted disturbances in the range from 400 kHz to 3 MHz by the PWM based power source. For device B, no influence of the frequency of generated output voltage on the level of disturbances was observed, except for the range of 10-20 MHz. The limit specified for the level of conducted disturbances in the IEC 61326-1 standard was exceeded by the audio power amplifier. It is due to the fact that the device is not equipped by the manufacturer with an appropriate EMI (electromagnetic interference) filter.
In Figure 5, the frequency spectrum of conducted emission of the audio power amplifier (device B) is presented. The frequency of generated voltage by the device under test is equal to 50 Hz and the values of the active load power are equal to 1500, 770, and 440 W. When the output power of device B is reduced, a decrease in the level of disturbances in the range of frequencies from 150 kHz to 30 MHz is observed, except for the range from 7 to 14 MHz.
In the case of conducted emission tests no significant influence of frequency of generated output voltage on the disturbance spectrum of both devices was observed. The results concerning the influence of the value of the output power confirm that it is an important factor in the assessment of the electromagnetic compatibility of the power sources with the requirements of the standard IEC 61326-1. Conducted disturbance emission measurements should be performed at the rated output power for which the device is designed to operate.
The radiated emission was tested for two selected frequencies of the output voltage equal to 50 and 1000 Hz with three levels of the output power: 1500, 770, and 440 W. The measurement results are shown in Figures 6-9. In Figure 6, the frequency spectrum of radiated emission of PWM-based power source (device A) is presented. The frequencies of the generated output voltage by the device under test are equal to 50 and 1000 Hz with the values of the active load power equal to 1500 W.
The radiated emission was tested for two selected frequencies of the output voltage equal to 50 and 1000 Hz with three levels of the output power: 1500, 770, and 440 W. The measurement results are shown in Figures 6-9. In Figure 6, the frequency spectrum of radiated emission of PWM-based power source (device A) is presented. The frequencies of the generated output voltage by the device under test are equal to 50 and 1000 Hz with the values of the active load power equal to 1500 W. There is no detected influence of the frequency of generated output voltage on the level of radiated disturbances. The limit specified for the level of conducted disturbances in the IEC 61326-1 standard was exceeded by the PWM-based power source. The probable cause of the problem is the PWM-controlled inverter. To reduce the radiated emissions ferrite cores can be used.
In Figure 7, the frequency spectrum of radiated emission of PWM-based power source (device A) is presented. The frequency of generated voltage by the device under test is equal to 50 Hz and the values of the active load power are equal to 1500, 770, and 440 W.
In the case when the value of the output power of the PWM-based power source is reduced a significant decrease in the level of radiated disturbances is detected.
In Figure 8, the frequency spectrum of radiated emission of an audio power amplifier (device B) is presented. The frequencies of generated voltage by the device under test are equal to 50 and 1000 Hz with active load power equal to 1500 W.
The limiting values for radiated emission resulting from the standard IEC 61326-1 for the electrical equipment operating from the circuit being measured were not exceeded. Moreover, for device B, no influence of the frequency of generated voltage on the level of disturbances is observed.
In Figure 9, the frequency spectrum of radiated emission of the audio power amplifier (device B) is presented. The frequency of the generated output voltage of the device under test is equal to 50 Hz and the values of the active load power are equal to 1500, 770, and 440 W.
Along with a decrease in the output power a significant decrease in the level of radiated disturbances is detected.
The results of the radiated emission measurements likewise show the strong influence of the output power of the tested power source on the detected level of radiated disturbances. Therefore, tests of compliance of a power source with the requirements of the standard IEC 61326-1 should be performed at the rated output power for which the device is designed.
Conducted Disturbance Tests in the Frequency Range from 100 Hz to 5 kHz
The next stage of laboratory tests aims to determine the level of conducted disturbances produced by the two tested power sources in the frequency range from 100 Hz to 5 kHz. The measurements were made for inductive and resistive loads and for different rms values of the output current. The measuring circuit used to determine the rms values of the harmonics in the supply current is presented in Figure 10, where the following notations are used: AWG-arbitrary waveform generator, DPM-digital power meter, D-A/D-B-device A (PWM-based power source) or device B (audio power amplifier), PPS-programmable power supply, LO-load impedance of the tested devices.
Utilization of the programmable power source ensures a pure sinusoidal supply voltage with a frequency of 50 Hz for the tested devices. To determine the total harmonic distortion factor (THDi) of the supply current, one module of the digital power meter (DPM) is used to measure the rms values of the harmonics of current and voltage, while the second module is used to measure the output power to ensure the required load. In accordance with the standard IEC 50160, the THDi factor may be calculated as the ratio of the rms value of the higher current harmonics to the rms value of the fundamental [23]. In Table 1 the calculated values of the THDi factors for the supply currents of the tested PWM-based power source and audio power amplifier are presented, and the percentage values of the individual higher harmonics are presented in Figure 11. The PWM-based power source has a much lower distortion of the supply current than the audio power amplifier, as follows from the determined values of the total harmonic distortion factor. This results from the application of the power factor correction system that controls the time of charging of the output capacitor of the DC rail through the inductor [15]. The highest value of the THDi factor was detected for the PWM-based power source at an rms value of the output current equal to 1 A; the lowest value is obtained for the maximum rms value of the output current equal to 6.5 A. No changes in the obtained values due to the type of the load were detected. The results for the audio power amplifier likewise show no influence of the power factor but a significant dependence on the load value, and therefore on the rms value of the output current.
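Since the formula itself is not reproduced above, the following minimal Python sketch illustrates the usual definition of the current total harmonic distortion factor, i.e., the ratio of the rms value of the higher current harmonics to the rms value of the fundamental; the upper harmonic order of 40 and the example values are illustrative assumptions, not measured data.

import math

def thdi_percent(harmonic_rms, fundamental_rms, max_order=40):
    # harmonic_rms: dict mapping harmonic order (2, 3, ...) to its rms value in A
    # fundamental_rms: rms value of the fundamental (50 Hz) current component in A
    distortion = math.sqrt(sum(value ** 2 for order, value in harmonic_rms.items()
                               if 2 <= order <= max_order))
    return 100.0 * distortion / fundamental_rms

# Illustrative harmonic content only (not measured data):
harmonics = {3: 0.90, 5: 0.55, 7: 0.30, 9: 0.12}
print(round(thdi_percent(harmonics, fundamental_rms=2.5), 1))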
The audio power amplifier is characterized by a supply current whose higher harmonics have rms values that significantly exceed the limiting values specified in the standard EN 61000-3-12 [24]. This results from the distortion of the waveform of the supply current presented in Figure 12b and increases the exposure of the measuring systems connected to the same AC mains. Part 3-16, covering limits for harmonic currents produced by energy-supplying equipment with a rated current less than or equal to 75 A per phase connected to public low-voltage systems, is still not available. The differences in the percentage values of the higher harmonics generated into the supply current of the PWM-based power source result from the rms value of the output current. If the rms value of the output current is equal to 6.5 A, the rms value of the supply current is the highest, and therefore the percentage values of the higher harmonics are the lowest (Figure 11). When the rms value drops to 2.5 A, for both resistive and inductive loads the rms values of the harmonics of the input current are equal but significantly increased. In Figure 12 the waveforms of the output and input currents and voltages measured for the PWM-based power source and the audio power amplifier under these conditions are presented. The waveform of the supply current of the audio power amplifier (Figure 12b) is similar to that of a typical rectifier. The principle of operation of the power factor correction system in the PWM-based power source ensures significantly lower distortion of the supply current. Moreover, its waveform is the same for inductive and resistive loads, although the rms values of the output current and voltage are different [25].
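As an illustration of how such a comparison against the standard can be automated, the short sketch below flags the harmonic orders whose relative content exceeds a given limit table; the limit values used in the example are placeholders only and would have to be replaced with the relative current harmonic limits applicable to the relevant equipment class of EN 61000-3-12.

def harmonics_exceeding_limits(harmonic_rms, fundamental_rms, relative_limits_percent):
    # relative_limits_percent: dict mapping harmonic order to the admissible value
    # of I_h / I_1 in percent (placeholder values, not the normative ones)
    violations = {}
    for order, limit in relative_limits_percent.items():
        relative = 100.0 * harmonic_rms.get(order, 0.0) / fundamental_rms
        if relative > limit:
            violations[order] = round(relative, 1)
    return violations

# Illustrative measurement and placeholder limits only:
measured = {3: 1.8, 5: 1.2, 7: 0.7}
limits = {3: 21.6, 5: 10.7, 7: 7.2}
print(harmonics_exceeding_limits(measured, fundamental_rms=4.0, relative_limits_percent=limits))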
Gain, Phase Linearity, and Conducted Disturbances Immunity Tests
The next stage of laboratory tests of the power sources was to determine the linearity of their voltage gain and phase shift with frequency in the range from 50 Hz to 5 kHz for sinusoidal and distorted input voltage. These tests were carried out in the measuring circuit presented in Figure 13.
In Figure 13 the same notations as in Figure 10 are used, with the following additions: RVD-wideband reference voltage divider, DEC-wideband converter of differential voltage to single-ended voltage.
The tested devices (D-A/D-B) are supplied by the programmable power source (PPS) fed by the arbitrary waveform generator (AWG), which enables the generation of any supply voltage waveform required for the conducted disturbance immunity tests. The wideband reference voltage divider (RVD) is used to step down the output voltage of the tested power source. The voltage ratio of the RVD is selected in order to obtain the required gain for the given test conditions [2,4,26]. The measuring system is composed of a digital power meter (DPM), in which the second module is used to measure the rms values of the harmonics (up to the 100th) of the output voltage and current, as well as the phase shift between them; moreover, the value and power factor of the load are determined. To obtain the values of the voltage gain error and the phase shift of the harmonics caused by the tested device, the first module of the DPM is used to measure the rms values of the harmonics of the input voltage and the rms values of the harmonics of the differential voltage between the output voltage of the RVD and the input voltage of the tested power source. Application of the high-impedance wideband converter of differential voltage to single-ended voltage (DEC) is required in order to prevent the impact of the measuring circuit on the output voltage of the RVD [6,14].
The value of the voltage gain error of the hk harmonic of the output voltage is calculated from the measured harmonic rms values. The percentage value of the hk harmonic of the composite error, resulting from its voltage gain error and the additional phase shift in the output voltage, is determined from the measured differential voltage, and the determined hk values of the composite error and the voltage gain error are then used to calculate the value of the additional phase shift of the harmonic of the output voltage. Gain and phase linearity tests as a function of the frequency of the input voltage, during amplification of a sinusoidal input voltage with a frequency of 50 or 500 Hz and of distorted input voltages, were performed with the tested devices supplied by the programmable power source. The percentage values of the higher harmonics were equal to the values required by the relevant standard [19]. Tests were performed in the no-load state and with an output active power of 1500, 770, and 440 W, or an apparent power of 1500, 770, and 440 VA with a power factor equal to 0.45. The measurement results are presented in Figure 14. In the case of the PWM-based power source, the frequency of the input voltage has the most significant impact on the value of the voltage gain error and phase shift. In the no-load state the voltage gain errors (Figure 14a) have lower values than in the load state (from the 20th harmonic). This may result from an increased additional voltage drop on internal elements of the device at increased output current. The same values of the voltage gain errors and phase shift were obtained regardless of the value of the load power factor. Furthermore, even when conducted disturbances occur in the supply of the PWM-based power source, the frequency characteristics of the voltage gain errors and phase shift are the same as in Figure 14.
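The equations referred to above are not reproduced here. The sketch below only illustrates one plausible way of obtaining such quantities from the measured harmonic rms values, following the common convention in which the composite error combines the gain (ratio) error and the phase displacement; the function names, the definitions, and the small-angle relation are assumptions made for illustration, not the authors' formulas.

import math

def gain_error_percent(u_out_h, u_in_h, nominal_gain):
    # Deviation of the realized gain of one harmonic from the nominal gain (assumed definition).
    return 100.0 * (u_out_h / (nominal_gain * u_in_h) - 1.0)

def composite_error_percent(u_diff_h, u_in_h):
    # Composite error of one harmonic from the rms value of the differential voltage
    # between the divided output voltage and the input voltage.
    return 100.0 * u_diff_h / u_in_h

def additional_phase_shift_rad(composite_pct, gain_pct):
    # Phase displacement estimated from the composite and gain errors
    # (small-angle approximation, valid only when both errors are small).
    return math.sqrt(max(composite_pct ** 2 - gain_pct ** 2, 0.0)) / 100.0

# Illustrative numbers only:
g = gain_error_percent(u_out_h=231.2, u_in_h=1.0, nominal_gain=230.0)
c = composite_error_percent(u_diff_h=0.009, u_in_h=1.0)
print(round(g, 3), round(c, 3), round(additional_phase_shift_rad(c, g), 5))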
The frequency characteristic of the voltage gain errors and phase shift for the audio power amplifier is presented in Figure 15. In the case of the audio power amplifier, the frequency of the input voltage also has the most significant impact on the value of the voltage gain error, but this impact is negligible in comparison with the results determined for the PWM-based power source. This behavior is caused by the input low-pass filter, whose cut-off frequency is selected too high. Moreover, changes in the power factor and in the value of the load of the power amplifier cause only slight variations (±0.5%). Furthermore, even when conducted disturbances occur in its supply voltage and current, the frequency characteristics of the voltage gain errors and phase shift remain the same as in Figure 15. The immunity to low-frequency conducted disturbances results from the conversion of the power by the DC rail.
Conclusions
The EMC tests of both power sources show a lack of compliance with the requirements of the standard IEC 61326-1. In the case of the tested PWM-based power source the problem concerned exceeded limits of the radiated emissions, while in the audio power amplifier the level of conducted emissions was too high. The level of conducted disturbances in the frequency range from 100 Hz to 5 kHz generated into the supply current by the PWM-based power source depends on the rms value of the supply current; it is caused by the pulsed operation of the rectifier and is significantly lower than in the case of the audio power amplifier. This is ensured by the application of the power factor correction system that controls the time of charging of the output capacitor of the DC rail through the inductor. The immunity of both wideband power sources to low-frequency conducted disturbances in the supply voltage and current results from the conversion of the input power by the DC rail. In the case of the PWM-based power source the frequency of the input voltage has the most significant impact on the value of the voltage gain error and phase shift; therefore, the feedback loop used by the manufacturer for the control of the rms values of the harmonics of the output voltage is required. The results of the research clearly show that EMC tests of wideband power sources intended for supplying measuring systems are essential for their proper operation.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. Computer data is not publicly available because it is not suitable for use by another researcher.
Conflicts of Interest:
The authors declare no conflict of interest. | 2021-05-07T00:02:57.940Z | 2021-03-07T00:00:00.000 | {
"year": 2021,
"sha1": "76f310c6ea5097b7ebe5920b8999cda9b925ce80",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/14/5/1457/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "c54a7d43108afbccf546ac85452492a64fc67c3a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
229314766 | pes2o/s2orc | v3-fos-license | Manufacturing micropatterned collagen scaffolds with chemical-crosslinking for development of biomimetic tissue-engineered oral mucosa
The junction between the epithelium and the underlying connective tissue undulates, forming rete ridges, which currently available soft tissue constructs lack. In this study, using a micro-electro-mechanical systems process and soft lithography, fifteen negative molds, with different dimensions and aspect ratios in grid- and pillar-type configurations, were designed and fabricated to create three-dimensional micropatterns, which were replicated onto fish-scale type I collagen scaffolds treated with chemical crosslinking. Image analyses showed the micropatterns were well transferred onto the scaffold surfaces, demonstrating the versatility of our manufacturing system. Rheological testing confirmed that the collagen scaffold manufactured in this study is an ideal gel with visco-elastic features. Compared with our previous study, its mechanical and handling properties were improved by chemical cross-linking, which is beneficial for grafting and suturing into the complex structures of the oral cavity. Histologic evaluation of a tissue-engineered oral mucosa showed that the topographical microstructures of the grid type, rather than the pillar type, were well preserved, that a well-stratified epithelial layer was regenerated on all scaffolds, and that an epithelial rete ridge-like structure developed. As this three-dimensional microstructure is valuable for maintaining epithelial integrity, our micropatterned collagen scaffolds can be used not only intraorally but also extraorally as a graft material for human use.
Measurement of the 15 micropatterns showed that the variations were almost within 25% of the originally designed dimensions (Table 1). Similar to our previous study, variation of more than 10% was observed in the dimensions of negative molds only for the grid patterns (Sample IDs 1-4). For the other micropatterns, variation of more than 10% was observed in the truncated prototypes of sample group 5-8 and in the channel width of sample group 9-15. Scanning electron microscopy (SEM) revealed a well-developed fibril network within the collagen matrices crosslinked with 1% EDC (Fig. 3A). The diameters of the fibrils of the 1% tilapia scale type I collagen matrices with EDC cross-linking ranged from 40 to 120 nm, approximately 1.2 times thicker than those without EDC crosslinking shown in Fig. 3a of our previous report 15 . SEM showed that micropatterns with several dimensions and aspect ratios were formed on the collagen scaffolds crosslinked with 1% EDC. The surface textures of the collagen scaffolds with or without 1% EDC crosslinking showed little difference between all the micropatterns 15 . Similar to the previous report, the microstructure configuration of the grid micropattern in this study appeared to be well preserved; however, the pillar micropattern appeared collapsed and poorly preserved (Fig. 3B; 1-4) 15 . In contrast, the four types of micropatterns fabricated with 100 μm in height, thickness, and channel width were all well transferred onto the collagen scaffold (Fig. 3B; 5-8). Furthermore, for the grid-rectangular (G-R) prototype, microstructure configurations with dimensions smaller than 100 μm were also well preserved (Fig. 3B; 9-15). Thus, the present study showed that our technique also enabled the fabrication of micropatterns on the collagen scaffolds as small as 50 µm.
Physical property of collagen scaffold with or without 1% EDC crosslinking. The collagen scaffold with 1% EDC crosslinking showed an increased Young's modulus compared with the scaffold without crosslinking (approx. 48.65 kPa vs. 29.10 kPa, at 37 °C). This is in contrast to the previous data describing the physical property of the collagen gel mixed with 1.0% chondroitin sulfate (approx. 7.0 kPa). Thus, the physical property of the EDC-crosslinked collagen gels used in this study was enhanced compared with the previous study. The dynamic viscoelastic property of the collagen scaffold, measured using a rotational rheometer with a parallel plate configuration, is shown in Fig. 4A, where the storage modulus G′ and loss modulus G″ are plotted against the frequency ω. The results showed that G′ is higher than G″ and takes a plateau value over the whole frequency range measured in this study. Such frequency dependence corresponds to the behavior expected of an elastic gel. Furthermore, when using a 4-0 braided silk suture and a round needle, the collagen scaffold with 1% EDC crosslinking was durable enough to be sutured, in contrast to the scaffold without crosslinking, which readily ruptured (Fig. 4B).
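As a rough illustration of how such measurements are reduced to the reported values, the sketch below estimates the Young's modulus as the slope of the initial linear region of a compression stress-strain curve and checks the gel criterion G′ > G″ over a frequency sweep; the data arrays, the fitting range, and the numerical values are assumptions for illustration, not the actual output of the instruments named above.

import numpy as np

def youngs_modulus_pa(strain, stress_pa, max_strain=0.05):
    # Least-squares slope of the initial linear region of the stress-strain curve.
    strain = np.asarray(strain, dtype=float)
    stress_pa = np.asarray(stress_pa, dtype=float)
    mask = strain <= max_strain
    slope, _intercept = np.polyfit(strain[mask], stress_pa[mask], 1)
    return slope

def behaves_like_elastic_gel(g_prime, g_double_prime):
    # Elastic-gel behaviour: storage modulus above loss modulus at every measured frequency.
    return all(gp > gpp for gp, gpp in zip(g_prime, g_double_prime))

# Illustrative data only:
strain = [0.00, 0.01, 0.02, 0.03, 0.04, 0.05]
stress = [0.0, 490.0, 970.0, 1460.0, 1950.0, 2430.0]   # Pa
print(round(youngs_modulus_pa(strain, stress) / 1000.0, 1))   # ~48.6 kPa
print(behaves_like_elastic_gel([310.0, 320.0, 330.0], [40.0, 42.0, 45.0]))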
Macroscopic findings of EVPOMEs during manufacturing. Contraction of the EVPOMEs using the collagen scaffold crosslinked with 1% EDC was rarely seen during the 11 days of manufacturing. In contrast, severe contraction was often detected in the EVPOMEs without 1% EDC crosslinking. Statistical analysis indicated significant contraction of the EVPOMEs when the grid micropattern prototype was used without crosslinking (Fig. 5). In addition, there were large variations among samples without 1% EDC crosslinking; however, statistical significance was not reached for the other prototypes. Histologic findings of EVPOME. First, we compared the histologic features of our EVPOMEs grown on micropatterned 1% collagen scaffolds with and without 1% EDC crosslinking (Fig. 6A,B). The dimension and aspect ratio of the micropatterning were identical to the previous study design 15 . The original microstructures fabricated on the scaffold surface crosslinked with 1% EDC were relatively well preserved, in spite of a slight decrease in the vertical dimension [Fig. 6A(b-e)], whereas those without EDC crosslinking were hardly maintained, characterized by an almost complete loss of their vertical dimensions [Fig. 6B(b-e)]. This finding suggested that the use of 1% EDC avoided the flattening and severe deformation of the DEJ-like microstructures. In addition, compared with the scaffold without EDC crosslinking, eosinophilic collagen fibrillar structures were more evident in the scaffold crosslinked with EDC, including the portion of the microstructures. This is consistent with the SEM image showing thicker fibrils with EDC crosslinking (Fig. 6A vs. B). Furthermore, a continuous and fully-differentiated epithelial layer was formed on the scaffolds with 1% EDC crosslinking [Fig. 6A(a-e)]. Remarkable epithelial thickening was also seen on all scaffolds without EDC crosslinking due to the scaffold shrinkage (contraction) during EVPOME manufacturing [Figs. 5 and 6B(a-e)]. As a result, because the configurations of the microstructures were well maintained, the epithelial architecture developed on the EDC-crosslinked scaffold appeared to mimic an intrinsic "rete ridge" architecture. Nonetheless, because the original dimension of the micropatterns initially designed was preserved by EDC crosslinking, the undulation of the scaffold could be too large for oral keratinocytes to grow and stratify, when compared to the inherent DEJ microstructure. Therefore, we tested four prototypes with smaller and different configurations as micropatterned scaffolds crosslinked with EDC. Histologic findings showed that the grid-type micropatterns were well preserved and a well-stratified epithelial layer developed (Fig. 7a,b). This epithelial layer showed an intrinsic rete ridge, unlike that formed when seeded on AlloDerm (data not shown), indicating the development of a more biomimetic tissue-engineered oral mucosa than that of our previous study. However, for the pillar type, the microstructure severely collapsed and its height was almost lost, even with crosslinking with EDC (Fig. 7c,d). Apart from those findings, many tiny gaps between cells and scaffold were found at the corner angle of the micropatterns.
The histologic appearance of the EVPOMEs using the G-R micropattern was the most stable and consistent [Fig. 6A(a,b)]. Therefore, using a 1% collagen matrix crosslinked with 1% EDC, we designed smaller micropatterns of the G-R prototype, with different aspect ratios, to evaluate the conformability of our manufacturing system. A continuous and well-stratified epithelial layer that resembles "rete ridges" was again formed on all micropatterns designed, although the degree of micropattern deformation varied depending on the aspect ratios (Fig. 8).
Discussion
Soft lithography is one of the patterning techniques that fabricate or replicate micro/nanostructures using an "elastomeric stamp." In this work, we have extended our previous technique, combined with a MEMS process for fabricating negative molds, to engineer collagen scaffolds, applying it to different designs of DEJ-like micropatterns. Several reports have shown a similar strategy to develop micropatterns of the intricate DEJ-like structure onto the surface of hydrogel scaffolds using photolithography, as dermal substitutes for clinical application to reconstruct tissue defects 16,17 . However, their basic microarchitecture was a "grooved" pattern only, which is not as complex as the prototypes produced in this study. Compared with conventional photolithography, soft lithography methods are more high-throughput in the laboratory setting, with a wider range of resolutions, lower costs, and a variety of pattern-transferring methods 18 . These advantages helped us to fabricate more complex microstructures in our system. Therefore, our system is beneficial in engineering ECM-based biomaterials, allowing a variety of microstructure designs in the negative molds to be transferred onto the collagen gel, which demonstrates the versatility of our system in creating micropatterning designs.
Besides soft lithography, electrospinning and 3D bioprinting, whether cell-free or cell-loaded, are other major micro/biofabrication techniques to develop tissue-engineered soft tissue constructs 10,11 . Recent reports demonstrated that both methods have enabled the replication of the topography of the DEJ structure, which skin equivalents lack, on hydrogel scaffolds [19][20][21] . Compared with soft lithography, however, the strengths and weaknesses of these approaches are not clear due to the growing advancement of these nascent micro/biofabrication technologies for dermal substitutes 12,22,23 . Further comparison and evaluation among these state-of-the-art technologies are needed to consider how different techniques will affect the microenvironmental cues directing cellular behaviors, eventually leading to applications for engineered soft tissue constructs 24 .

Table 1. Measurements of 15 different negative molds after fabrication as well as their variations as compared with the originally designed dimensions for both PDMS substrates for the grid micropattern formation and Si substrates for pillar micropattern formation (n = 3).

For tissue engineering applications, covalent intermolecular crosslinks between collagen molecules using appropriate biocompatible molecules are essential to develop stable biomaterials with a high degree of mechanical integrity 25 . The EDC used in this study has also been applied to marine-derived collagen/gelatin 26 . One of its direct effects was to inhibit gel contraction after oral keratinocytes were seeded, which allowed the omission of the 1% chondroitin sulfate that was added to the collagen in our previous study. Also, consistent with other studies, the enhancement of stiffness provided the collagen matrix with more durability and manageability, which enabled suturing while avoiding rupture of the scaffold 27,28 . This property, which can be attributed to the thicker collagen fibrils as compared with those without cross-linking, is significant for technicians in cell-processing centers and for surgeons. Although chemically crosslinked hydrogels are apt to be less biocompatible than physically crosslinked ones due to the potential cytotoxicity of residual chemicals and reagents, EDC has been utilized in many soft tissue-engineered products, including CollaMend and Sharklet; these facts indicate the insignificance of EDC cytotoxicity 17,[29][30][31] . However, the stability of collagen gel treated with EDC decreases the degradation rate in vivo 32,33 . An in vivo evaluation of our EVPOMEs is therefore required for future clinical applications. This study confirmed by rheological testing that the tilapia collagen scaffold has the characteristics of an ideal gel, consistent with animal-derived ones, suggesting the scaffolds fabricated in this study are visco-elastic and thus biomimetic [34][35][36] . On the other hand, this visco-elastic property makes it difficult to accurately evaluate the configuration of micro-scale fine structures, such as the DEJ-like undulations, because the fixatives used for SEM and histological observation cause sample contraction, resulting in artefacts. To obviate deformation of the specimens, such as buckling of microstructures, it is necessary to develop a live 3D imaging technology to measure soft tissue constructs non-invasively 37,38 .
A major achievement of this study was the successful development of biomimetic oral mucosa equivalents having a rete ridge-like epithelial structure when the cells were seeded onto the grid-type microstructures showing minimal deformation. Nevertheless, similar to our previous histologic evaluations, the microscopic findings of the EVPOMEs again showed incomplete shape fidelity of the scaffold surface micropatterns, exhibiting partial failure to replicate the DEJ-like microarchitecture originally designed in the negative molds, although it was obvious that the grid-type microstructures were more stable, with minimal reduction of the vertical dimension, than the pillar types. Although this difference could be due to a mechanical structural issue and/or different eosinophilic fibril distribution between the two prototype designs, this defect prevents a comprehensive evaluation of the effect of micropatterning-based cues on oral keratinocyte proliferation, differentiation, and the stem cell niche, an area that needs to be explored and studied for regenerative medicine 39 . Therefore, it is necessary to address the incomplete shape fidelity of the microstructures on the collagen scaffold to advance our technology. Although EDC crosslinking improved the mechanical properties of the scaffold, the portion of the micropatterns did not seem to have sufficient physical property to tolerate the mechanical forces during epithelial formation 39,40 . To manufacture a stiffer collagen scaffold, another approach, such as producing a basement membrane-like structure or the use of supramolecular cross-linkers, may be required to tolerate the mechanical force generated by cells, producing a stable microenvironmental cue 41 . The serrated interface between basal cells and the collagen scaffolds was not present in this study. Mechanical properties of hydrogels are critical for the stability of the scaffold in culture and should have an impact on cellular mechano-transduction regulated by traction force or F-actin distribution of keratinocytes [42][43][44] . Therefore, the histologic finding may depend on the elastic modulus of the scaffold, suggesting that the mechanical property must also be engineered when the micropatterning is designed using ECM-based biomaterials.

Figure 5. The initial construct size was 20 mm in diameter (Scale bar = 20 mm). Data shown are presented as mean ± SD of the diameters of the EVPOMEs (n = 5). The collagen gels with EDC crosslinking showed little contraction as indicated. As the p values determined by a paired t test show, the EVPOME diameter was significantly reduced (contracted) after manufacturing for the grid-type scaffolds (G-R and G-T) without 1% EDC crosslinking, compared with the crosslinked ones. Although the contraction of the other EVPOMEs was not statistically significant, the resulting EVPOME diameters were inconsistent due to large variations.
Histologic evaluation revealed small cleavages at the corners of the indentations between the cells and the collagen surface. The fact that cells did not attach to the scaffold there implies a need to eliminate the right-angle planes on the scaffold. Since most physiologically occurring structures, such as the DEJ and microvilli, are inherently curved, the topography and curvature along villus-like microstructures have a significant effect on the cellular morphology of small intestinal epithelial cells in physiological events 6 . Thus, locally engineering 3D curvature on an undulating microstructure of the scaffold may be necessary to develop better biomimetic oral mucosa equivalents.
Because collagen can be produced from a marine byproduct, marine-origin materials, which are basically a waste product of the fishery industry, have emerged as a sustainable resource 45 . Marine collagen, such as that from Nile tilapia, is therefore one of the major collagen sources and can be an alternative to mammalian-derived collagen. Marine collagen is a safer and more attractive biomaterial for processing with advanced fabrication techniques, confirming the growing relevance of marine biomaterials in tissue engineering and regenerative medicine [46][47][48][49][50] . Although further research is needed, the skin of Nile tilapia had a therapeutic effect on burn wounds in human clinical trials 51 . Therefore, the collagen scaffold manufactured in this study could facilitate wound healing after transplantation onto wounds in the oral cavity. The application of marine collagen as a biomimicking material, not only for hard tissues but also for soft tissues including hemostats, has great potential in regenerative medicine 52,53 .
Figure 7.
A representative histologic appearance of EVPOMEs manufactured for total 11 days, 4 days in a submerged, and subsequent 7 days in an air-liquid interface condition. (n = 5): Hematoxylin and eosin staining (original magnifications, left panels 4 × , right panels 20 ×) The grid-type micropatterns (a,b) were well-preserved and a well-stratified epithelial layer was developed. This epithelial layer showed an intrinsic rete ridge, whereas, for the pillar-type, the microstructure severely collapsed and the "rete ridge"-like structure was poorly developed, even with cross-linking with EDC. (a-d) indicate micropatterned collagen scaffolds originally designed by sample IDs of 5, 6, 7, and 8 shown in Fig. 1, respectively. Arrows indicate tiny gaps between cells and scaffold at the corner angle of micropatterns.
In summary, our manufacturing system is valuable for fabricating negative molds to replicate a variety of biomimetic microstructures, such as the DEJ, on collagen scaffolds, implying the competency of this technique to engineer scaffolds composed of a variety of hydrogels with micropatterns reproducing the native topography of the DEJ. Together with easy handling and durability against suturing, the physical properties of the scaffold gained by using 1% EDC crosslinking make the 1% tilapia scale collagen a suitable biomaterial for future clinical applications in regenerative medicine, especially for epithelial tissue defects. Although in vivo studies of the micropatterned collagen scaffold are necessary to examine its immunogenicity, biodegradability, and effects on epithelial regeneration, this scaffold can be applied as an acellular substitute for intraoral and extraoral epithelial tissue reconstruction and regeneration. Furthermore, this construct of engineered oral mucosa with rete ridges could be utilized as an in vitro system to study oral keratinocyte stem cells and as a test for pharmaceutical evaluation in full-thickness wound repair, once more stable properties are provided to the portion of the microstructure.
Methods
Ethical approval. The use of human oral mucosa keratinocytes and the procurement procedure was approved by the Internal Review Board of the Niigata University Hospital. Number: 2015-5018. All methods were carried out in accordance with relevant guidelines and regulations.
Design of micropatterns and fabrication of microstructured negative molds.
In order to mimic the connective tissue papillae of the oral mucosa, we previously designed two grid- and pillar-micropattern prototypes with rectangular or truncated configurations, respectively, resulting in four negative molds as micropatterns; their dimensions on the scaffold were approximately 200 μm in height, 200 μm in thickness, and 100 μm in channel width 15 . To test the conformability of our manufacturing system, 15 different dimensions and aspect ratios (i.e., ratios of topography height to topography thickness) smaller than those of the previous study, including the same four micropatterns, were manufactured. The dimensions were changed simply by a 50% reduction of the height and channel width of the grid micropattern and a 50% reduction of the height and thickness of the pillar micropattern. In addition, all combinations of height, thickness, and channel width of 100 μm and 50 μm were applied to the micropattern because of the structural stability of the G-R prototype. The negative molds designed in this study were fabricated as described previously 15 . Briefly, an initial silicon (Si) mold for grid patterns was prepared via anisotropic deep-reactive ion etching using a photoresist mask, and then Si isotropic wet etching was performed using an acid mixture to form the truncated structure. Subsequently, polydimethylsiloxane (PDMS, SILPOT 184, Dow Corning Toray, Tokyo, Japan) was cast against the Si substrate with grid patterns to form a soft lithography mold. Si substrates with through-hole patterns, which were mainly fabricated via anisotropic deep-reactive ion etching, were directly used as negative molds for pillar patterns.
Fabrication and measurement of physical property of collagen scaffolds, and macroscopic test of handling property. A total of 16 scaffolds comprising four different micropattern prototypes with different dimensions, including a flat surface control, comprising 1% type I tilapia scale atelocollagen matrix with or without EDC crosslinking, were prepared 54 . Cell campus (100% freeze-dried collagen: FD-08G, Taki Chemical Co., Ltd., Hyogo, Japan) was dissolved in HCl (pH 3.0) at 1.1 wt%. The collagen solution was mixed with Dulbecco's phosphate buffered saline (D-PBS, KAC Co., Ltd., Kyoto, Japan) at 4 °C. After pouring each collagen matrix solution into the PDMS or Si, molds were inverted and immersed into each collagen solution, both molds were placed into an incubator (25 °C) to induce fibrogenesis. Subsequently, half of the collagen gels were chemically crosslinked by 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide hydrochloride (EDC) (Tokyo Chemical Industry, Tokyo, Japan) treatment. The EDC was dissolved in 99.5% ethanol (Kishida Chemical) at 1.0%w/v. EDC crosslinking was performed by immersing the micropatterned collagen gels in a solution containing 100 mg of EDC per 7.8 mg of type I collagen at room temperature for 24 h. After crosslinking, the scaffolds were placed individually in a container and washed in D-PBS by rotational stirring at room temperature for 24 h. Then, they were all γ-irradiated for sterilization. For simplifying scaffold fabrication, the surrounding collagen matrix surface of the 14-mm square-shaped microstructure was planarized 15 . Five pieces of 1% collagen gels with or without 1% EDC crosslinking, without microstructures having a diameter of 20 mm and a thickness of 2 mm, were used for measuring storage modulus and loss modulus. They were determined by means of a rheometer (HAAKE MARS III, Thermo Fisher Scientific Inc., Germany) with a compression speed of 0.2 mm/s, at 37 °C. In addition, Young's modulus was determined using a compact tabletop universal tester (EZ-LX, Shimazu Corporation, Kyoto, Japan) with a compression speed of 0.15 mm/s, at 37 °C. Subsequently, the handling property of the collagen gels without or with EDC cross-linking was tested. After only the epithelial layer, 20 mm in diameter, was removed from Bio-SKiN, a medical training artificial skin (Regina Fashion Supply Co. Ltd., Saitama, Japan), the gels were placed on the "wounded" surface. Using 4-0 braided silk (Nesco, Tokyo, Japan) and surgical instruments, the collagen gel (transplant) was sutured with the surrounding Bio-SKiN, supposed to be oral mucosa, and photographed.
Observation of negative molds and 1% collagen gels crosslinked with 1% EDC. The top views of the negative molds for grid patterns (PDMS) and pillar patterns (Si) were observed by either optical microscopy (VH-S30, KEYENCE, Osaka, Japan) or confocal microscopy (K-9510, KEYENCE), respectively (n = 3). In addition, we observed the microstructures of the 1% tilapia scale type I collagen matrices crosslinked with 1% EDC using scanning electron microscopy (SEM) as described previously 15 .
Procurement of oral mucosa samples. The protocol for obtaining human oral mucosa samples was approved by the Niigata University Hospital Internal Review Board (approval # 2015-5018). Patients receiving minor dentoalveolar surgery at an oral and maxillofacial surgery outpatient clinic at Niigata University Hospital were provided with sufficient information regarding this study, and all participating individuals signed an informed consent form. An oral mucosa tissue sample, approximately 5 mm2 in size, was trimmed off from the elevated mucoperiosteal flap.
Cell culture of primary oral mucosa keratinocytes. Primary oral mucosa keratinocytes were serially cultured as previously described 15 . Briefly, the tissue sample was soaked in a 0.025% trypsin/EDTA solution (Thermo Fisher Scientific, Waltham, MA, USA) containing 1.5% Antibiotic-Antimycotic (Thermo Fisher Scientific), overnight at room temperature. Oral mucosa keratinocytes were scraped off from the underlying connective tissue using a scalpel in a 0.0125% defined trypsin inhibitor (Thermo Fisher Scientific), resuspended in EpiLife supplemented with EpiLife Defined Growth Supplements (Thermo Fisher Scientific), referred to as "complete medium, " and plated at a density of 4.0-5.0 × 10 4 cells/cm 2 . After reaching a confluence of 70%-80%, the cells were re-plated at a density of 0.7-1.0 × 10 4 cells/cm 2 , fed with the complete medium supplemented with gentamicin (5.0 μg/mL), and amphotericin B (0.375 μg/mL; Thermo Fisher Scientific). Oral mucosa keratinocytes from passages 3 to 5 were used in the study.
Manufacturing of tissue-engineered oral mucosa equivalents (EVPOMEs). The EVPOMEs were manufactured by seeding oral mucosa keratinocytes obtained from five individuals onto microstructured tilapia scale collagen scaffolds having four microstructures with various aspect ratios (with one flat surface as a control), with or without 1% EDC crosslinking. AlloDerm (Allergan, Madison, NJ, USA) was used as a positive control 15 . According to our human clinical application protocol, after presoaking the scaffolds in type IV collagen (5 μg/cm2, derived from the human placenta, Sigma-Aldrich) in D-PBS (Wako chemical, Osaka, Japan) overnight at 4 °C in a 12-well plate, oral mucosa keratinocytes were seeded onto all scaffolds at a cell density of 1.5 × 105 cells/cm2 55 . The composites were cultured in complete medium supplemented with 1.2 mM Ca 2+ for 4 days in a submerged condition, and then raised to an air-liquid interface with the same culture medium for another 7 days 15 .
Macroscopic analysis of EVPOMEs and statistical analysis.
During manufacturing of the EVPOMEs, their diameters were measured every day, as previously stated, using ImageJ (National Institutes of Health, Bethesda, MD, USA, http://imagej.nih.gov/ij/) 15 . A paired t test was used to compare the diameters of the day-11 EVPOMEs fabricated on scaffolds without crosslinking with those treated with 1% EDC, using the Excel software (n = 5). A p value < 0.05 was considered significantly different.
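A minimal sketch of this comparison is shown below, assuming that the day-11 diameters are paired by keratinocyte donor; the numerical values are illustrative only, not the measured data.

from scipy import stats

# Day-11 EVPOME diameters (mm) on the same micropattern, paired by donor.
# Illustrative values only.
without_crosslinking = [14.1, 13.5, 15.0, 12.8, 14.6]
with_edc_crosslinking = [19.6, 19.8, 19.5, 19.9, 19.7]

t_statistic, p_value = stats.ttest_rel(without_crosslinking, with_edc_crosslinking)
print(f"paired t = {t_statistic:.2f}, p = {p_value:.4f}")
# p < 0.05 is interpreted as a significant difference in contraction.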
Histologic examination of EVPOMEs. A total of 31 EVPOMEs were fixed with 4% paraformaldehyde in 100 mM D-PBS and embedded in paraffin. The paraffin-embedded samples were deparaffinized, rehydrated, cut into 5-µm thick sections, and stained with hematoxylin and eosin for histologic examination.
Data availability
The data presented in this study can be provided by the corresponding author upon reasonable request due to pending patent application. | 2020-12-19T05:04:02.893Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "531dfcf402877864e7cf7419d3878bbc0bd1e4b7",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-020-79114-3.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "531dfcf402877864e7cf7419d3878bbc0bd1e4b7",
"s2fieldsofstudy": [
"Engineering",
"Medicine",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
254121936 | pes2o/s2orc | v3-fos-license | Proteomics and genomics of a monomorphic epitheliotropic intestinal T-cell lymphoma: An extremely rare case report and short review of literature
Rationale: Monomorphic epitheliotropic intestinal T-cell lymphoma, formerly known as enteropathy-associated T-cell lymphoma, is an extremely rare, aggressive peripheral extranodal T-cell lymphoma that is infrequent in native European and Caucasian populations. The current study presents the clinicopathological features, diagnostic approach, and clinical outcomes of this rare entity of lymphoma and highlights the importance of the early diagnosis of monomorphic epitheliotropic intestinal T-cell lymphoma (MEITL). Patient concerns: Main symptoms and/or important clinical findings: We present the case of a 69-year-old male patient presenting with an abdominal mass, intestinal transit disorder, and weight loss. The abdominal computed tomography (CT) revealed features suggestive of a malignancy. Following clinical and imaging investigations, surgical resection of the small intestine together with other areas of involvement was performed; subsequent histopathological examination and immunohistochemical testing are mandatory. Diagnoses and Interventions: Histopathological evaluation of the tumor revealed a proliferation of medium- to large-sized monomorphic lymphocytes, with vesicular nuclei, prominent nucleoli, and a moderate amount of clear to pale eosinophilic cytoplasm, with an association of infrequent Reed-Sternberg-like cells. Immunohistochemical assessment of the aforementioned tumor using CD3, CD8, CD5, CD20, and CD30 confirmed the T-cell proliferation line and the monomorphic epitheliotropic intestinal T-cell lymphoma diagnosis. Lessons: The current report highlights the importance of early diagnosis of MEITL owing to its poor prognosis and presents histopathological features that help distinguish MEITL from inflammatory bowel diseases and less aggressive T-cell lymphomas.
Introduction
Monomorphic epitheliotropic intestinal T-cell lymphoma (MEITL) (formerly termed enteropathy-associated T cell lymphoma [EATL], type II) is an extremely rare peripheral T-cell lymphoma in South-Eastern Romania, derived from intraepithelial T cells in the gastrointestinal (GI) tract and unlike the classic form of EATL, tends to behave aggressively and there is no clear association with celiac disease and/or other malabsorption syndromes and inflammatory colitis. [1,2] MEITL was previously subclassified as EATL type 2 and based on distinctive pathological and epidemiological features, and to help distinction from EATL, this disease is no longer referred to as type 1 EATL. [1] Most often MEITL involves the small bowel, particularly the jejunum and ileum, but it can also involve the stomach, colon, and other extraintestinal sites. [3] Histologically, the tumor typically consists of small-to medium-sized monomorphic lymphocytes with hyperchromatic nuclei with inconspicuous nucleoli and a moderate amount of clear to pale eosinophilic cytoplasm and the mitotic activity is mostly brisk, and unlike EATL, there usually is no significant inflammatory background or necrosis. [4,5] Its clinical, morphologic, and immunophenotypic features distinguishing it from the more common EATL (previously EATL type I) make it a separate entity and due to its poor prognosis, it needs to be distinguished from inflammatory diseases and less aggressive T-cell lymphoma. [2,6] MEITL has a worldwide distribution, and it accounts for the vast majority of cases of primary intestinal T-cell lymphoma occurring in Asia and which also appears to occur with increased frequency in males, who are affected more often than females, the male-to-female ratio being approximately 2:1. [1] The small intestine is the most often involved in this type of lymphoma, with the jejunum affected more often than the ileum.
The clinical outcome of patients with MEITL is poor, with a median survival of 7 months; the overall and complete response rates are also poor, at 46% and 48%, respectively. [1] According to a study by Delabie J. et al (2011), elevated serum lactate dehydrogenase (LDH) and C-reactive protein (CRP) levels are risk factors associated with worse overall survival and failure-free survival in MEITL/EATL patients. [3] Neoplastic cells typically display the CD8+ cytotoxic phenotype and are often CD30−, whereas type I EATL is usually CD8− and at least focally positive for CD30. [4] The Proviral Integration site of the Moloney murine leukemia virus (PIM) family is an important mediator of cell survival, comprising 3 ubiquitously expressed serine/threonine kinases (PIM1, PIM2, and PIM3) with a broad range of cellular substrates that promote cell growth, proliferation, and drug resistance; they are overexpressed in a number of human cancers and frequently associated with poor prognosis in most hematological malignancies. [7,8] The identification of PIM kinases as cooperating targets of Proviral Integration of Moloney virus in murine lymphomas suggested early on that these serine/threonine kinases play a significant role in cancer biology. [9] Typically, PIM kinases are activated by signaling pathways downstream of growth factors, cytokines, and mitogenic stimuli, such as the Janus kinase-signal transducer and activator of transcription and nuclear factor kappa B pathways, and heat shock protein 90 also shields them from proteasomal breakdown. [7,9,10] They function by phosphorylating a wide range of proteins, including those that control transcription (MYC, MYB, RUNX1, RUNX3), the cell cycle (p21, p27, CDC25A, CDC25C), protein translation (eukaryotic translation initiation factor 4E-binding protein 1), apoptosis, signaling intermediates (SOCS1, SOCS3, MAP3K5, mTOR, AKT), and drug resistance proteins. PIM kinases collaborate with crucial genes involved in B- and T-cell lymphomagenesis, including c-MYC, B-cell lymphoma 6, and E2A-PBX1, according to studies using transgenic mice. [9] The main purpose of the current study is to report a case of a Caucasian male patient who presented with chronic GI symptoms and normal levels of LDH and CRP at diagnosis, and who was found to have MEITL upon resection of the small intestine with multiple areas of involvement noted on imaging investigations. Moreover, we aimed to determine the presence or absence of PIM1 kinase expression with real-time polymerase chain reaction (RT-PCR), because it is expressed in lymphomagenesis in T-cell lines and primary tumoral T cells.
The current report calls attention to the importance of the early diagnosis of MEITL that will enable proper management, essential for radical treatment, and we also describe some of the features that help distinguish MEITL from other intestinal T-cell lymphomas.
Clinical findings
We report the case of a 70-year-old male patient who was admitted to "St. Apostol Andrei" Emergency County Hospital in Constanta, Romania, with pain in the right upper part of the abdomen, an intestinal transit disorder, and weight loss (10 kg over 6 months). The local examination revealed a palpable mass localized in the right abdominal flank.
His routine blood investigations showed secondary anemia (hemoglobin 9.4 g/dL, hematocrit 32.3%, and mean corpuscular volume 84.8 fl), with grade 3 thrombocytopenia (platelets 48,000/μL) and hypoalbuminemia (2.3 g/dL). The white blood cell, LDH, CRP, and alkaline phosphatase levels, blood chemistry, and the carcinoembryonic antigen, carbohydrate antigen 19-9, and alpha-fetoprotein tumor markers were all within normal limits. Upper GI endoscopic examination revealed gastroduodenitis and a gastric ulcer. Endoscopic examination of the colon revealed a polyp of the ascending colon that was removed and sent to the pathology department. The histological investigation of the colonic polyp revealed a tubular adenoma with low-grade dysplasia.
Computed tomography (CT) of the abdomen and pelvis at the initial diagnosis, with intravenous administration of contrast material showed an ileal loop with thickened, iodophilic, irregular walls, located adjacent to the ascending colon, with infiltration of adjacent adipose tissue, another ileal loop with the same appearance in the right iliac fossa, adenopathies and mesenteric adenopathic blocks (maximum 3.9 × 2.6 cm) and minimum pneumoperitoneum in the right flank and minimal ascites in the pouch of Douglas (Fig. 1).
Following the clinical and imagistic investigations, the patient subsequently underwent small bowel and large bowel resection with ileotransverse anastomosis and latero-lateral enteroanastomosis. The surgical specimens were sent to the Clinical Service of Pathology for morphological evaluation.
He was treated with 6 cycles of (appropriate) chemotherapy with cyclophosphamide, doxorubicin, etoposide, vincristine, and prednisone. Follow-up abdominal CT evaluation after 3 months showed no disease progression. CT evaluation of the chest showed residual pulmonary nodules and an adenoma of the left adrenal gland. Multiple polymerase chain reaction (PCR) tests for COVID-19 were performed, with negative test results.
The patient was discharged from the hospital in a good medical condition.
Histopathological examination
Macroscopically, 3 ulcero-vegetant and infiltrative lesions of 48.5, 2.5 × 4, and 8 × 11 cm were seen arising from the enteral mucosa. The selected surgical specimens were fixed in 10% formalin and paraffin-embedded, then stained with hematoxylin eosin. Microscopic examination (Fig. 2) showed the small intestinal lesions to consist of malignant lymphoid proliferation, with intermediate-to large-sized cells, relatively monomorphic, with vesicular nuclei, with prominent nucleoli and weak eosinophilic/clear cytoplasm. The presence of Reed-Sternberg-type uni-/binucleate cells was sporadically associated.
The described cell population was surrounded by a polymorphic inflammatory background (lymphocytes, plasma cells, frequent eosinophilic granulocytes, macrophages), with a diffusely transmural distribution, formation of lympho-epithelial lesions, mucosal ulceration, and extension to the adjacent colonic loops.
Areas of necrosis, vascular thrombosis, and endotheliitis were identified.
The mesenteric lymph nodes showed partial involvement by the same atypical lymphoid cells noted in the small intestine. Ileal and colonic resection margins were free of tumor, and hypertrophy and hyperplasia of the lymphoid follicles of the appendix were also found.
The slides were evaluated using a Nikon Eclipse E600 microscope, and representative photos were taken from digital whole-slide images obtained with a Huron TissueScope 4000XT scanner. [11]
Immunohistochemistry evaluation
Further to the histopathological examination, immunohistochemical testing was mandatory to establish the lineage of the proliferation. Immunohistochemical evaluation (Fig. 3) was performed on 4-μm-thick sections of a representative formalin-fixed, paraffin-embedded tissue block from the enteral lesion samples. After epitope retrieval, tissue sections were incubated with a panel of 5 ready-to-use monoclonal mouse antibodies from BIOCARE Medical (Table 1).
In the present study, immunophenotyping showed the intermediate- to large-sized cells to be of T-cell origin, strongly positive for CD3 (Fig. 3A) and CD8 (Fig. 3B); CD5 was positive in scattered cells within the tumor (Fig. 3C), and cells focally positive for CD30 (Fig. 3E) were observed. CD20 was negative (Fig. 3D). Based on the morphology and immunoprofile, a diagnosis of monomorphic epitheliotropic intestinal T-cell lymphoma with Reed-Sternberg-like cells was made.
Genomics
Pim oncogenes are overexpressed in a variety of tumors of both hematological and epithelial origin. The serine/threonine kinases encoded by Pim genes have been shown to counteract the enhanced sensitivity to apoptosis induction that accompanies MYC-driven carcinogenesis. Characterization of the PIM-mediated survival signaling pathways has advanced significantly in recent years. These oncogenes may be attractive targets for highly specific and selective treatments with favorable toxicity profiles, owing to the distinctive structure of their active sites and the limited phenotype of animal mutants for all Pim family members.
Pim kinases are serine/threonine kinases with constitutive activity and therefore do not require posttranslational activation. Their activity is mostly controlled at the transcriptional and translational levels, since their mRNAs and proteins have very short half-lives.
Quantitative RT-PCR, ribonucleic acid extraction and results of quantification
Total RNA was extracted and purified using the RNeasy FFPE Kit (Qiagen, Germany) in accordance with the manufacturer's instructions, in order to measure the RNA levels of this gene in MEITL cell lines, primary tumoral T cells, and normal T cells, as well as to assess the efficacy of PIM1 gene knockdown. Quantitative RT-PCR was used to gauge the expression of the PIM1 gene. The resulting complementary DNA was loaded onto a plate with a TaqMan probe (TaqMan SNP Assay for PIM1), and PCR amplification was conducted on an Applied Biosystems 7500 Fast Real-Time PCR System (Life Technologies). Relative quantification was computed using the comparative Ct method, and the result showed that the PIM1 gene has low expression.
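To illustrate the relative quantification step, the short Python sketch below shows how a comparative Ct calculation is commonly carried out; the Ct values, the reference gene, and the sample labels are hypothetical assumptions for illustration only and are not data from this case.

def relative_expression(ct_target_sample, ct_ref_sample, ct_target_calibrator, ct_ref_calibrator):
    # Normalize the target gene (e.g., PIM1) to a reference gene in each sample,
    # then compare the sample of interest to the calibrator: fold change = 2^(-ddCt).
    delta_ct_sample = ct_target_sample - ct_ref_sample
    delta_ct_calibrator = ct_target_calibrator - ct_ref_calibrator
    delta_delta_ct = delta_ct_sample - delta_ct_calibrator
    return 2 ** (-delta_delta_ct)

# Hypothetical Ct values for PIM1 and a reference gene in tumoral versus normal T cells
fold_change = relative_expression(
    ct_target_sample=31.8,      # PIM1 Ct, tumoral T cells (hypothetical)
    ct_ref_sample=22.0,         # reference gene Ct, tumoral T cells (hypothetical)
    ct_target_calibrator=28.5,  # PIM1 Ct, normal T cells (hypothetical)
    ct_ref_calibrator=22.1,     # reference gene Ct, normal T cells (hypothetical)
)
print(f"PIM1 fold change relative to normal T cells: {fold_change:.2f}")
# A value well below 1 would correspond to low PIM1 expression, as reported above.

Under this calculation, a fold change below 1 means lower expression in the tumoral sample than in the calibrator, which is how a finding of "low expression" is typically expressed with the comparative Ct method.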
Discussions
The current case report describes a primary intestinal T-cell lymphoma with immunophenotypic features consistent with monomorphic epitheliotropic intestinal T-cell lymphoma.
Recent data have led to changes in the categorization of intestinal T-cell lymphomas, and it has become apparent that the 2 subtypes formerly designated as variants of EATL are distinct. [1] Type I EATL, now simply designated as EATL, is closely linked to celiac disease and is primarily a disease of individuals of northern European origin. Type II EATL, now formally designated as monomorphic epitheliotropic intestinal T-cell lymphoma, shows no association with celiac disease and appears relatively increased in incidence in Asian and Hispanic populations. [1] MEITL is generally positive for CD3 and CD8 and negative for CD5 and CD30. [1] There are several reports with conflicting opinions about Type II EATL: according to a study from the International T-cell Lymphoma Project, both types of EATL are associated with celiac disease, although multiple reports from Asia showed no such association. [3,4,12-18] Although there is no established association between MEITL and celiac disease, recent reports have suggested that some cases of MEITL are preceded by a variant of celiac disease. [19] In the present case, there was no association with celiac disease.
Regarding the phenotype of this lymphoma, MEITL was originally described as an intestinal T-cell lymphoma with a CD8+ CD56+ phenotype, but according to multiple studies neither marker is mandatory for diagnosis, and cases lacking either or both of these molecules have been reported. [4,12,13,16,20-23] Lymphocytes in the intra-epithelial compartment are described as having a phenotype that is similar to, or consistently discordant with, that of the invasive tumor, [15,22] and they must also express some pan-T-cell markers such as CD2, CD3, CD5, and CD7. Expression of CD8 and CD56, although typical for this neoplasm, is not considered mandatory for the diagnosis. [12,13,16,21] In the current case, immunophenotyping showed positive expression of CD3, CD5, and CD8, confirming earlier studies on the typical expression of CD8 and other T-cell markers in the diagnosis of MEITL.
According to Delabie et al, the small intestine is the most commonly involved site (90% of cases) in EATL and MEITL patients, and high serum LDH and CRP levels may reflect a high tumor burden and extensive tissue damage, which may explain the adverse prognosis. [3] The same study showed that 65% (37/57) of MEITL/EATL cases did not have an elevated LDH level. [3] In the current case, the patient's laboratory investigations showed normal CRP and LDH at admission to our hospital, in spite of widespread lesions. Consequently, the possibility of malignant lymphoma, including MEITL, and the elevated risk of intestinal perforation should be considered even in cases with low CRP and LDH levels, because an early diagnosis of MEITL may contribute to the prevention of perforation and the implementation of successful chemotherapy. [24] Inflammatory bowel disease, indolent T-cell lymphoproliferative disease, EATL, and intestinal natural killer/T-cell lymphoma are differentials of MEITL, and the most important features that help differentiate MEITL from other types of T-cell lymphoma are the epitheliotropic pattern, the monomorphic cell shape, and positive immunophenotyping for CD8. [2,25]
Conclusions
In conclusion, we described a rare case of MEITL with low expression of the PIM1 gene that was correctly diagnosed based on typical immunophenotypic features, epitheliotropic patterns, and monomorphic cell shape. In view of the differences in epidemiology and clinicopathologic features, including the lack of association with celiac disease, we strongly believe that it is justified to separate type II EATL from the EATL category as a distinct form of lymphoma. | 2022-12-02T05:21:52.798Z | 2022-11-25T00:00:00.000 | {
"year": 2022,
"sha1": "83974e4582814e9e52ad90b82f81de346af58a78",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1097/md.0000000000031951",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "83974e4582814e9e52ad90b82f81de346af58a78",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253083477 | pes2o/s2orc | v3-fos-license | The Public Health Response to COVID-19 in Vietnam: Decentralization and Human Rights
Human rights constitute a universal concern in different countries’ responses to COVID-19. Vietnam is internationally praised for its success in containing the pandemic; nevertheless, human rights issues are a key area that needs to be assessed and improved. Little legal and ethical research is available on human rights in Vietnam, particularly in its response to COVID-19, however. In Vietnam, decentralization took place during the pandemic: higher authorities delegated power to lower ones to make and implement public health measures. Unfortunately, many measures made and implemented decentrally caused human rights concerns or breaches. This article aims to study what makes such measures cause human rights concerns or breaches. It argues that several social, legal, and political factors, including an inadequate understanding of human rights, the undefined breadth of discretion, and lack of supervision, are underlying factors for such problematic decentralized measures. Accordingly, this paper proposes two solutions (i) improving the supervision of the decentralization process, and (ii) improving the understanding of human rights. While Vietnam should learn from the international community to improve its measures, lessons and experience from Vietnam can also contribute to a richer dialogue and better protection of human rights globally.
Introduction
Globally, human rights have been overlooked and vulnerable during the COVID-19 pandemic (Dąbrowska-Kłosińska 2021; Desierto 2020, 2021; de Mesquita et al. 2021). In different countries, this global problem takes nuanced shapes, though, with close investigation, some common patterns might be found (ten Have 2022). It follows that studying different countries not only helps these countries to spot their problems and find solutions but also helps the global community to understand some common mistakes. Internationally, Vietnam is praised for its success in containing the pandemic, especially in the early stages (Ivic 2020; Pollack et al. 2021; Quach et al. 2021; Thao and Đào 2021). Notwithstanding, its approach is not spotless. Human rights issues are the key area to be assessed and improved. However, the literature studying this issue, especially work informed by an in-depth understanding of the Vietnamese context, is still scarce.
In Vietnam, decentralization took place during the pandemic: higher authorities delegated power to lower ones to make and implement public health measures. Unfortunately, many measures made and implemented decentrally caused human rights concerns or breaches. This article aims to study what makes such measures cause human rights concerns or breaches. This question shall be addressed in the five sections below.
This paper first studies three controversial Vietnamese cases concerning public health measures in "Cases". In fact, there are more than three controversial cases; however, due to space constraints, only three notable cases are retold. The reason for retelling these three cases is that, first, they caused wide public outcries. Second, either the Government or the local authorities recognized problems concerning them, though they might not have done so explicitly or recognized that human rights were breached. By retelling these cases, this paper gives a sense of how controversial some public health measures might be. In "What Human Rights Are at Stake?", the paper then goes on to figure out, from the three cases retold, what human rights are at stake in each case. The three cases also demonstrate how measures have been made decentrally. Due to space constraints, I shall not retell in detail how, procedurally, measures were made and implemented decentrally. In "The Need for Decentralization", I explain why, in opposition to the observation that centralization does better in managing the pandemic (ten Have 2022), there are reasons for decentralization in Vietnam and, perhaps, many other countries. In "Why does Decentralization go Wrong?", this paper offers a contextualized account of what makes such measures cause human rights concerns or breaches. It argues that several social, legal, and political factors, including an inadequate understanding of human rights, the undefined breadth of discretion, and a lack of supervision, are the underlying factors behind such problematic decentralized measures. Finally, this paper proposes two solutions to prevent or minimise human rights concerns or breaches: (i) improving the supervision of the decentralization process, and (ii) improving the understanding of human rights.
Nha Trang City is in the course of implementing Directive 16… Currently, restaurants, diners and non-essential businesses in the area are closed; even takeout is prohibited. For food supply, the city had previously developed a plan to issue shopping permit cards (thẻ đi chợ) which permit people to go to the market once every 3 days. … It is illegal for Mr E to go out to buy bread and drink water. At the same time, the inspection team … has an inappropriate attitude (Chiến 2021).
In short, the Chairman of the People's Committee of Nha Trang City might have implied that what was wrong was only the rude attitude of the Ward's Deputy Chairman, not his decision.
The position of the Deputy Chairman of Vinh Hoa Ward received only minor support, from Force-47 groups and extremist individuals heavily influenced by the discourses of Force-47 groups. The justification for the measure of Vinh Hoa Ward and Nha Trang City is that if citizens can freely go out to buy food, they can pretend to buy food in order to go out and walk around; consequently, the policy aim of the measure would fail should any loopholes be exploited. However, the majority of the population did not accept the Deputy Chairman of Vinh Hoa Ward's behaviour and justification.
In response to the public outcries, on the morning of the 20th, the Chairman of the People's Committee of Nha Trang City issued a letter recognizing that 'the Deputy Chairman of Vinh Hoa Ward was misunderstanding the Directive' and apologised to E on behalf of the agencies of Nha Trang City (Toàn 2021). The Deputy Chairman of Vinh Hoa Ward then publicly apologised to E. The Government's Facebook page ('Thông tin Chính phủ') and the Government News Portal also publicly reported on the apology process, including the letter of the Chairman of the People's Committee of Nha Trang City and the personal apology from the Deputy Chairman of Vinh Hoa Ward (Báo Điện tử Chính phủ 2021).
Case 2: Coercing to Take a COVID-19 Test Administered by Public Officers
The second case concerns a woman physically forced to take a COVID-19 test administered by public officers. At the end of September 2021, in a video circulating on YouTube and in Facebook groups and pages, a woman was teaching an online yoga class in her apartment when someone knocked on the door and demanded that she leave her room to get a COVID-19 test administered by public officers. The woman replied that she had already conducted a COVID-19 rapid antigen test (RAT) by herself, adding that she did not want to come into contact with others and that she was in the middle of teaching a class (Tuổi Trẻ 2021b). Then, a group of officials, police officers, and members of the apartment complex's management board, headed by the Party Secretary of Vinh Phu Ward's Party Committee, cut the door lock, coerced her, and escorted her to the building's courtyard for a nasal swab collection amidst her babies' cries.
As soon as the video was circulated, public outcries broke out. In response, a representative of the official team noted that the woman's apartment was exposed to high risk, as infections had been detected in the block (Tuổi Trẻ 2021b). Then, in a press conference following the event, to justify the measure implemented by the Party Secretary of Vinh Phu Ward, Thuan An City's Party Secretary argued that 'at a time when the whole political system is straining itself against the COVID-19 pandemic, all laws cannot be applied normally…' (Dũng 2021). Notwithstanding, the Secretary of Vinh Phu Ward's Party Committee ultimately apologised to the woman. There was, however, no further remedy.
Case 3: Culling Dogs
The third case concerns a situation in which 13 dogs belonging to a married couple were culled in the middle of October 2021. After the dogs' owners tested positive for SARS-CoV-2, Tran Van Thoi District's health authorities found abnormal substances on the dogs' fur and decided to cull them, fearing the animals carried the virus (Tuổi Trẻ 2021c). News reports indicated that there was no clear evidence that further investigation or alternative measures were considered or implemented. The culling stirred up an uproar on Facebook, where people criticized the decision as unscientific and cruel (Tuổi Trẻ 2021c).
In a press conference following the event, the Chairman of the Tran Van Thoi District People's Committee admitted that "the animals should have been culled only when they had been confirmed to be infected with the disease"; nevertheless, he argued that such a solution was unfeasible given the pressure and limited capability: "If we had had better management capacity, we could have isolated the pets, disinfected them, and monitored them carefully" (Tuổi Trẻ 2021d). In the prevention and control of COVID-19, [due to lacking resources] ensuring people's health and preventing cross-infection in quarantine facilities would be the top priorities; the culling was hence necessary, he concluded (Tuổi Trẻ 2021d). The Chairman of the Tran Van Thoi District People's Committee also claimed that officials had sought consent from the dogs' owners. Notwithstanding, the dogs' owners claimed that the authorities did not seek consent in advance.
What Human Rights are at Stake?
In this section, I shall discuss what human rights are at stake in each case. I shall use the terms 'human rights concerns' and 'may violate human rights' to signify that what is discussed here is in no way conclusive. The reason for this is that, while the terminology of 'human rights' can be nuanced when used in different fields (law, philosophy, or ethics), the term 'human rights violations' has a strong legal connotation and must be defined with clarity, certainty, and accuracy by referring to a set of legal standards. Specifically, to assess whether measures that restrict human rights violate any rights, it is necessary to assess whether such measures comply with the Siracusa principles. According to the Siracusa principles, as envisaged under Article 12 of the ICCPR and developed at the Siracusa Conference, state actions that limit human rights and freedoms must be: in accordance with the law; based on a legitimate objective; strictly necessary in a democratic society; the least restrictive and intrusive means available; and not arbitrary, unreasonable, or discriminatory. As the Siracusa principles also recognize the pluralism of models of a democratic society and differences in socio-economic status among countries, a practice can be a violation in one jurisdiction but not in another. It would follow that asserting that a practice has violated human rights can sometimes be far from straightforward and very delicate, especially in contentious bioethics and health issues. For example, in X, Y, and Z v. the UK, the ECtHR held that '… since the questions raised by the case touch on areas where there is no clear common ground among the Member States, the Court considers that the margin of appreciation to be afforded to the respondent State must be a wide one' (para. 44).
Though the 2013 Constitution, for the first time in the history of the Socialist Republic of Vietnam, explicitly articulates the protection of human rights under Article 3 ('The State shall … recognize, respect, protect and guarantee human rights and citizens' rights…'), recognizes the applicability of the Siracusa principles under Article 14 ('1. In the Socialist Republic of Vietnam, human rights and citizens' rights in the political, civil, economic, cultural and social fields shall be recognized, respected, protected and guaranteed in accordance with the Constitution and law. 2. Human rights and citizens' rights may not be limited unless prescribed by a law solely in case of necessity for reasons of national defense, national security, social order and safety, social morality and community well-being'), and dedicates a whole chapter (Chapter II) to human rights, Vietnamese jurisprudence on human rights issues and specific judgments or official statements in this regard are still absent. To claim and demonstrate how some public health measures are problematic and may violate human rights, I refer to articles of the ICCPR, the ICESCR, and the Vietnamese Constitutions in conjunction with the jurisprudence of other jurisdictions. Also, this paper shall not engage with the legal consequences of the declaration of a public health emergency (including its implications for human rights), since the Vietnamese Prime Minister Phạm Minh Chính announced clearly that, after careful consideration, a State of Emergency would not be declared, though he admitted that emergency measures had already been implemented (Khuyên 2022; Long and Lê 2021). That means the emergency powers have been used without the formal enactment of a state of emergency. The Government's decision was driven by two factors. As a matter of policy, the Government worried that declaring a 'State of Emergency' might cause public fear, thereby disturbing public order and escalating the situation. As a matter of law, in line with the Soviet legal tradition, the [Vietnamese] state has seen the law as a mere tool of the ruling class, embodying the mind of that class (Duẩn 1980; Tâm 2009); as a corollary, the law can be flexed to cope with policy needs promptly without resorting to extreme measures whose ramifications would be hard to predict. In fact, globally, many other countries have taken the same approach as Vietnam (Dimitrovs 2020; Dzehtsiarou 2020; Kovács 2020).
Case 1: the Right to Freedom of Movement
The right to freedom of movement (as opposed to the right to liberty and security) is relevant as the measures at hand did not ban all movements: public offices and construction sites were allowed to open and operate, and citizens were allowed to leave their homes once per three days to buy food.
Pursuant to Article 13 of the UDHR and Article 12 of the ICCPR, 'Everyone lawfully within the territory of a State shall, within that territory, have the right to liberty of movement…' (the same is prescribed under Article 2 of Protocol 4 to the ECHR, Article 12 of the African Charter on Human and Peoples' Rights, Article 22 of the American Convention on Human Rights, Paragraph 15 of the ASEAN Human Rights Declaration, and Article 23 of the 2013 Constitution).
Vinh Hoa Ward's measures pursue a legitimate objective, namely the protection of public health, and their proponents have tried to argue that strict measures are practically necessary: as long as citizens can freely go out to buy food, they can pretend to buy food in order to walk around; consequently, there were loopholes to be exploited. Notwithstanding, the measures can be problematic from the perspective of human rights law. Although the pandemic is a grave and present threat, the measures aim to curb the spread of the virus, and it is a consensus in every legal system (at the national, supranational, and international levels) that authorities should enjoy a margin of appreciation (Evans v. the United Kingdom, para. 77; Koufaki and Adedy v. Greece, para. 37; Vavřička and Others v. the Czech Republic, paras. 274-275), to be legally legitimate, restrictions must still satisfy the Siracusa principles, which the measures of Vinh Hoa Ward seem to fail to do.
First, all restrictions must always have a legal basis. Pursuant to General Comment 27 on freedom of movement, 'The law has to establish the conditions under which the rights may be limited. Restrictions which are not provided for in the law would violate the right to freedom of movement' (para. 12).
In the case of Vietnam, there was no law banning movement for essential needs like buying food. The Law on Prevention and Control of Contagious Diseases (LPCCD), which regulates the measures that can be taken by the authorities to prevent and control contagious diseases with human-to-human transmission, provides a range of measures for the prevention and control of infectious diseases, notably Article 53 on controlling zones infected with class-A epidemic diseases (i.e. diseases that transmit very rapidly and spread widely, with high mortality rates or with unknown agents) and Article 54 dealing with a state of emergency in case of an epidemic.
Pursuant to Article 53, measures for controlling entry into and exit from zones infected with class-A epidemic diseases include (i) restricting persons and means of transport from entering and leaving epidemic zones; in case of necessity, medical inspection, surveillance, and disposal shall be conducted; (ii) prohibiting transportation from epidemic zones of articles, animals, plants, food, and other commodities capable of transmitting the epidemic disease; (iii) taking personal protection measures, for persons entering epidemic zones specified in Clause 1, Article 51 of this Law; (iv) Other necessary measures as prescribed by law.
Pursuant to Article 54, when declaring a state of emergency in case of an epidemic, the head of the steering committee has the following powers: (i) mobilizing and requisitioning resources specified in Article 55 of this Law; (ii) placing signboards, guard stations, and instructions on travel bypassing epidemic zones; (iii) requesting medical inspection and disposal of means of transport before they leave epidemic zones; (iv) prohibiting mass gathering and other activities likely to transmit the epidemic disease in epidemic zones; (v) prohibiting persons and vehicles from entering epidemic foci, except for those on duty; (vi) conducting disinfection and sterilization on a large scale; (vii) culling animals and destroying food and other articles likely to transmit the epidemic disease to humans; (viii) taking other measures.
On its face, although Article 53 and Articles 54.2.d and 54.2.e of the LPCCD can be interpreted as permitting a lockdown over large regions by defining the epidemic zones to the broadest possible extent, e.g. a province, groups of provinces, or even the whole nation, overtly restricting most sorts of movement, including movement to buy food, seems to be problematic. The permission to take any 'other necessary measures' could also be used to justify Vinh Hoa Ward's measures, though this permission itself seems problematic because it lacks a well-defined regulatory breadth and criteria for restricting human rights. Notwithstanding, in practice, the measures of local authorities did not invoke the LPCCD but Directive No. 16/CT-TTg dated 31 March 2020 (Directive 16) of the Prime Minister instead. This Directive stipulated that all facilities except essential ones were to be closed. Gatherings of more than two persons in public places were prohibited. Directive 16 is not a law or bylaw regulation, however. Pursuant to Article 30 of the Law on Organizing the Government: 1. The Prime Minister shall promulgate legislative documents within his/her jurisdiction so as to perform his/her duties and powers, inspect the implementation of such documents and deal with documents in breach of the Constitution and legislation. 2. The Prime Minister shall act on behalf of the Government to sign the Government's documents; issue decisions, directives and instructions, and examine the implementation of such documents in state administrative organs from the central level to the local level.
This means the scope of application of the Prime Minister's Directive 16 was internal to the Government's structure. It is not a legislative legal document that can be applied widely to the citizenry. Indeed, former Prime Minister Nguyễn Xuân Phúc, who issued Directive 16, did claim that it was not a legal document and not binding (Tuân 2020). Moreover, restricting movement for buying food (i.e. bread) also seems not to be aligned with this Directive.
Second, the laws authorizing the application of restrictions should use precise criteria and may not confer unfettered discretion on those charged with their execution (General Comment 27, para. 12). The measures of Vinh Hoa Ward are problematic because defining what is 'essential' was at the unfettered [and unreasonable] discretion of those in charge of execution. It is hard to see why the authorities of Vinh Hoa Ward allowed indoor public offices and construction sites to open but banned the making of bread and movements to purchase bread. It is against common sense for the authorities of Vinh Hoa Ward to claim that bread is not food. The power [to define what is essential] was observably conferred without restraint and was misused by the authorities of Vinh Hoa Ward.
Third, the measure at stake must be (i) necessary, (ii) appropriate to achieve its protective function, (iii) in conformity with the principle of proportionality, and (iv) the least intrusive instrument among those which might achieve the desired result. Though restricting movement does help cut chains of infection, it is not clear whether a blanket ban on almost all activities, as in this case, satisfies the requirement of proportionality and is the least intrusive measure. The arguments of the proponents of Vinh Hoa Ward's measures are invalid because, as asserted by the Government's public health experts, the purpose of the policy is not to ban or restrict movement but to protect public health (VOV 2021); hence, the implementation of measures should not be about trying to ban all movements, or as many movements as possible.
In Communauté Genevoise D'action Syndicale (CGAS) v. Switzerland, the majority of the ECtHR ultimately concluded that the COVID-19 measure of Switzerland was disproportionate in light of a number of factors, including the importance of freedom of assembly in a democratic society, the long duration of the absolute prohibition, the fact that a range of other activities (including indoor ones) remained permitted under the relevant COVID-19 regulations, and the threat of criminal sanctions (Smet 2022).
In the Vietnamese case, the sanction against movement was less coercive; however, the importance of finding subsistence, the fact that a range of indoor activities remained permitted, and the sine die nature of the measures at that time should be taken into account in finding a possible breach of the right to freedom of movement.
Case 2: the Right to Security of Person and the Right to Respect for Privacy
Article 3 of the UDHR, Article 9 of the ICCPR, and equivalent articles under other conventions provide for the protection of three separate but related rights: the right to life, the right to liberty, and the right to security of person. It needs to be made clear that the case at hand relates to the right to security of person (not the right to life or the right to liberty). Security of person concerns freedom from injury to the body and the mind, or bodily and mental integrity (General Comment No 35, para. 3). State agents must refrain from treatment which damages a person's physical health or causes them mental or psychological harm (European Court of Human Rights 2015).
Article 12 of UDHR, Article 17 of ICCPR, and equivalent articles under other conventions provide for the protection of some interrelated rights which are the right to respect for privacy, family, home, correspondence, and protection of honour and reputation. The right to respect for privacy, family, home and correspondence, and protection of honour and reputation is required to be guaranteed against all arbitrary interferences and attacks whether they emanate from State authorities or natural or legal persons (General Comment No 16, para. 2). States are under both positive and negative obligations to secure the right to effective respect for physical and psychological integrity. In any event, measures should be reasonable in particular circumstances. Compliance should be guaranteed de jure and de facto. Even concerning interferences that conform to the Covenant, relevant legislation must specify in detail the precise circumstances in which such interferences may be permitted. A decision to make use of such authorised interference must be made only by the authority designated under the law, and on a case-by-case basis (General Comment No 16, para. 8). Free and informed consent to medical treatment is under the auspice of this right.
In South Africa, compelled testing is generally not permitted, except when supported by a court order. In C v Minister of Correctional Services (1996), informed consent as a necessary prerequisite to HIV testing was reaffirmed, with pre-counselling forming a key part of the process to obtain such informed consent. Meanwhile, under the jurisdiction of the ECtHR, the Court has found that relatively minor medical tests which are compulsory (see Acmanne and Others v. Belgium, Commission decision; Boffa and Others v. San Marino, Commission decision; Salvetti v. Italy (dec.)) or authorised by court order (X v. Austria, Commission decision; Peters v. the Netherlands, Commission decision) may constitute a proportionate interference with Article 8, even without the consent of the patient. Within the context of Vietnam, Article 21 of the LPCCD on the contents of infectious disease surveillance provides a ground for mandatory testing; accordingly, '…In case of necessity, competent health agencies may take testing samples from persons suspected of suffering infectious diseases for supervision.' Vietnamese law does not require an order from a court or equivalent body for mandatory testing. This may pose constitutional questions as to whether such orders should be required for mandatory testing [by the law]. Besides, terms such as 'necessity' or 'suspected' are not well explained, which can result in the endowment of unfettered discretion on those charged with their execution. Notwithstanding, because there is evidence that buildings with heating, ventilation, and air conditioning (HVAC) systems that do not exchange stale air for fresh air can place all occupants at risk of COVID-19 infection (MoH 2022; CDC 2021; McKee 2021), the level of risk and the urgency of testing should be recognized. However, what is contested in this case is that mandatory testing and mandatorily taking a test administered by public officers in a public space need not be the same. A mandatory self-administered RAT combined with tele-supervision could be less intrusive and reach the same policy target. Besides, it is doubtful whether packing people into a dense environment to force them to take a COVID-19 test administered by public officers, while medical supplies such as medical gloves are in shortage, helps prevent and control disease or instead worsens the situation. Moreover, a legal provision that prescribes mandatory testing does not provide a ground for coercive testing by breaking into homes or using force. Bearing these considerations in mind, Vinh Phu Ward's measures can be disproportionate. Depending on the assessment of whether the interference with the body and private space reaches the threshold of damaging a person's physical health or causing them mental or psychological harm, Vinh Phu Ward's measures may violate either or both the right to life, liberty, and security of person and the right to respect for privacy, family, home, and correspondence, and protection of honour and reputation.
Case 3: the Right to Property
This case relates to the right to property (see Article 17 of the UDHR, Protocol 1 to the ECHR, Article 14 of the African Charter on Human and Peoples' Rights, Article 21 of the American Convention on Human Rights, and Paragraph 17 of the ASEAN Human Rights Declaration). The 2013 Constitution particularly stresses the right to private property (Article 23, Article 32, Article 51).
Vietnamese law provides a ground for culling animals provided that such animals are vectors of transmission of diseases to humans or other animals (see Articles 50 and 54.2.g of the LPCCD; Article 25 of the Law on Veterinary Medicine). Article 46 of the LPCCD prescribes that an anti-epidemic steering committee has the tasks of taking anti-epidemic measures and overcoming epidemic consequences, and of setting up mobile anti-epidemic teams to directly render first aid, provide medical treatment, and deal with epidemic foci. Reading Articles 46, 50, and 54.2.g of the LPCCD together allows the interpretation that (i) there is a legal foundation for culling, and (ii) the power to order the culling of 'infected' animals rests with the chairman of the Tran Van Thoi District People's Committee, public health officials, and veterinary stations. What is required next is to assess whether the culling was proportionate and whether any other measures were less restrictive and intrusive. In assessing proportionality, the [urgency of the] context, the [limited] resources at stake, and the margin of appreciation should also be taken into account. Even if the proportionality condition is satisfied, to determine the legality of measures, procedural matters, i.e. seeking informed consent and providing compensation, should be factored in. In this case, there is no clear evidence that, before culling the dogs, other less intrusive measures had been considered and appraised (the Chairman of the Tran Van Thoi District People's Committee justified this on the ground of urgency), that informed consent had been collected, or that a sufficient remedy was provided.
The Need for Decentralization
Contrary to some claims of bioethicists that centralization might be more effective in handling emergencies (ten Have 2022), from the above cases it is clear that many measures have been made and implemented decentrally, primarily by commune authorities. The question is why commune authorities have been put in the driving seat of making [such controversial] policies.
Actually, contrary to bioethicists' observations, the fact that decentralization has taken place is hardly surprising. It is a conventional shared belief that, amidst emergencies, power needs to be decentralized and deference needs to be given to executive power in response to the demand for swift, decisive, effective, and appropriate decisions (Dung 2021; Giao and Đức 2021; Locke 1967). Indeed, the Vietnamese Government recognized this conventional shared belief. It claimed that decentralization in public health decision-making and implementation facilitates timely and innovative decisions that take into account the specific local context and avoid bureaucratic delays (Nguyen 2021). Indeed, over-centralization can be slow and disconnected from the social context of different localities: not only does the spread of disease differ among regions, but it is arguably true that 1000 infectious cases detected in populous cities can be more alarming than 1000 infectious cases in the countryside. Besides, centralized decisions, predictably, might cause uneven damage to different localities. For populous and large countries, decentralization is somewhat unavoidable: it is more accurate to say that population and geographical factors determine whether effective centralization can take place than to say that centralization is more efficient than decentralization. It would follow that, for many countries, there are reasons for decentralizing and for decentralization being institutionalized as a response to disease outbreaks. The LPCCD recognizes this. Article 46 of the LPCCD requires that, when an outbreak of a contagious disease is declared, the central government and the local governments establish steering committees for the prevention and control of contagious diseases [the steering committee(s)]. The steering committees are the bodies in the driving seat of organizing the enforcement of public-health measures (Article 46.3).
Another reason that explains why decentralization took place is that, given the context of the pandemic or public health emergencies, running a centralized policy-making process is like carrying a mountain of responsibility on one's shoulders. Typically, the centralized policy-making process must opt for either millions of lives or millions of dollars (and hence, again, millions of lives). Any miscalculation may cost a whole nation too much. This is significantly stressful, both in terms of morality and of political responsibility. Making decisions is never easy; making decisions that can affect millions of lives, especially in a situation where millions of lives have been lost, is traumatic. As such, authorities might be inclined to delegate their power to a lower office to shed the burdens of moral and political responsibility. Decentralization, as such, can be used as a tool to mask and shield officials from moral and political responsibilities. In reality, the central government delegated power to provincial authorities. Provincial authorities then passed documents which stated that the city or district (city) authorities had an obligation to make and implement measures and to be responsible for such measures. In turn, the city authorities delegated power and responsibility to commune or ward (commune) authorities.
Why does Decentralization go Wrong?
From these cases, it can be observed that many problems occur with decisions made and implemented decentrally, especially by commune authorities. The question then is why commune authorities, or other local authorities, may commit human rights breaches. This paper argues that several social, legal, and political elements are underlying factors, including a poor understanding of human rights, the undefined breadth of discretion, and a lack of supervision.
Inadequate Understanding of Human Rights
Even in the scenario of centralization, as the implementation of measures needs to respect human rights, an adequate understanding of human rights matters a lot for public health measures. It is even more important in the scenario of decentralization.
Unfortunately, on the one hand, human rights have been cast into doubt constantly during the pandemic. Observing how the 'ambitious', 'stringent', and 'aggressive' measures deployed by the Chinese Government in the first wave of the COVID-19 outbreak in Wuhan could be highly effective has lent credit to criticism of human rights, namely the view that democratic principles and human rights are irrelevant and burdensome in 'the war' against the pandemic (Huang 2020). Such sentiments have flooded Chinese and Vietnamese social networks and can also be found in the narratives of decision-makers, such as Thuan An City's Party Secretary, for example.
On the other hand, it should be noted that, for a while, human rights have not been purely legal but also politicized and political. The conservative wing in Vietnamese society and public offices is quite hostile to human rights, seeing the discourse of human rights as a mere tool of colonialism and interventionism; put differently, the discourse of human rights is seen as having been exploited by hostile forces ('thế lực thù địch') and monopolized by foreign forces to criticize governmental affairs and fuel internal disorder, thereby creating justifications for external interventions. Human rights have long been cast in a negative light and distorted. Meanwhile, human rights education and training have been inadequate. Only recently, on 5 September 2017, did the Vietnamese Prime Minister issue Decision No. 1309/QD-TTg, approving the Scheme to incorporate human rights content into the national education system's curricula. Still, materials on human rights education and training, including core international instruments, have not been adequate (Giao and Tùng 2008; Minh 2021).
As this paper will demonstrate, given the undefined breadth of discretion and the lack of supervision, it is not surprising that local authorities who have not been adequately equipped with human rights knowledge, or who even hold hostility towards human rights, might breach human rights. Even for officers who are not charged with, or capable of, inventing measures, an inadequate understanding of human rights may result in just-following-orders acts which, as shown in 'Eichmann in Jerusalem' (Arendt and Kroh 1964), are a form of the banality of evil.
The Undefined Breadth of Discretion: Socialist Legality
Under Article 46.3 of the LPCCD, the steering committees on prevention and control of diseases are the bodies in the driving seat of organizing the enforcement of public-health measures, and under Articles 53 and 54 of the LPCCD, the chairs of the steering committees are able to take any 'other necessary measures'. Reading these provisions together, it is not clear whether the LPCCD vests in the steering committees only the power to implement measures or both the power to make and to implement measures. In the latter case, the question is how broad the discretion to 'invent' measures is. It seems that the breadth of discretion is not well defined and not aligned with the 2013 Constitution and the Siracusa principles.
Socialist legality might be the primary reason for the undefined breadth of discretion. Since the 1950s, the Soviet legal-politico tradition has been embraced in Vietnam (Nghia and Ha 2018). A key pillar of Soviet legal-politico theory is 'socialist legality' (pháp chế xã hội chủ nghĩa). According to 'socialist legality', the law is a mere instrument to concretize the ruling class's control over the means of production, reflecting the 'will of the ruling class' (ý chí của Giai cấp thống trị) (Duẩn 1980; Tâm 2009). It is not difficult to see that 'socialist legality' provides a ground for the undefined breadth of discretion.
First, in light of socialist legality, all other constitutions, except for the 2013 Constitution, allow human rights to be restricted by fiat if deemed necessary. The provisions of the LPCCD that endow local officers with an undefined breadth of discretion are an expression of socialist legality and are also in line with the 1992 Constitution (Dung 2021).
Second, in light of socialist legality, the law can be elastic, depending upon the state's mind. Consequently, cogent legal reasoning or justifications for discrepancies in policies and measures are not obligatory and are deemed unnecessary. For example, in 2020, then-Prime Minister Nguyễn Xuân Phúc interpreted Directive 16 as neither a lockdown nor a traffic ban that would result in 'blocking the river or prohibiting markets' (ngăn sông cấm chợ) and warned against divergent, inconsistent, and excessive measures by local authorities (Anh 2020; Tuân 2020); however, in light of the Delta wave, several official dispatches and telegrams from the Government, e.g. Official Telegrams 1099/CĐ-TTg and 1102/CĐ-TTg, permitted and encouraged provinces to put in place measures that prevented citizens from leaving their homes and to flexibly 'invent' and implement measures. The Government also required that local authorities' measures be 'one level higher' and 'one step earlier' than the measures of the Government (Tuổi Trẻ 2021e).
Unclear Chain of Command
Pursuant to Art. 46 of the LPCCD, the national steering committee on prevention and control of diseases is chaired by the Minister of Health, a Deputy Prime Minister, or the Prime Minister. Meanwhile, local steering committees on the prevention and control of diseases are chaired by the chairmen or chairwomen of the People's Committees of the same level, who are also the deputy secretaries of the Party Committees at the same level. It can be observed that there are multiple chains of command over one position: the chairmen or chairwomen of the People's Committees. Practically speaking, as a matter of law and politics, the voice of the chair of the National Steering Committee on Prevention and Control of Diseases [except when the Prime Minister chairs the National Steering Committee] is not very influential. The reasons for this are that (i) the LPCCD does not make clear the legal value of the commands of the chair of the National Steering Committee (consequently, the extent to which the chair's voice is influential derives from the power of the office that the chair holds); (ii) except for the Prime Minister, the other positions, e.g. the Minister of Health or a Deputy Prime Minister, are not empowered by law to command the chairmen or chairwomen of the Provincial People's Committees; (iii) as a matter of politics, according to the principle of 'Tập trung dân chủ' (Democratic centralism), the key pillar of Vietnamese politics, 'Party members and bodies have to obey the Party Resolutions. The minority has to obey the majority, bodies at a lower level have to obey ones at the higher level, individuals have to obey the organization…' (see, for example, Article 9 of the VCP's Charter); accordingly, the chairmen or chairwomen of the People's Committees, as the deputy secretaries of the Provincial Party Committees, must strictly comply with decisions reached by the Provincial Party Committees and the Provincial Standing Committees of the Party Committees after the collective deliberation and voting procedure internal to such committees (as opposed to the commands of the chair of the national steering committee on prevention and control of diseases); (iv) Provincial Party Committees and Provincial Standing Committees of the Party Committees are chaired by Party secretaries who can be members of the Central Party Committee or even the Politburo; these persons are no less politically influential, if not more influential, than a Deputy Prime Minister, who is a member of the Central Party Committee. Consequently, the National Steering Committee on the Prevention and Control of Diseases lacks the capacity to supervise and ensure the coherence and consistency of measures.
Saving-Face and Oral Orders
'Saving face' has long been a widespread practice in Sinosphere countries (Barbalet 2014). 'Face' is a sociological-cultural concept with nuanced implications, including "socio-dynamic valuation", "respect", and/or "prestige". So as not to make a higher-ranking officer lose face, a lower-ranking officer is required to fully obey the orders of the higher-ranking officer, even when these orders are made only orally, via phone calls, for example. As shown earlier, under the principle of 'Democratic Centralism', collective deliberation and voting internal to the Provincial Party Committees and Provincial Standing Committees of the Party Committees are required for adopting a policy or measure; oral orders do not follow this procedure and, as such, can bypass supervision. Besides, without being recorded in official documents, oral orders might mask and shield the officer making the decision from responsibility.
Unfortunately, the principle of 'Democratic Centralism', in requiring a lower-ranking officer to submit to the higher-ranking authority (the lower-ranking officer can only, and must, report concerns to the supervising body of that higher authority while carrying out the order), demands a degree of obedience that cannot prevent the misuse of powers fuelled by the saving-face practice.
Lacking a Judicial Review or Equivalent Mechanism
Because under the socialist legality, the law is only a product of, and as such, submissive to, the State's mind (CPV 1960;Nghia and Ha 2018;Tâm 2009), judicial review-the power of the courts to examine the actions of the state, specifically the legislative, executive, and administrative bodies, to determine whether such actions are consistent with the constitution-does not exist.
Too Many Documents to Supervise
Empirical research conducted by Le et al. (2021) shows that in the first 6 months of the pandemic (from 13 January 2020 to 24 July 2021), 959 policy documents were issued, with new policy documents issued every single day. A search on Law Library (Thư viện pháp luật), one of the biggest databases of Vietnamese policy documents, shows that there had been around 2051 documents as of 11 October 2021, the date on which Vietnam declared a return to normalcy in light of Resolution 128/NQ-CP (Resolution 128). The promulgation of so many documents also increases the difficulty of supervision, especially when the central government has no clear idea of how to simultaneously leave room for the autonomy of local authorities and control that autonomy.
Recommendations
From all three cases above, it is clear how vulnerable human rights can be during the pandemic. As many scholars argue, it is doubtful that the protection of human rights should be [blanketly] derogated under the justification of the pandemic (Dzehtsiarou 2020; Frowde et al. 2020; McQuigg 2022), and it is doubtful whether the blanket derogation of human rights adds any additional value to the protection of health. From the three cases above, it is also not hard to imagine how abusive and unreasonable measures might harm dignity, physical and mental integrity, and health. The need for human rights, and for upholding human rights during the pandemic or emergencies, should therefore be recognized. From the above cases and the arguments presented, two solutions are suggested.
Improving the Supervision of the Decentralization Process
The Vietnamese Government has recognized the lack of supervision over decentralization and the delegation of power, as well as the fragmentation, divergence, and inconsistency of measures, and has called for strengthening supervision. Accordingly, it passed Resolution 128/NQ-CP (Resolution 128), issuing regulations on "safety, flexibility, and effective control of the COVID-19 epidemic", whose aims are to establish a new normalcy and to unify measures across the nation. Resolution 128 suggests a two-layer mechanism for supervision: first, by emphasising the leadership of the Party, it calls for the engagement of the local Party Committees and Provincial Standing Committees of the Party Committees in supervising measures made by administrative bodies at the same level; second, it requires that if a local authority wants to invent a measure that is more intrusive than the Government's measures, it must get permission from an upper authority. Notwithstanding, Resolution 128 seems to be more a political message reminding local authorities to stop drastic and divergent measures, because such measures are no longer needed, than a technical legal solution. The reasons supporting this claim are that (i) when Resolution 128 was promulgated, a large portion of the Vietnamese population had been vaccinated, and infections and risks were significantly reduced; (ii) Resolution 128 could not suspend divergent measures outright, and such measures were only dropped gradually by local authorities later; (iii) the practical enforcement of Resolution 128 has never been observed.
Besides, there are reasons to believe that the supervision mechanism launched by Resolution 128 is just a bandage solution and if a public health emergency emerges, it shall not work.
First, the supervision mechanism launched by Resolution 128 has not been institutionalized in law. Resolution 128 is only a specific solution at the end of the COVID-19 pandemic and authorities are not bound to follow it in other emergencies.
Second, even if the Government replicates the Resolution in the future in response to other emergencies, the supervision mechanism will fail because the criteria under which an upper authority might permit a lower authority to restrict human rights are not transparent. Moreover, it is quite nonsensical that an upper authority which delegates its power to a lower authority, and which might have the same or similar motivations to restrict human rights disproportionately as the lower authority, is also the body charged with permitting the lower authority to restrict human rights.
To improve the supervision of the decentralization process, first, the supervision mechanism must be institutionalized; second, the supervision mechanism must be transparent, which means that the criteria for permitting measures that restrict human rights must be articulated explicitly and clearly, and the reasons justifying any permission must be accessible to the public.
Improving the Understanding of Human Rights
Awareness and understanding of human rights are pivotally important to ensuring human rights protection. The Preamble of the Universal Declaration of Human Rights (UDHR) reads: '… [the] Universal Declaration of Human Rights as a common standard of achievement for all peoples and all nations, to the end that every individual and every organ of society, keeping this Declaration constantly in mind, shall strive by teaching and education to promote respect for these rights and freedoms…'. Several reasons support improving the understanding of human rights.
First, supervision is always insufficient, or at least slower than the measures it is meant to check. It is simply not realistic to oversee and correct a huge number of problematic measures and breaches, especially at the stage of implementing measures.
Second, however well-structured a mechanism is, who runs that mechanism matters too. This is evidenced by the fact that judicial review has more than once overlooked human rights, to name a few cases: Dred Scott v. Sandford, Buck v. Bell, and Korematsu v. United States. It is unrealistic to expect an authority that despises or has an inadequate understanding of human rights to safeguard human rights.
Third, it is always better if human rights violations never happen in the first place, because no compensation can duly and fully remedy the damage and loss caused by breaches.
Fourth, as shown in the three above cases, some breaches are a sort of banality that can be prevented by means of raising awareness.
Conclusion
By studying three cases concerning public health measures, this paper gives a sense of how controversial, from a human rights perspective, public health measures made and implemented in a decentralized manner can be. It is argued that several social, legal, and political factors, including poor understanding of human rights, the undefined breadth of discretion, and lack of supervision, are the underlying factors for such problematic decentralized measures. Accordingly, this paper proposes two solutions: (i) improving the supervision of the decentralization process, and (ii) improving the understanding of human rights. | 2022-10-24T15:11:19.234Z | 2022-10-22T00:00:00.000 | {
"year": 2022,
"sha1": "f086a1241488fa81f609fef5e8d1f9db17c17713",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s41649-022-00226-1.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "df03d466ea084751968173331829cc299a7fff03",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252139585 | pes2o/s2orc | v3-fos-license | Radiographic and Histological Evaluation in Canine Femur after Implantation of 304 Stainless-steel-based Plate
ABSTRACT Fractures are known to be high-risk traumatic cases in domestic animals. Surgery is performed to reposition and immobilize bone using a plate as a biomaterial component. This study aimed to evaluate the radiographic and histological findings in the canine femur after implantation of a 304 stainless-steel-based plate. A total of six male dogs aged 3-4 months were used in this study. Dogs were acclimatized for a week and then randomly assigned to 2 groups with 3 replications each. The left femoral diaphysis was cracked and fixed by (A) a commercial plate or (B) a 304 stainless-steel plate. Radiographic observations were performed at 24 h and 28 d postoperatively during the recovery period. Histological sections were evaluated descriptively at 28 d postoperatively. Physical examination of the muscle tissue in both groups did not reveal discoloration, swelling, or fluid accumulation around the plate. The radiographs presented slight callus production around the crack and a decrease in the gap between the fracture ends after 28 d. The histological sections showed proliferation of osteoblasts, osteocytes, connective tissue, and bone trabeculae. It can be concluded that there was no significant difference between the 304 stainless-steel plate and the commercial plate. A 304 stainless-steel-based plate is recommended for bone immobilization in canine femoral fractures.
INTRODUCTION
Femoral fractures are common in dogs, with a prevalence of 31.9% in Cairo, Egypt, 1 17.4% in India, 2 50% in Brazil, 3 and 57.7% in Ibadan, Nigeria. 4 A fracture is a break in the continuity of bone and cartilage tissue and is generally caused by trauma. Fractures are characterized by substantial soft tissue damage, separation of the periosteum from the bone, extensive bleeding, lacerations, and muscular damage. 5 Severe bone damage due to trauma, in which many fracture fragments cannot be maintained, can inhibit bone healing. In addition, delayed healing or non-union complications after surgery are also causes of bone damage. In general, simple fractures are easily treated by fixing the fracture fragments using intramedullary pins or external fixators. 6 The application of implant materials as endoprosthetics in animals requires good biocompatibility, strength, and resistance to corrosion, especially by body fluids. Materials commonly used as implants are stainless steel, cobalt-based metal alloys, and titanium. 7 Titanium alloys have good biocompatibility and corrosion resistance but are very expensive. Stainless steel has a lower level of biocompatibility than titanium alloys and cobalt-chromium-molybdenum alloy (CoCrMo); however, it has good mechanical properties and is cheaper than titanium and CoCrMo alloys. The basic requirements of metal-based biomaterials are low corrosiveness and biocompatibility. 8 This is important because biomaterials are implanted in the body and are in direct contact with living cells. Metals used as biomaterials must not release ions that are toxic or carcinogenic to living cells. Corrosion reactions on implant materials can cause an inflammatory reaction in the surrounding tissue, so long-term use would be very dangerous for the body. A bone plate is one of the implant components made to replace bone structure and function and to support fractures. 9 In the context of this study, implant materials can be applied internally or externally. The aim of this study was to evaluate the radiographic and histological findings in the canine femur after implantation of a 304 stainless-steel-based plate.
MATERIALS AND METHODS
This study was approved by the Ethical committee: Animal care and laboratory use, Faculty of Veterinary Medicine, Universitas Gadjah Mada with Certificate No.0098/EC-FKH/Int./2019. Ethical approval in this study is required to prevent stress and animal abuse.
A total of 6 local male dogs aged 3-4 months, weighing 4-5 kg, were reared in individual cages at the Department of Surgery and Radiology, Faculty of Veterinary Medicine, Universitas Gadjah Mada. Dogs were acclimatized for a week, fed commercial feed (Rotto ® , Thailand) with drinking water ad libitum, and administered 25 mg/kg of the anthelmintic pyrantel pamoate. After acclimatization, dogs were assigned into 2 groups with 3 replications each, i.e. (A) commercial plate and (B) 304 stainless-steel plate.
intramuscularly. The dog was positioned in right lateral recumbency, then the hair over the left femur was shaved and the area was cleaned with 70% alcohol and povidone-iodine solution. A sterile drape covered the whole body of the dog except the head. The skin and subcutaneous tissue were incised along the craniolateral aspect of the bone, in a line from the greater trochanter to the patella. The skin and subcutaneous tissue were retracted, and the fascia lata was incised along the cranial margin of the biceps femoris muscle. The biceps femoris muscle was retracted caudally and the vastus lateralis muscle was retracted cranially so that the surface of the femur was exposed. In both groups, the left femoral diaphysis was cracked and fixed with a plate as assigned. Thereafter, the muscle was sutured using Vicryl and the skin using silk. Radiographic observations were performed at 24 h and 28 d postoperatively. A bone biopsy was performed on day 28 and fixed in 10% buffered formalin for histological evaluation using hematoxylin and eosin (HE) staining. 10 Histological analysis was performed using a binocular microscope (Eclipse E200 LED, Nikon, Japan). The following variables were observed: newly formed trabecular bone, connective tissue (including fibrous tissue, blood vessels, and adipose tissue), total tissue volume, and osteogenic cells (including osteoblasts and osteoclasts). All data were expressed as mean ± standard error and analyzed using an independent-samples t-test (p < 0.05).
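The statistical comparison described above (mean ± standard error, independent-samples t-test at p < 0.05) can be illustrated with the short sketch below. The group values are hypothetical placeholders, not data from this study.

```python
# Minimal sketch of the group comparison described above: an independent-samples
# t-test on a histological variable, reported as mean ± standard error (SEM).
import numpy as np
from scipy import stats

# Hypothetical measurements for three replicates per group (placeholder values)
group_a = np.array([12.4, 13.1, 11.8])   # (A) commercial plate
group_b = np.array([12.0, 12.9, 12.5])   # (B) 304 stainless-steel plate

for name, values in (("A (commercial)", group_a), ("B (304 stainless steel)", group_b)):
    print(f"Group {name}: {values.mean():.2f} ± {stats.sem(values):.2f}")

# Two-sided independent-samples t-test; p < 0.05 would be read as a significant difference
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```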
RESULTS
After 24 h, radiographs in both groups showed a radiolucent line with a 1 mm gap between the bone fragments in the left femoral diaphysis and no callus production (Figure 1). Meanwhile, after 28 d, the radiolucent line was observed without a gap and callus production had been initiated. The callus production around the radiopaque fragments was not excessive, indicating a mineralization process (Figure 2). In addition, physical examination of the plates in both groups showed no change in muscle coloration and no excessive fluid production in the post-implantation bone tissue (Figure 3).
This indicates that there was no excessive inflammatory reaction and no rejection effect following implantation of the 304 stainless-steel plates. In the present study, the histologic findings at 28 d showed proliferation of fibrous connective tissue, trabecular bone formation, new vascularization, osteoblasts at the margins of the bone trabeculae, and osteocytes in the center of the bone matrix (Figure 4). The muscles around the fracture area showed no inflammation, myopathy, or atrophy. In addition, the newly formed trabecular bone, connective tissue, and osteogenic cells appeared similar in the two treatment groups (Table 1).
DISCUSSION
Based on radiographic observations at 24 h postoperatively, both groups had not yet produced callus. Callus production depends on the end of the inflammatory phase. The inflammatory phase is a response to fracture and injury that peaks at about 48 h and then subsides over 1-2 weeks. It is initiated immediately after injury to the bone and surrounding soft tissue. 11 In the case of fractures, tissue disorders can occur in cells, blood vessels, bone matrix, muscles, and nerves. Bleeding arises from the periosteum, the endosteum, and the soft tissue around the fractured bone, and can also be caused by injury to large blood vessels. 12 A hematoma develops at the fracture site within several hours on the first day. The hematoma contains platelets and macrophages, which are stimulated to release a series of cytokines that promote the healing stage. 13 The cytokines involved in the healing stage are platelet-derived growth factor (PDGF), transforming growth factor beta (TGF-β), interleukin-1 (IL-1), interleukin-6 (IL-6), and prostaglandin E2 (PG-E2). 14 Platelets in the fibrin clot degranulate, releasing TGF-β and PDGF. TGF-β, in turn, stimulates mesenchymal stem cells (MSCs), osteoblast proliferation, and macrophage chemotaxis. 15 Hematoma formation requires ischemic environmental conditions: hypoxia, low pH, and high concentrations of potassium and lactate. 16 The hematoma may act as a buffer before the growth of fibrous tissue and provide fibrin stability. The inflammatory reaction at the fracture site proceeds through the release of lysosomal enzymes from the fracture border and soft tissue. 17 Clinically, the end of this inflammatory phase is marked by decreased pain and swelling. This phase takes about 1-7 days. A prolonged inflammatory phase extends the time needed for the fracture to produce a bony callus. 18 Radiographically, the fracture ends appear less opaque and their edges rather indistinct during this phase. Healing progress was observed at 28 d postoperatively, when the gap between the bone fragments was no longer visible and was filled with callus. 19 The callus formed is characterized by a radiopaque appearance that is not excessive on radiographs. Subsequently, the callus is mineralized, so the radiograph appears more radiopaque. 20 Bone healing depends on an adequate vascular supply and is supported by osteoblast modulation around the blood vessels, so that bone tissue forms in a more organized manner on a stable, solid surface. 21 The initial stage of bone union is callus formation, followed by vascularization from the host toward the mid-fracture area, and it ends with bone matrix resorption and replacement by new bone. 22 Based on the histological evaluation 28 d postoperatively, both groups showed similar formations, i.e. fibrous tissue proliferation, trabecular bone formation, stimulation of angiogenesis, and osteoblast proliferation at the border of the trabecular bone. 23 Subsequently, fibroblasts develop and blood capillaries infiltrate the blood clot to form granulation tissue called procallus. The granulation tissue develops into dense fibrous tissue and turns into cartilage. This tissue is a temporary callus that holds the fractured bone. The temporary callus is then progressively replaced by spongy bone derived from osteoblast proliferation. 24 Vascularization in the fracture area is an indication of the healing process.
Bone healing driven by vascularization is initiated immediately after reconstruction because the periosteal blood supply is complete after vascular anastomosis. 25 Inflammatory cells aggregate in the fracture area, and monocytes that enter the fracture area transform into macrophages, which play an important role in bone healing. This leads to granulation tissue formation, neovascularization, and migration of osteogenic cells. 26 The histological picture of the tissue around the implanted plate shows the absence of inflammatory cells, myopathy, or muscular atrophy. These findings indicate that the use of a 304 stainless-steel-based bone plate does not have a negative effect on the healing process or the surrounding tissue. 304 stainless steel is an austenitic grade with a face-centered-cubic (fcc) structure, making it superior to ferritic stainless steel in terms of corrosion resistance owing to its higher crystallographic atomic density, yield strength ratio, and very low tensile test values. 27 Type 316L stainless steel is the most commonly used implant material. 316L stainless steel is a low-carbon type with a chemical composition of ≤0.030% carbon, ≤1.0% silicon, ≤2.0% manganese, ≤0.045% phosphorus, ≤0.030% sulfur, 12.0-15.0% nickel, and 16.0-18.0% chromium. 28 Another alloy is the CoCr alloy, consisting of 65% cobalt and 30% chromium with a small amount of carbon, which has a hard, rigid, strong texture and excellent corrosion resistance. 29
CONCLUSIONS
In conclusion, non-commercial 304 stainless-steel-based plates have no impact on inflammatory reactions at the implant site. In addition, the proliferation of osteoblasts, osteocytes, and bone trabeculae demonstrated primary ossification in the canine femur during the healing period. | 2022-09-09T17:08:30.554Z | 2022-09-05T00:00:00.000 | {
"year": 2022,
"sha1": "51b76f3cc42a6eaacd527d9f89edaeefc188661b",
"oa_license": "CCBY",
"oa_url": "https://phcogj.com/sites/default/files/PharmacognJ-14-4-388.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "67567095c39ba7b98bf6883aa76cc055d95c935d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
233015986 | pes2o/s2orc | v3-fos-license | Progress of Clinical Evaluation for Vascular Aging in Humans
Abstract Human society is experiencing a serious aging process. Age-related arteriosclerotic cardiovascular diseases (ASCVD) are the most common cause of death around the world and place a huge burden on society as a whole. Vascular aging-related pathological alterations of the vasculature play an important role in the pathogenesis of ASCVD and in the morbidity and mortality of older adults. In this review, we describe the progress of clinical evaluation of vascular aging in humans, including functional evaluation, structural assessment, and cellular molecular markers. The significance of detecting vascular aging is highlighted, and we call for close attention to such evaluation for a better quality of life in the elderly population.
INTRODUCTION
The famous 17th-century physician Thomas Sydenham, recognized as the "English Hippocrates" and the author of Observationes Medicae, observed that "a man is as old as his arteries". Indeed, the world is stepping into an aging society. Aging is a major risk factor for the occurrence of arteriosclerotic cardiovascular diseases (ASCVD), which are the leading causes of long-term disability and mortality among the elderly around the world. [1] Importantly, advanced age is the single most essential risk factor, dwarfing the effect of conventional risk factors (e.g., hypertension, hyperglycemia, hypercholesterolemia), [1] and age-related ASCVD account for nearly half of all deaths worldwide (World Health Organization); consequently, addressing the clinical evaluation of age-related vascular diseases is considerably meaningful. Vascular aging causes multifaceted structural and functional impairment of the vessels, which has a deleterious impact on cellular metabolism and function and thus leads to the pathogenesis of ASCVD. [2] Therefore, for better amelioration of unsuccessful vascular aging and prevention of age-related vascular pathologies, it is critical to identify vascular aging and dysfunction from multiple perspectives so as to prevent the occurrence and development of vascular diseases associated with old age (Table 1).
FUNCTIONAL EVALUATION OF VASCULAR AGING
Vascular endothelial dysfunction is the initial process of vascular aging-related ASCVD. Detection of vascular function is of great significance for identifying vascular risk and preventing the pathogenesis of vascular-related ASCVD. In recent years, with the continuous development of vascular function research and the updating of technology, noninvasive functional indicators reflecting vascular aging have gradually been used in clinical and scientific research. At present, a large number of studies have confirmed that early detection of flow-mediated dilation (FMD), [3] brachial ankle pulse wave velocity (baPWV), [4] and ankle brachial index (ABI) [5] are important strategies to slow down vascular aging. In addition, more and more new technologies have been applied to the detection of vascular function. [6]

FMD and baPWV

FMD describes the vasodilation caused by an increase in luminal blood flow and shear stress. FMD is decreased in patients with elevated blood pressure, and endothelial function assessed by FMD was impaired regardless of the blood pressure level of hypertensive patients after antihypertensive drug treatment. [3] At the same time, FMD is also an important method for assessing target organ injury. [7] baPWV is another main index for detecting the degree of arteriosclerosis, and its change is an overall reflection of abnormal arterial structure and function. Results showed that baPWV was an independent predictor of hypertension, and the risk ratio for each 1 m/s increase in baPWV was 1.10. [8] Based on the traditional cardiovascular risk factors, measuring baPWV can improve the effectiveness of predicting the risk of developing ASCVD. [9]
Table 1. Methods for the clinical evaluation of vascular aging (method and description).

Functional evaluation
- FMD: carried out noninvasively with ultrasonography on the brachial artery, FMD is a commonly used method for the evaluation of endothelial dysfunction. It provides a non-invasive, cheap, and repeatable measure of endothelial function and has been widely used in pathophysiological studies and the management of aging vessels.
- baPWV: baPWV has been reported as an index of arterial stiffness and serves as an indicator of cardiovascular risk and of the severity of vascular aging-related damage.
- Endo-PAT: a safe, accurate, and non-invasive diagnostic device used for evaluating both small and large arteries to detect the status of vascular function and the earliest stages of cardiovascular diseases.
- Infrared thermal imaging technology: a method to improve the visibility of participants in a dark environment by detecting their infrared radiation, in order to reflect the microcirculation and reveal vascular function.

Structural assessment
- MRI: a non-invasive imaging method which provides high spatial resolution for the morphological and compositional evaluation of the vessel wall and is a promising modality for the assessment of vascular aging.
- IMT: carotid intima-media thickness is a surrogate measure of aging associated with cardiovascular risk factors and with cardiovascular outcomes, used for the detection of early stages of atherosclerosis and the evaluation of arterial wall structure.
- CTA: a vascular imaging technique that can be carried out rapidly and safely for the evaluation of aging-related artery disease as well as other vascular applications.
- OCTA: OCTA has emerged as a fast, safe, and noninvasive procedure that provides depth-resolved images of blood flow in the retina and assesses the retinal microvasculature, which is essential for the early detection of age-related chronic diseases, especially diabetes and hypertension.

Cellular molecular markers
- EPCs: bone marrow-derived endothelial precursor cells that participate in endothelial repair and the regulation of endothelial function; they serve as a surrogate biologic measure of vascular function and human age.
- EMPs: microparticles detached from the cell membrane of endothelial cells; an increasing level of EMPs is associated with the prognosis of age-related diseases.
- Sirtuins: a class of NAD+-dependent deacetylases implicated in numerous biological pathways; they are considered an index of aging and a promising target for treating human diseases.
- Telomere and telomerase: telomeres and telomerase are important in maintaining chromosomal stability and integrity; the length of the shortest telomere is a key biomarker of the onset of aging.
- Mitochondria and superoxide flashes: superoxide flashes are brief bursts of superoxide production within the matrix of single mitochondria and exhibit all-or-none properties; they have emerged as a relatively early and pivotal event in cell metabolism, differentiation, the stress response, and aging.
- Nucleolus: maintenance of nucleolar homeostasis is demonstrated to contribute to counteracting cell aging because of the stability of DNA synthesis.
- cfDNA: all non-encapsulated DNA in the bloodstream; according to recent studies it can be recognized as a new biomarker of aging.
- Humanin: an innovative 24-amino acid mitochondrial-derived peptide that has demonstrated cytoprotective effects; humanin decreases with age in several species, including humans.
- Ageotype: ageotypes may provide a molecular assessment of aging and reflect lifestyle and medical history, allowing better intervention in the vascular aging process.
Abbreviations in Table 1: FMD: flow-mediated dilation; baPWV: brachial ankle pulse wave velocity; MRI: magnetic resonance imaging; IMT: intima-media thickness; CTA: computed tomographic angiography; OCTA: optical coherence tomography angiography; EPCs: endothelial progenitor cells; EMPs: endothelial microparticles; cfDNA: cell-free DNA.

Endo-PAT

Endo-PAT is based on a plethysmographic biosensor system used to measure endothelial vasodilation and contraction after reactive hyperemia at the fingertips, and the result is presented as the reactive hyperemia index (RHI). One study found that the sensitivity and specificity of 1.67 as a cut-off value of RHI for the diagnosis of endothelial dysfunction were 82% and 77%, respectively. [10] Endo-PAT is of great significance for screening high-risk populations and has gradually been adopted as a clinical indicator for the functional evaluation of vascular aging. [11,12]

Infrared thermal imaging technology

Infrared thermography, based on the principle of remote temperature measurement, records the infrared radiation wavelength emitted by human skin (which is affected by cutaneous vasodilation and vasoconstriction) to analyze multiple aspects of body-surface temperature and displays them in the form of a thermal map, so as to reflect the microcirculation, the blood circulation of the lesion site, and so on. [13] Studies have found that infrared thermal imaging technology has unique value in screening large populations and in detecting the vascular function of chronic diseases such as hypertension and diabetes and their complications. [14][15][16]
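To make the Endo-PAT cut-off above concrete, the sketch below shows how the sensitivity and specificity of a diagnostic threshold such as RHI = 1.67 are computed from labelled measurements. The data and the convention that a low RHI counts as a positive test are illustrative assumptions, not values from the cited study.

```python
# Illustrative computation of sensitivity and specificity for an RHI cut-off.
def cutoff_performance(rhi_values, has_dysfunction, cutoff=1.67):
    tp = fn = tn = fp = 0
    for rhi, diseased in zip(rhi_values, has_dysfunction):
        test_positive = rhi < cutoff          # low RHI is read as endothelial dysfunction
        if diseased and test_positive:
            tp += 1
        elif diseased:
            fn += 1
        elif test_positive:
            fp += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Hypothetical example: five subjects with confirmed dysfunction, five without
rhi = [1.3, 1.5, 1.7, 1.4, 1.6, 1.9, 2.1, 1.6, 2.0, 1.8]
dysfunction = [True] * 5 + [False] * 5
sens, spec = cutoff_performance(rhi, dysfunction)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```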
STRUCTURAL ASSESSMENT OF VASCULAR AGING
The integrity of the vascular structure is an important guarantee of the normal function of blood vessels and is also a key factor in maintaining vascular health and delaying vascular aging. At present, indicators for evaluating vascular structure are gradually being applied in clinical practice, including magnetic resonance imaging (MRI), carotid intima-media thickness (IMT), and computed tomographic angiography (CTA).
MRI
In the case of endothelial dysfunction and structural dysfunction, vascular hyperemia becomes blunted and can be measured noninvasively by a variety of quantitative MRI methods, including a blood oxygenation-level-dependent (BOLD) signal that reflects the combined effect of blood flow and capillary bed oxygen content, arterial spin labeling (ASL) for quantification of regional perfusion, and phase contrast (PC) to quantify arterial flow waveforms, macrovascular blood flow velocity, and flow rate. [17] During the development of vascular aging, BOLD MRI changes can be observed even in the absence of obvious ASCVD. [18,19] Some studies have also found an association between ASL-based measurements and the occurrence and severity of peripheral vascular disease. [20,21] In addition, based on the baseline characteristics of PC and the arterial blood flow waveform, endothelial structure and function, vascular reactivity, and blood flow reserve can be understood in depth. [22] MRI can also be used to quantify the cross-sectional area of blood vessels and to evaluate the structure of the vascular wall or hemodynamics. [23]

IMT

Carotid IMT refers to the distance between the luminal surface of the carotid intima and the interface of the adventitia. IMT increases linearly with age and is considered an independent risk factor for cardiovascular events. [24,25] At present, the international reference values recommended according to different ages are as follows: 40-49 years, <0.7 mm; 50-59 years, <0.8 mm; and 60 years or older, <0.9 mm. [26] Early identification and intervention of IMT play an important role in delaying vascular aging and preventing the subclinical development of atherosclerotic vascular disease. [27]

CTA

As a common clinical technique, CTA plays an essential role in the diagnosis of vascular structural variations and diseases. Recent studies have reported several novel noninvasive technologies, such as CT-derived fractional flow reserve (FFR), for estimating the blood flow distribution and predicting the functional outcome after coronary stenting. [28,29] Other research has shown that CTA may identify coronary arterial abnormalities and add information on coronary artery lesions in patients with Takayasu arteritis (TA), confirming CTA as a useful technique for identifying pathological changes in the vessels. [30]

Optical coherence tomography angiography (OCTA)

Retinopathy caused by age-related diseases, especially diabetes mellitus and hypertension, is a serious neurovascular complication and a primary cause of blindness in the aging population. [31] OCTA is a promising and emerging technique for imaging the retinal vasculature without dye injection, and it is emerging as a tool for the comprehensive management of retinopathy related to vascular aging. [32] OCTA visualizes the ocular blood flow in the retinal network and is essential for locating and measuring the sizes of various pathological alterations. Studies have demonstrated that OCTA performs favorably in detecting angiographic features, even before disease onset, and it may be a useful method for vascular aging-related diseases. [33]
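The age-specific IMT reference values quoted above amount to a simple threshold rule, sketched below for a single measurement. The function name and the handling of ages under 40 (for which no cut-off is cited) are illustrative choices, not part of the source.

```python
# Applies the age-specific carotid IMT reference values cited in the text:
# 40-49 years: <0.7 mm; 50-59 years: <0.8 mm; 60 years or older: <0.9 mm.
from typing import Optional

def imt_exceeds_reference(age_years: int, imt_mm: float) -> Optional[bool]:
    """Return True if the IMT is at or above the age-specific reference value,
    False if it is below it, or None when no reference is cited for the age."""
    if 40 <= age_years <= 49:
        limit = 0.7
    elif 50 <= age_years <= 59:
        limit = 0.8
    elif age_years >= 60:
        limit = 0.9
    else:
        return None  # the review gives no cut-off below 40 years
    return imt_mm >= limit

# Example: a 55-year-old with an IMT of 0.85 mm exceeds the 0.8 mm reference
print(imt_exceeds_reference(55, 0.85))  # True
```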
CELLULAR MOLECULAR MARKERS OF VASCULAR AGING
At present, there is a lack of a universal marker for the detection of vascular aging. As the molecules expressed by senescent cells will change with the different cell types, senescence stimulation, and stimulation time, the phenotype and molecular markers of cell senescence still need to be further studied.
Endothelial progenitor cells (EPCs) and endothelial microparticles (EMPs)
EPCs are bone marrow-derived endothelial precursor cells that are present in peripheral blood. They can be directed to the site of vascular injury and differentiate into mature endothelial cells, which is recognized as a hallmark of the capacity for vascular repair. Studies have found that a deficiency of circulating EPCs contributes to reduced arterial elasticity in persons of advancing age, [34] and impaired EPC activity is associated with impaired vascular function. [35,36] These results demonstrate that the decrease in the number and activity of circulating EPCs may serve as a surrogate biologic measure of vascular function and human age.
EMPs are microparticles that are detached from the cell membrane of endothelial cells during activation, metabolism, apoptosis, and other cellular processes. Elevations of EMPs in the circulation, mostly defined as CD31+/CD42+ microparticles, as well as alterations in their phenotype, have been found in various pathological conditions related to endothelial dysfunction. [37] In addition, EMPs increase along with aging and vascular damage and have emerged as a sensitive marker of endothelial perturbation in response to adverse stimuli. [38] Therefore, an increasing level of EMPs is associated with the prognosis of ASCVD, suggesting vascular aging.
Sirtuins
Sirtuins are a class of NAD+-dependent deacetylases.
Anti-aging properties of sirtuins were first observed in 1999, when their overexpression was demonstrated to increase yeast lifespan by 70%. [39] Growing evidence has shown that sirtuins are essential in vascular aging, acting through the repair of DNA damage, the lowering of oxidative stress, and the alleviation of inflammation. [40] For instance, SIRT1 is one of the most extensively studied proteins because of its capability to delay vascular aging and prevent age-related diseases by catalyzing protein deacetylation and regulating a series of transcription factors and coactivators, such as forkhead box O (FOXO), nuclear factor-kappa B (NF-kB), and peroxisome proliferator-activated receptor-gamma coactivator-1alpha (PGC-1α). [41] Besides, SIRT3 and SIRT5, two major NAD+-dependent deacetylases located in mitochondria that deacetylate mitochondrial proteins at their lysine residues, are confirmed to be beneficial for improving vascular function by maintaining the balance of oxidative stress and augmenting the capacity for angiogenesis. [42,43] In addition, SIRT2, SIRT4, SIRT6, and SIRT7 also play an important role in regulating endothelial cell senescence. [39] Taken together, sirtuins have multifaceted roles in vascular function, are crucial for the maintenance of vascular homeostasis, and are acknowledged as novel biomarkers of vascular aging.
Telomere and telomerase
It is well established that the length of the shortest telomere is a key biomarker of the onset of aging. An increasing number of studies demonstrate that telomeres shorten during human aging, both in cell culture and in normal tissues in vivo. [44][45][46][47] In humans, meta-analyses have supported a strong relationship between short telomeres and the mortality risk of ASCVD, particularly at younger ages. [48] Telomerase is a reverse transcriptase that adds new DNA onto the telomeres. Recent evidence indicates that vascular aging can be reverted by telomerase activation, and normal physiological aging can be delayed by systemic viral transduction of telomerase. [49][50][51] Further insight into the underlying mechanisms of telomerase is, therefore, fundamental for both the prevention of and the development of treatments for age-related ASCVD.
Mitochondria and superoxide flashes
Aging vascular endothelial cells exhibit a variety of changes in mitochondrial function, dynamics, and morphology, including decreased membrane potential, increased proton leakage, decreased fusion and fission rates, increased species of tricarboxylic acid (TCA) cycle metabolites, dysfunction of the electron transport chain (ETC), overproduction of reactive oxygen species (ROS), and changes in AMP:ATP and ADP:ATP ratios. [52] Detection of mitochondrial function and morphology is another novel method for the evaluation of vascular aging. Interestingly, superoxide flashes are a novel mechanism of quantal ROS production by individual mitochondria; they are triggered by transient openings of the mitochondrial permeability transition pore (mPTP) and are fueled by superoxide production dependent on the electron transfer complexes. [53] A study suggested that the superoxide flash is a relatively early and pivotal event in cell metabolism, differentiation, the stress response, and aging. [54]

Nucleolus

Scientists have found that about half of cells age through a gradual decline in the stability of the nucleolus, a region of nuclear DNA where key components of the protein-synthesis machinery are produced. [55] Maintenance of nucleolar homeostasis is suggested to contribute to counteracting cell aging. [56] Activation of the NF-κB pathway by nucleolar stress and induction of apoptosis by nucleolar sequestration of NF-κB/RelA may emerge as a mechanism of vascular aging. [57] Further studies are necessary for a better understanding of the relationship between nucleolar homeostasis and the process of vascular aging.
cfDNA
One study discovered that the distribution of nucleosomes on cfDNA (cell-free DNA) strands of healthy young people was regular, while the distribution was scattered in the elderly. Moreover, the cfDNA signal of two transposons began to decrease with age. [58] On this basis, cfDNA can be recognized as a new biomarker of aging, and its alteration can be used to clarify the "real age" of organisms and reflect the health status of the body.
Humanin
Humanin is a new member of the polypeptide family encoded by a short open reading frame in the mitochondrial genome. Studies have shown that humanin decreases with age in several species, including humans; more interestingly, the offspring of centenarians have higher levels of humanin, [59] which also indicates that humanin is expected to become a new marker for the clinical detection of aging in the future.
Ageotype
Recently, scientists identified four different ageotypes (immunity, metabolic, liver dysfunction, and kidney dysfunction ageotypes) by collecting biological samples from 43 subjects, including blood, inflammatory substances, microorganisms, genetic materials, proteins, and byproducts of metabolic processes, and tracking how the samples changed over time. [60] Ageotypes may provide a molecular assessment of aging, reflective of lifestyle and medical history, which may ultimately be helpful in detecting and intervening in the vascular aging process. [60]

PERSPECTIVES

Vascular aging is a common basis of various ASCVD. In order to delay the process of vascular aging and prevent age-related vascular pathologies, addressing the clinical evaluation of age-related vascular diseases is of critical importance. This review describes the progress of clinical evaluation of vascular aging, including functional evaluation, structural assessment, and cellular molecular biomarkers. Although great progress has been achieved in characterizing aging-induced changes in vascular function, research efforts should persist in this direction to uncover the underlying mechanisms and develop novel methods to evaluate vascular aging, in order to promote vascular health in older adults. In addition, the construction of a comprehensive intervention system based on a risk assessment of vascular aging is a hotspot and direction of future research, which will make it possible to better prevent age-related ASCVD and prolong the lifespan and healthspan of the elderly. | 2021-04-05T13:11:15.730Z | 2021-01-05T00:00:00.000 | {
"year": 2021,
"sha1": "0279dac56af548eaed11d09cfe6804eee65edc15",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.2478/jtim-2021-0002",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "021fdf635a073dc7ffddffbf1b816ccc03430f00",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268751208 | pes2o/s2orc | v3-fos-license | SLC25A3 negatively regulates NLRP3 inflammasome activation by restricting the function of NLRP3
Activation of the NACHT, leucine-rich repeat, and pyrin domains-containing protein 3 (collectively known as NLRP3) inflammasome plays a critical role in innate immunity and in infections by pathogenic microorganisms. However, excessive activation of the NLRP3 inflammasome leads to cellular inflammation and tissue damage, so it must naturally be precisely controlled in the host. Here, we discovered that solute carrier family 25 member 3 (SLC25A3), a mitochondrial phosphate carrier protein, plays an important role in negatively regulating NLRP3 inflammasome activation. We found that SLC25A3 could interact with NLRP3, that overexpression of SLC25A3 and knockdown of SLC25A3 could regulate NLRP3 inflammasome activation, and that the interaction of NLRP3 and SLC25A3 is significantly boosted in the mitochondria when the NLRP3 inflammasome is activated. Our detailed investigation demonstrated that the interaction between NLRP3 and SLC25A3 disrupted the NLRP3-NEK7 interaction, promoted ubiquitination of NLRP3, and negatively regulated NLRP3 inflammasome activation. Thus, these findings uncover a new regulatory mechanism of NLRP3 inflammasome activation, which provides a new perspective for the therapy of NLRP3 inflammasome-associated inflammatory diseases.
Solute carrier family 25 member 3 (SLC25A3), the inner mitochondrial membrane phosphate transporter, provides inorganic phosphate to the mitochondrial matrix (30,31). It has been shown that a decrease in SLC25A3 protein leads to diminished mitochondrial ATP synthesis rates (30). Therefore, the function of SLC25A3 is essential for ATP production in mitochondria. Interestingly, the mammalian phosphate carrier protein SLC25A3 can also transport copper both in vitro and in vivo (32). Recently, it has been shown that SLC25A3 deletion induces mitochondrial energy dysfunction that drives remodeling of the cardiac mitochondrial protein acylome (33). However, the role of SLC25A3 in the regulation of NLRP3 inflammasome activation has not been uncovered.
In this study, we elucidate a mechanism underlying the suppression of NLRP3 inflammasome activation. The results revealed that SLC25A3 could interact with NLRP3, that overexpression of SLC25A3 negatively regulated NLRP3 inflammasome activation in THP-1-derived macrophages, and that knockdown of SLC25A3 could accelerate NLRP3 inflammasome activation in THP-1-derived macrophages and bone marrow-derived macrophages (BMDMs). Moreover, the interaction of NLRP3 and SLC25A3 was significantly enhanced in the mitochondria when the NLRP3 inflammasome was activated. The interaction of NLRP3 and SLC25A3 disrupted the NLRP3-NEK7 interaction, promoted ubiquitination of NLRP3, and ultimately negatively regulated NLRP3 inflammasome activation. Therefore, this work uncovered a distinct mechanism by which SLC25A3 negatively regulates NLRP3 inflammasome activation.
SLC25A3 interacts with NLRP3
In order to reveal the regulatory mechanism of NLRP3 inflammasome activation, we initially screened for proteins interacting with NLRP3 by mass spectrometry in HEK293T cells (Fig. 1A). Three targeted proteins (SLC25A3, EEF1A1, and EEF1A2) were identified through mass spectrometry (Table 1). The result indicated that SLC25A3 and EEF1A1 could interact with NLRP3, and the interaction between NLRP3 and SLC25A3 was stronger (Fig. 1B). We found that the targeted protein SLC25A3 might interact with NLRP3. To establish whether the interaction between NLRP3 and SLC25A3 is reliable, we conducted several different experiments. First, coimmunoprecipitation (Co-IP) confirmed that NLRP3 could interact with SLC25A3 in HEK293T cells (Fig. 1, C-E). Next, glutathione-S-transferase pull-down confirmed that NLRP3 or the leucine-rich repeat domain of NLRP3 could interact with SLC25A3 in vitro (Fig. 1, F and G); finally, Co-IP confirmed that the nucleotide-binding domain and leucine-rich repeat domains of NLRP3 could interact with SLC25A3 in HEK293T cells (Fig. 1, H and I). The NLRP3 inflammasome comprises NLRP3, ASC, and caspase-1 (15). To reveal the specificity of the interaction between NLRP3 and SLC25A3, we cotransfected SLC25A3 with NLRP3, ASC, or caspase-1 in HEK293T cells, respectively. The result indicated that SLC25A3 could specifically interact with NLRP3 (Fig. 1, J and K). Overall, we demonstrated that SLC25A3 could interact with NLRP3.
Overexpression of SLC25A3 could negatively regulate NLRP3 inflammasome activation
To reveal whether SLC25A3 could regulate NLRP3 inflammasome activation, we constructed a reconstructed NLRP3 inflammasome model, in which HEK293T cells were cotransfected with four plasmids encoding the NLRP3, ASC, pro-caspase-1, and pro-IL-1β proteins. The reconstructed NLRP3 inflammasome model was constructed successfully, as verified by detecting the secretion of mature IL-1β (Fig. 2A). IL-1β secretion (Fig. 2B) and mature IL-1β or mature Casp-1 in the supernatant (Fig. 2C) were inhibited by SLC25A3 in the reconstructed NLRP3 inflammasome model, and the inhibitory effect was dependent on the plasmid concentration of SLC25A3. To further reveal the role of SLC25A3 in the repression of endogenous NLRP3 inflammasome activation, we constructed THP-1 cell lines stably overexpressing SLC25A3. The THP-1 cell lines stably expressing the control or SLC25A3 were differentiated into macrophages and stimulated with lipopolysaccharide (LPS) plus nigericin, LPS plus ATP, LPS plus MSU, or LPS plus Alum. IL-1β secretion (Fig. 2D), the release of lactate dehydrogenase (LDH) into the supernatant (Fig. 2E), and mature IL-1β and mature Casp-1 in the supernatant (Fig. 2F) were significantly attenuated by SLC25A3. Thus, we found that SLC25A3 suppressed NLRP3 inflammasome activation.
Knockdown of SLC25A3 could promote NLRP3 inflammasome activation
To further reveal whether SLC25A3 could regulate NLRP3 inflammasome activation, we constructed plasmids stably expressing shRNA-control, shRNA-SLC25A3-1, shRNA-SLC25A3-2, or shRNA-SLC25A3-3, and HEK293T cells were transfected with these vectors. RT-PCR indicated that the knockdown effect of shRNA-SLC25A3-2 was the best (Fig. 3A). THP-1 cell lines stably expressing shRNA-control or shRNA-SLC25A3 were differentiated into macrophages and stimulated with LPS plus nigericin, LPS plus ATP, LPS plus MSU, or LPS plus Alum. IL-1β secretion (Fig. 3B), the release of LDH into the supernatant (Fig. 3C), and mature IL-1β and the cleavage of GSDMD in the supernatant (Fig. 3D) were obviously reduced by SLC25A3. To further uncover the function of SLC25A3 in NLRP3 inflammasome activation in primary cells, we also constructed four vectors (shRNA-control, shRNA-mSLC25A3-1, shRNA-mSLC25A3-2, and shRNA-mSLC25A3-3) and validated their effects in L929 cells. The result indicated that the knockdown effect of shRNA-mSLC25A3-2 was the best (Fig. 3E). BMDMs were infected with lentivirus containing the shRNA-control vector or shRNA-mSLC25A3 and subsequently stimulated with LPS plus nigericin, LPS plus ATP, LPS plus MSU, or LPS plus Alum. The results indicated that IL-1β secretion (Fig. 3F) and mature IL-1β and mature Casp-1 in the supernatant (Fig. 3G) were also markedly reduced by mSLC25A3. Overall, we found that knockdown of SLC25A3 could promote NLRP3 inflammasome activation.
SLC25A3 specifically suppressed NLRP3 inflammasome activation
To reveal whether SLC25A3 specifically suppressed NLRP3 inflammasome activation, we explored its function in other inflammasomes (the NLRP1, NLRC4, and AIM2 inflammasomes). SLC25A3 could interact with NLRP3, NLRP1, or NLRC4, but not with AIM2 (Fig. 4A). We also constructed a reconstructed AIM2/NLRC4/NLRP1 inflammasome model, in which HEK293T cells were cotransfected with four plasmids encoding AIM2/NLRC4/NLRP1, ASC, pro-caspase-1, and pro-IL-1β proteins. The reconstructed AIM2/NLRC4/NLRP1 inflammasome model was successfully constructed, as verified by detecting the secretion of mature IL-1β (Fig. S1, A-C). IL-1β secretion was not inhibited by SLC25A3 in the reconstructed AIM2/NLRC4/NLRP1 inflammasome model, and the amount of IL-1β secreted was independent of the plasmid concentration of SLC25A3 (Fig. S1, D-F). THP-1 cell lines stably expressing shRNA-control or shRNA-SLC25A3 were differentiated into macrophages and stimulated with LPS plus ATP, LPS plus poly(dA:dT), LPS plus muramyl dipeptide (MDP), or LPS plus Salm. IL-1β secretion (Fig. 4B) and mature IL-1β and mature Casp-1 in the supernatant (Fig. 4C) were obviously induced by ATP (a stimulator of the NLRP3 inflammasome), but not by poly(dA:dT) (a stimulator of the AIM2 inflammasome), MDP (a stimulator of the NLRP1 inflammasome), or Salm (a stimulator of the NLRC4 inflammasome). These results indicated that knockdown of SLC25A3 could specifically promote NLRP3 inflammasome activation. THP-1 cell lines stably expressing plenti-control or plenti-SLC25A3 were differentiated into macrophages and stimulated with LPS plus ATP, LPS plus poly(dA:dT), LPS plus MDP, or LPS plus Salm. IL-1β secretion (Fig. 4D) and mature IL-1β and mature Casp-1 in the supernatant (Fig. 4E) were significantly enhanced by LPS plus ATP, but not by LPS plus poly(dA:dT), LPS plus MDP, or LPS plus Salm. BMDMs were infected with lentivirus containing the shRNA-control vector or shRNA-mSLC25A3 and subsequently stimulated with LPS plus ATP, LPS plus poly(dA:dT), LPS plus MDP, or LPS plus Salm. The results indicated that IL-1β secretion (Fig. 4F) and mature IL-1β and mature Casp-1 in the supernatant (Fig. 4G) were also markedly enhanced by LPS plus ATP, but not by LPS plus poly(dA:dT), LPS plus MDP, or LPS plus Salm. Thus, knockdown of SLC25A3 could specifically enhance NLRP3 inflammasome activation in BMDMs. However, for the THP-1 cells, although poly(dA:dT) and Salm are effective in activating the AIM2 and NLRC4 inflammasomes, MDP is not well suited to activating the NLRP1 inflammasome and is more widely known as an activator of NOD2. To verify the accuracy of the above results, Vbp (Val-boroPro) was used as a stimulus for the NLRP1 inflammasome. Three siRNAs against SLC25A3 (siRNA-SLC25A3-1, -2, and -3) were evaluated by RT-PCR in THP-1 cells, and the effect of siRNA-SLC25A3-3 was the best (Fig. 4H). THP-1-derived macrophages were transfected with siRNA-control or siRNA-SLC25A3-3 and then stimulated with LPS, or LPS plus Vbp.
The result indicated that the knockdown effect of siRNA-SLC25A3-3 in these cells was effective (Fig. 4I). Measurement of IL-1β secretion showed that Vbp could indeed activate the NLRP1 inflammasome (Fig. 4J). However, knockdown of SLC25A3 could not enhance NLRP1 inflammasome activation stimulated by LPS plus Vbp (Fig. 4J). Overall, we found that SLC25A3 could specifically suppress NLRP3 inflammasome activation.
SLC25A3 negatively regulates NLRP3 inflammasome activation
The results indicated that knockdown of SLC25A3 promoted ASC oligomerization and the cleavage of GSDMD in the cells (Fig. 5A). Furthermore, the speck formation of endogenous ASC induced by LPS plus ATP was enhanced in cells transfected with siRNA-SLC25A3-3 compared to cells transfected with siRNA-control (Fig. 5B). THP-1 cell lines stably expressing plenti-control or plenti-SLC25A3 were differentiated into macrophages and subsequently stimulated with LPS plus ATP. The result demonstrated that overexpression of SLC25A3 suppressed ASC oligomerization and the cleavage of GSDMD in the cells (Fig. 5C). Also, the speck formation of endogenous ASC induced by LPS plus ATP was attenuated in the cells stably expressing plenti-SLC25A3 compared to the cells stably expressing plenti-control (Fig. 5D). Taken together, we found that SLC25A3 suppressed ASC oligomerization mediated by the NLRP3 inflammasome.
and inhibit N-terminal GSDMD oligomerization; it can also block pyroptotic cell death and IL-1β release in human monocytes/macrophages (35). We also showed that NSA could markedly suppress IL-1β secretion induced by LPS plus nigericin and significantly inhibit the interaction of NLRP3 and SLC25A3 induced by LPS plus nigericin in THP-1-derived macrophages. Overall, we found that the interaction between SLC25A3 and NLRP3 in the mitochondria was strengthened by NLRP3 inflammasome inducers.
SLC25A3 inhibits the interaction of NLRP3-NEK7 and promotes ubiquitination of NLRP3

To understand how SLC25A3 negatively regulates NLRP3 inflammasome activation, we explored the deeper mechanism by which SLC25A3 regulates the function of NLRP3. Firstly, we examined the interaction between NLRP3 and NEK7. The results indicated that NLRP3 could interact with NEK7 in HEK293T cells and that the interaction between NLRP3 and NEK7 could be strengthened by the NLRP3 inflammasome inducer ATP (Fig. 7, A and B). We also found that SLC25A3 could not interact with NEK7 in HEK293T cells (Fig. 7, C and D). SLC25A3 or NEK7 could competitively bind to NLRP3 and suppress the interaction between NLRP3 and NEK7 or the interaction between NLRP3 and SLC25A3, respectively (Fig. 7, E and F). We also investigated whether the interaction between SLC25A3 and NLRP3 affected the posttranslational modification of NLRP3. NLRP3 could be ubiquitinated in HEK293T cells (Fig. 7G), and overexpression of SLC25A3 could promote ubiquitination of NLRP3 in HEK293T cells (Fig. 7H). To study the stability of the NLRP3 protein, we performed a protein decay assay with cycloheximide, which blocks cellular protein synthesis. The results showed that overexpression of SLC25A3 had no effect on the stability of the NLRP3 protein (Fig. S4). Knockdown of SLC25A3 could suppress ubiquitination of endogenous NLRP3 in THP-1-differentiated macrophages (Fig. 7I). Ubiquitination of NLRP3 generally suppresses NLRP3 inflammasome activation (21)(22)(23)(24)(25)(26); thus, by promoting ubiquitination of NLRP3, SLC25A3 may suppress the activation of the NLRP3 inflammasome. These results indicated that SLC25A3 might suppress the activation of the NLRP3 inflammasome by disrupting the NLRP3-NEK7 interaction and promoting ubiquitination of NLRP3.
Discussion
The NLRP3 inflammasome is stimulated by pathogen-associated molecular patterns and damage-associated molecular patterns to induce inflammatory responses (36,37). Accurate and tight regulation of NLRP3 inflammasome activation is essential for host cells, so it is critical to explore the mechanisms underlying the repression of NLRP3 inflammasome activation. We initially showed that SLC25A3 could interact with NLRP3. Subsequently, overexpression and knockdown of SLC25A3 could regulate NLRP3 inflammasome activation in macrophages, and the interaction of NLRP3 and SLC25A3 was significantly boosted in the mitochondria when the NLRP3 inflammasome was activated. Detailed investigation demonstrated that the interaction between NLRP3 and SLC25A3 disrupted the NLRP3-NEK7 interaction, promoted ubiquitination of NLRP3, and finally negatively regulated NLRP3 inflammasome activation.
SLC25A3 is an inner mitochondrial membrane phosphate transporter, and our data also indicated that SLC25A3 was located in the mitochondria. SLC25A3 virtually could not interact with NLRP3 in the absence of NLRP3 inflammasome stimulators in THP-1-derived macrophages. Perhaps the wide distribution of endogenous NLRP3 in the cytoplasm could explain this result. Certainly, we found that SLC25A3 could interact with NLRP3 in HEK293T cells transfected with SLC25A3 and NLRP3 plasmids, which indicated that SLC25A3 can structurally interact with NLRP3. Interestingly, our data indicated that the interaction of NLRP3 and SLC25A3 was significantly enhanced in the mitochondria of THP-1-derived macrophages stimulated with an NLRP3 inflammasome stimulator. NLRP3 can be localized to the mitochondria when the NLRP3 inflammasome is activated (38,39), and our data also indicated that NLRP3 could be localized to the mitochondria in THP-1-derived macrophages stimulated by ATP. However, NLRP3 should not be able to interact with SLC25A3 in that compartment, because SLC25A3 is localized in the mitochondrial inner membrane; this alone cannot explain the interaction of SLC25A3 and NLRP3 in THP-1-derived macrophages stimulated by ATP. Recently, it has been reported that mitochondria are damaged early in GSDMD-mediated pyroptosis, and the N-terminal pore-forming GSDMD fragment (GSDMD-NT) causes mitochondrial damage by permeating the mitochondrial inner and outer membranes to accelerate and enhance pyroptosis (40,41). These observations may explain why the interaction of NLRP3 and SLC25A3 is obviously enhanced in THP-1-derived macrophages stimulated by ATP: damaged mitochondria may provide space for the interaction of NLRP3 and SLC25A3, and exposed SLC25A3 may suppress NLRP3 inflammasome activation by binding unassembled NLRP3. Disulfiram (an inhibitor of GSDMD that inhibits GSDMD pore formation in liposomes) could significantly suppress IL-1β secretion and the interaction of SLC25A3 and NLRP3 in THP-1-derived macrophages. Acetylcysteine (an inhibitor of reactive oxygen species) treatment had no obvious inhibitory effect on the SLC25A3-NLRP3 interaction compared to Disulfiram. However, Disulfiram can also block inflammatory TLR4 signaling by targeting MD-2, and its inhibition of TLR4 signaling is independent of GSDMD and caspase-1 (42). To further highlight the role of GSDMD inhibitors, we also used NSA rather than Disulfiram, as NSA does not inhibit other innate immune pathways such as toll-like receptor signaling and GSDME-mediated cell death (35). The result also indicated that NSA could significantly suppress IL-1β secretion and the interaction of SLC25A3 and NLRP3 in THP-1-derived macrophages. These results support the above hypothesis that NLRP3 could interact with SLC25A3 in the mitochondria in a GSDMD-dependent manner. Of course, a large number of further experiments will be needed to support our hypothesis.
Cells
L929, HEK293T, and THP-1 cells were purchased from the China Center of Type Culture Collection (CCTCC). L929 and HEK293T cells were cultured in Dulbecco's modified Eagle's medium (Gibco) supplemented with 10% fetal bovine serum (FBS), 100 U/ml penicillin, and 100 μg/ml streptomycin sulfate. THP-1 cells were cultured in RPMI 1640 (Gibco) supplemented with 10% FBS, 100 U/ml penicillin, and 100 μg/ml streptomycin sulfate. THP-1 cells were induced with phorbol-12-myristate-13-acetate (40 ng/ml) for 12 to 16 h to form THP-1-derived macrophages. BMDMs were isolated from the bone marrow of 6- to 8-week-old female mice; these experiments were performed as described previously (24). Briefly, bone marrow was flushed with RPMI 1640, and the collected cells were resuspended and passed through a 200-pore mesh. The cells were pretreated with Red Blood Cell Lysis Buffer for 5 min, and then the cells were cultured in Dulbecco's modified Eagle's medium.
Western blot analysis
First, cells were lysed in lysis buffer for Western blot analysis. Cell lysates were separated by 7.5 to 12.5% SDS-PAGE and then transferred onto a nitrocellulose membrane. The membranes were blocked in PBS with 0.1% Tween 20 containing 5% nonfat dried milk for 45 min at room temperature (RT) and then incubated with primary antibodies at 4 °C overnight. Next, the membranes were incubated with secondary antibodies for 45 min at RT. Finally, the membranes were detected with Western enhanced chemiluminescence substrate (Bio-Rad).
Figure 1.
Figure1.SLC25A3 interacts with NLRP3.A, the schematic of mass spectrometry.B, HA-NLRP3 was cotransfected with Vector-Flag, Flag-SLC25A3, Flag-EEF1A1, or Flag-EEF1A2 into the HEK293T cells.The cell lysates were immunoprecipitated with anti-Flag antibody, and then immunoblotted with indicated antibodies.C, HEK293T cells were transfected with Flag-SLC25A3 and Vector-HA, Flag-SLC25A3 and HA-NLRP3, or Vector-Flag and HA-NLRP3, respectively.The cell lysates were immunoprecipitated with anti-Flag or anti-HA antibodies and then immunoblotted with indicated antibodies.D, HEK293T cells were transfected with Flag-NLRP3 and Vector-HA, or HA-SLC25A3 and Flag-NLRP3 as indicated in the figure, respectively.The cell lysates were immunoprecipitated with anti-HA antibodies and then immunoblotted with indicated antibodies.E, HEK293T cells were transfected with HA-SLC25A3 and Vector-Flag, or HA-SLC25A3 and Flag-NLRP3 as indicated in the figure, respectively.The cell lysates were immunoprecipitated with anti-Flag antibodies and then immunoblotted with indicated antibodies.F, HEK293T were transfected with HA-SLC25A3.The cell lysates were mixed with GST or GST-LRR, then immunoprecipitated with anti-GST antibodies and finally immunoblotted with indicated antibodies.G, HEK293T cells were transfected with Flag-NLRP3.The cell lysates were mixed with GST or GST-LRR, then immunoprecipitated with anti-GST antibodies and finally immunoblotted with indicated antibodies.H, HEK293T cells were transfected with Flag-SLC25A3 and Vector-HA, HA-NLRP3, HA-Pyrin, HA-NBD, or HA-LRR as indicated in the figure.The cell lysates were immunoprecipitated with anti-HA antibodies and then immunoblotted with indicated antibodies.I, HEK293T cells were transfected with HA-SLC25A3 and Vector-Flag, Flag-NLRP3, Flag-Pyrin, Flag-NBD, or Flag-LRR as indicated in the figure.The cell lysates were immunoprecipitated with anti-Flag antibodies and then immunoblotted with indicated antibodies.J, HEK293T cells were transfected with Flag-SLC25A3 and Vector-HA, HA-NLRP3, HA-ASC, or HA-caspase-1 as indicated in the figure.The cell lysates were immunoprecipitated with anti-HA antibodies, and then immunoblotted with indicated antibodies.K, HEK293T cells were transfected with HA-SLC25A3 and Vector-Flag, Flag-NLRP3, Flag-ASC, or Flag-caspase-1 as indicated in the figure.The cell lysates were immunoprecipitated with anti-Flag antibodies and then immunoblotted with indicated antibodies.GST, glutathione-S-transferase; HA, hemeagglutinin; LRR, leucine-rich repeat; NBD, Nucleotide-binding domain; NLRP3, NACHT, leucine-rich repeat, and pyrin domain-containing protein 3; SLC25A3, solute carrier family 25 member 3.
Figure 7.
Figure 7. SLC25A3 inhibits the interaction of NLRP3-NEK7 and promotes ubiquitination of NLRP3.A, HEK293T cells were transfected with Flag-NLRP3 and Flag-NEK7.The cell lysates were immunoprecipitated with IgG (Rabbit) or anti-NLRP3 antibodies and then immunoblotted with indicated antibodies.B, THP-1 derived macrophages were primed with LPS (100 ng/ml) for 6 h, then stimulated by ATP (5 mM) at different time points (0 min or 30 min).The cell lysates were immunoprecipitated with IgG or anti-NLRP3 antibody and then immunoblotted with indicated antibodies.C, HEK293T cells were transfected with Vector-Flag and HA-SLC25A3, or Flag-NEK7 and HA-SLC25A3.The cell lysates were immunoprecipitated with anti-Flag antibody and then immunoblotted with indicated antibodies.D, HEK293T cells were transfected with Flag-NEK7 and Vector-HA, or HA-SLC25A3 and Flag-NEK7.The cell lysates were immunoprecipitated with anti-HA antibodies and then immunoblotted with indicated antibodies.E, Flag-NLRP3 and Flag-NEK7 were co-transfected with Vector-HA or HA-SLC25A3 into the HEK293T cells as indicated in the figure.The cell lysates were immunoprecipitated with IgG (Rabbit) or anti-NLRP3 antibodies and then immunoblotted with indicated antibodies.F, Flag-NLRP3 and HA-SLC25A3 were cotransfected with Vector-Flag or Flag-NEK7 into the HEK293T cells as indicated in the figure.The cell lysates were immunoprecipitated with IgG (Rabbit) or anti-NLRP3 antibodies and then immunoblotted with indicated antibodies.G, HEK293T cells were transfected with Flag-NLRP3 and HA-Ub.The cell lysates were immunoprecipitated with anti-Flag antibodies and finally immunoblotted with indicated antibodies.H, HEK293T cells were transfected with Flag-NLRP3, HA-Ub, or Flag-SLC25A3 as indicated in the figure.The cell lysates were immunoprecipitated with anti-NLRP3 antibodies and finally immunoblotted with indicated antibodies.I, THP-1 derived macrophages were transfected with two siRNAs (siRNA-NC and siRNA-SLC25A3-3) and primed with LPS (100 ng/ml) for 6 h.THP-1 derived macrophages then were stimulated with ATP (5 mM, 30 min).The cell lysates were immunoprecipitated with anti-NLRP3 antibodies and finally immunoblotted with indicated antibodies.LPS, lipopolysaccharide; NEK7, NIMA-related kinase-7; NLRP3, NACHT, leucine-rich repeat, and pyrin domain-containing protein 3; SLC25A3, solute carrier family 25 member 3. | 2024-03-31T06:18:59.870Z | 2024-03-01T00:00:00.000 | {
"year": 2024,
"sha1": "2fc78aebdc3535de582dcfaaf4775a648ae49be9",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/article/S0021925824017307/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8195af17f464c287725ade1ba49a8dc730b3ca5c",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
5947734 | pes2o/s2orc | v3-fos-license | OPTICAL FLOW FOR GLACIER MOTION ESTIMATION
Quantitative measurements of glacier flow over time are an important ingredient for glaciological research, for example to determine the mass balances and the evolution of glaciers. Measuring glacier flow in multi-temporal images involves the estimation of a dense set of corresponding points, which in turn define the flow vectors. Furthermore glaciers exhibit rather difficult radiometry, since their surface usually contains homogeneous areas as well as weak texture and contrast. To date glacier flow is usually observed by manually measuring a sparse set of correspondences, which is labor-intensive and often yields rather irregular point distributions, with the associated problems of interpolating over large areas. In the present work we propose to densely compute motion vectors at every pixel, by using recent robust methods for optic flow computation. Determining the optic flow, i.e. the dense deformation field between two images of a dynamic scene, has been a classic, long-standing research problem in computer vision and image processing. Sophisticated methods exist to optimally balance data fidelity with smoothness of the motion field. Depending on the strength of the local image gradients these methods yield a smooth trade-off between matching and interpolation, thereby avoiding the somewhat arbitrary decision which discrete anchor points to measure, while at the same time mitigating the problem of gross matching errors. We evaluate our method by comparing with manually measured point wise ground truth.
INTRODUCTION
It has become common practice to use photogrammetric tools to acquire and analyze the motion of glaciers. The corresponding points in the images, which define the glacier motion, are still often measured manually. In this work we show how to automate the process by applying optical flow techniques to the problem. Optical flow algorithms have advanced to a state at which they can densely compute motion vectors at every pixel despite the difficult radiometry of glaciers. In this work we compare several methods on a data set with available manual measurements and perform a thorough evaluation using different error metrics. The evaluation shows that in areas where the glacier surface is properly visible, modern optical flow methods are competitive with human observers.
Glacier Motion
The motion observed at the surface of a glacier is due to gravitational deformation of the ice and sliding at the base.The resulting ice flow transports mass from areas of snow accumulation to lower areas with mass loss by melt.The magnitude of the internal ice deformation depends on thickness and surface slope, whereas basal sliding depends on the basal conditions, which are mainly influenced by the seasonally varying water pressure (Paterson, 1994).This results in an overall smooth spatial pattern of the surface flow field.Observed temporal variations in glacier motion range from hours to seasons and decades (Iken, 1977).Thus, knowledge of surface flow fields of glaciers contributes to understanding glacier dynamics and is important data for many glaciological applications.
Direct measurements by geodetic methods (GPS, total station) can only be performed on a limited number of accessible points.Remote sensing has the potential to provide dense spatial coverage over large areas.A wide range of methods (optical/radar, feature tracking/interferometric, airborne/spaceborne platforms) have been applied especially on the polar ice sheets of Greenland and Antarctica (Lucchitta and Ferguson, 1986, Bindschadler and Scambos, 1991, Joughin et al., 1998, e.g.).Investigations of spaceborne systems are restricted to the time of satellite passes and appropriate orbits, which limits application for mountain glaciers with more complex and small scale flow fields.
Ortho-images are a standard product of digital photogrammetric analysis and are suitable for periodically repeated glacier mapping.Areas free of seasonal snow show a variety of features and texture suitable for motion analysis.Early glaciological applications have been implemented for semi-automatic and manual use with analytical stereo plotters (Flotron, 1979).More recently photogrammetric methods became popular to determine fully digital deformation measurements (Kaab, 2002).Analysis of pairs or sequences of ortho-images acquired at different points in time allow one to evaluate the horizontal motion field.The vertical component of the ice motion and the mass balance at the surface can then be separated by subsequent numerical flow modeling (Gudmundsson and Bauder, 1999).
Optical Flow
The problem of image alignment and 2D motion estimation has been studied in computer vision over the past 30 years and has reached an impressive level of reliability and accuracy.Numerous variants are already used in the industry performing tasks such as driver assistance, medical image registration and human motion analysis.In this section we give a short overview of the most common approaches for optical flow estimation, and introduce the basic terminology.
In general there are dense as well as sparse techniques.Sparse optical flow is often preferred in time critical applications and usually performs some kind of feature tracking (Tomasi and Kanade, 1991).In a way sparse techniques have already been applied to glacier flow (Debella-Gilo and Kaab, 2011).Here correspondences are established from correlation coefficients, later mismatches are eliminated by thresholding and manual inspection.
Dense techniques on the other hand compute a motion vector at every pixel.It is usually assumed that the observed motion between adjacent frames is small, although there are techniques which explicitly try to compute large displacement optical flow.A classic model for optical flow estimation was introduced by (Horn and Schunck, 1981).Many contemporary techniques are variants of this method, see the popular benchmark (Baker et al., 2007), thus we use it as basis for this chapter.
Instead of treating each motion independently, a regularization-based framework is established, which enforces data fidelity and spatial smoothness at the same time. In the original paper the data term is expressed on a per pixel basis, using the well known brightness constancy assumption (BCA). The BCA states that the gray value of a moving pixel remains constant throughout the explored frames (Eq. 1):

I(x + u, y + v, t + 1) = I(x, y, t),   (1)

where I denotes the image brightness at a given pixel and timestep, and (u, v)^t denotes the optical flow vector. Taylor series expansion at (u0, v0) yields the optical flow constraint

Ix (u − u0) + Iy (v − v0) + It = 0.   (2)

Here (Ix, Iy)^t = ∇I and It = I(x + u0, y + v0, t + 1) − I(x, y, t) is the temporal derivative. Using a variational formulation the data term is finalized as

ED(u, v) = ∫Ω ρD( Ix (u − u0) + Iy (v − v0) + It ) dx.   (3)

Integration is performed over the image domain Ω. In the original formulation the penalty function ρD is the quadratic penalty ρD(x) = |x|^2. From Eq. 2 we can directly observe that the problem is ill-posed, since only a single constraint per pixel can be extracted. In the literature this is known as the aperture problem. The problem is not limited to this specific formulation but also exists when image regions are to be matched, for instance a patch containing a single edge or a textureless area. Furthermore noise can degrade the quality of a matching process. Therefore a smoothness constraint on the motion field is introduced, resulting in an overall energy

ES(u, v) = ∫Ω ρS(|∇u|) + ρS(|∇v|) dx,   (4)
E(u, v) = ED(u, v) + λ ES(u, v).   (5)

The parameter λ controls the amount of smoothing. The Euler-Lagrange equations of the energy term, derived with variational calculus, yield necessary conditions for a minimum of the energy functional. Since the functional is given in terms of continuous variables, variational methods naturally lead to sub-pixel accuracy. The resulting energy minimization problem is highly non-convex, because the optical flow constraint, Eq. 2, is only valid in a small local neighborhood. Therefore minimization is performed in a hierarchical coarse-to-fine approach. Using image pyramids, solutions from a lower pyramid level initialize the next-finer level.
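As a concrete illustration of Eqs. 1-5, the following is a minimal NumPy sketch of the classical Horn-Schunck solver, assuming quadratic penalties ρD and ρS and a single resolution level; the derivative kernels, the smoothness weight and the iteration count are illustrative choices rather than the settings used in this evaluation.

import numpy as np
from scipy.ndimage import convolve

def horn_schunck(I1, I2, alpha=10.0, n_iter=100):
    """Minimal Horn-Schunck solver with quadratic penalties.
    alpha acts as the smoothness weight (related to lambda in the text,
    up to parameterization); larger alpha gives a smoother flow field."""
    I1 = I1.astype(np.float64)
    I2 = I2.astype(np.float64)
    # Spatial and temporal derivatives via simple finite differences.
    kx = np.array([[-0.25, 0.25], [-0.25, 0.25]])
    ky = np.array([[-0.25, -0.25], [0.25, 0.25]])
    kt = np.full((2, 2), 0.25)
    Ix = convolve(I1, kx) + convolve(I2, kx)
    Iy = convolve(I1, ky) + convolve(I2, ky)
    It = convolve(I2, kt) - convolve(I1, kt)
    # Averaging kernel used in the Jacobi-style update of the smoothness term.
    avg = np.array([[1/12, 1/6, 1/12], [1/6, 0.0, 1/6], [1/12, 1/6, 1/12]])
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        # Closed-form update derived from the Euler-Lagrange equations.
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v

In practice this update would be embedded in the coarse-to-fine pyramid described above so that displacements larger than about one pixel per level can be recovered.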
The original formulation of (Horn and Schunck, 1981) suffers from the fact that quadratic error functions ρD and ρS lead to over-smoothing at motion boundaries. Different error functions have been proposed. The most popular one nowadays is ρ(x) = √(x² + ε²), a differentiable variant of the L1-norm, which is both robust and convex. Larger displacements can be estimated thanks to image warping and postponing any linearisation to the numerical scheme (Brox et al., 2004). Certain versions of variational optic flow can be computed in real-time (Wedel et al., 2008).
In contrast to variational methods, which minimize the energy functional by continuous optimization, combinatorial methods, which are dominant in stereo matching, are less popular in the literature, mainly because of the enlarged two-dimensional search space. Notable exceptions are (Lei and Yang, 2009, Lempitsky et al., 2008). Finally, local methods exist, which do not minimize a global energy function at all, e.g. (Rhemann et al., 2011).
ALGORITHMS
In this section we describe in more detail the methods selected for our evaluation. We choose three different algorithms as representatives of the large number of optical flow methods developed over the years. All three algorithms are parallelizable and efficient in terms of memory and speed, which is important due to the sheer size of aerial images used for glaciology.
Total variation optical flow
The first algorithm in our set is a representative of the popular variational approaches. The energy functional is based on the classical formulation already presented in Eqs. 3, 4 and 5. However instead of approximating the L1-norm in order to make the functional differentiable, the integral equations are addressed directly, using a primal-dual scheme. In (Zach et al., 2007) it was proposed to add additional auxiliary variables v to the energy function, to simplify the optimization:

E(u, v) = ∫Ω { |∇u1| + |∇u2| + (1/(2θ)) ‖u − v‖² + λ |ρ(v)| } dx,   (6)

where the data term is given by the optical flow constraint Eq. 2, ρ(v) := ⟨∇I, v − v0⟩ + I(x + v0, t + 1) − I(x, t), and θ is a small constant restricting v to be a close approximation of u. The convex energy can be minimized by alternating steps updating either u or v. Fixing v, optimization with respect to u is achieved using the primal-dual algorithm proposed in (Chambolle, 2004). For a fixed u the optimization yields a sum of decoupled pixel-wise energies, which can be minimized individually. More recently an improved primal-dual scheme was introduced by (Chambolle and Pock, 2011), superseding the need for auxiliary variables. Moreover a slightly improved optical flow constraint is used which additionally models varying illumination:

ρ(u, w) := ⟨∇I, u − u0⟩ + I(x + u0, t + 1) − I(x, t) + β w.   (7)

The parameter β controls the influence of the illumination term.
The function w is assumed to be smooth and therefore requires additional regularization. The final energy of this form was already proposed in (Shulman and Hervé, 1989). Details of the optimization scheme go beyond the scope of this paper; please refer to (Chambolle and Pock, 2011).
Cost-Volume Filtering
The second algorithm we have selected considers optical flow estimation as a labeling problem. The space of possible solutions for the flow vectors is given as a discrete set, and each pixel is assigned to one element of the set. Because the solution space is no longer continuous, the task reduces to a search problem. A naive solution would be to select the label with the lowest data error for each pixel. Filter based methods, e.g. (Yoon et al., 2006), instead apply a local filter on the energy values for different flow vectors (the cost volume) before assigning the labels. These techniques can also be seen as an approximation to discrete energy based approaches. In contrast to the spatially global smoothing of a conditional random field, smoothness of the flow field is only defined locally. The intuition why similar solutions are achieved is that pixels far apart usually have only little influence on each other. The method we choose for evaluation was proposed recently by (Rhemann et al., 2011). The algorithm mainly differs from previous filter-based techniques by the choice of the filter weights. The guided filter introduced by (He et al., 2010) preserves edges in the input image and has a runtime independent of the filter size. Incorporated into the filter framework the technique achieves high-quality solutions, and is competitive with energy-based methods on standard benchmarks. The method can potentially handle both small scale motion structure and large displacements.
More formally, let the set of labels be denoted by L = {1, ..., L} and let l ∈ L be a label, in our case a displacement (u, v)^t. A cost C(i, l) is assigned to each pixel i and label l by evaluating the data term. For a label l the filtered cost at pixel i, C'(i, l), is the weighted sum over the neighborhood N(i) of i:

C'(i, l) = Σ_{j ∈ N(i)} wi,j(I) C(j, l).   (9)
The final labeling is now given by taking the label li with minimal filtered cost at each pixel i, li = argmin_l C'(i, l). The key to high quality results is to use an edge-preserving filter. The weights of the guided filter depend on a guidance image I, in our case the reference image:

wi,j(I) = (1 / |N|²) Σ_{k: (i,j) ∈ N(k)} ( 1 + (Ii − µk)(Ij − µk) / (σ²k + ε) ).   (10)

Here µk and σ²k are the mean and variance of the region N(k) centered at pixel k in I, and ε is a small regularization constant. The sum is composed of all image regions containing both pixels i and j. The edge preservation can be seen by considering a region with a single edge. If pixels i and j are on the same side of the edge, the term inside the sum becomes approximately 2, and close to 0 otherwise. For flat regions with σ²k ≪ ε the terms are approximately 1 for all pixels in the region, which results in a simple averaging filter. An extension to color images is straightforward. For more details please refer to (He et al., 2010).
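For reference, a small Python sketch of this filtering step is given below. It uses the equivalent local-linear-model formulation of the guided filter (He et al., 2010) implemented with box filters, rather than evaluating the kernel wi,j(I) explicitly; the window radius and ε are illustrative values only.

import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=8, eps=1e-4):
    """Gray-scale guided filter: smooths the cost slice p while preserving
    edges of the guidance image I. Box filters make the runtime independent
    of the window radius."""
    size = 2 * radius + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    corr_Ip = uniform_filter(I * p, size)
    var_I = uniform_filter(I * I, size) - mean_I**2
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    # Averaging the local linear coefficients yields the filtered output.
    return uniform_filter(a, size) * I + uniform_filter(b, size)

def filter_cost_volume(I_ref, cost_volume, radius=8, eps=1e-4):
    """Filter every cost slice and pick the label with minimal filtered cost
    (winner-takes-all), as in the cost-volume filtering approach."""
    filtered = np.stack([guided_filter(I_ref, cost_volume[l], radius, eps)
                         for l in range(cost_volume.shape[0])])
    return filtered.argmin(axis=0)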
As data term we select two different popular cost functions: firstly the truncated absolute difference of the gray values and gradients at the matching points in the reference image I(·, t) and the displaced image I(·, t + 1), and secondly the negative normalized cross correlation (NCC). The truncated absolute difference, Eq. 11, has been shown to be robust against illumination changes:

C(i, l) = min( |I(i, t) − I(i + l, t + 1)|, τ1 ) + min( |∇I(i, t) − ∇I(i + l, t + 1)|, τ2 ).   (11)

The truncation values are denoted by τ1 and τ2 in the equation.
In contrast to this pixel-wise data cost the NCC is defined as a sum over a region N(i) centered at pixel i:

C(i, l) = − ( Σ_{j ∈ N(i)} (I(j, t) − µi,t)(I(j + l, t + 1) − µi+l,t+1) ) / ( |N(i)| σi,t σi+l,t+1 ),   (12)

where µi,t and µi+l,t+1 are the means of images I(·, t) and I(·, t + 1) over the region N(i), and σ²i,t and σ²i+l,t+1 are the respective variances.
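A possible NumPy implementation of this NCC data cost for a single candidate displacement is sketched below; looping over all labels fills the cost volume that is subsequently filtered. The window size and the small constant guarding against division by zero are illustrative.

import numpy as np
from scipy.ndimage import uniform_filter, shift

def ncc_cost(I0, I1, du, dv, win=5, eps=1e-8):
    """Negative NCC between I0(., t) and I1(., t+1) displaced by (du, dv),
    evaluated in a win x win window around every pixel. One call fills a
    single slice of the cost volume."""
    J = shift(I1, (-dv, -du), order=1, mode='nearest')  # I1 sampled at x + l
    mu0 = uniform_filter(I0, win)
    mu1 = uniform_filter(J, win)
    var0 = uniform_filter(I0 * I0, win) - mu0**2
    var1 = uniform_filter(J * J, win) - mu1**2
    cov = uniform_filter(I0 * J, win) - mu0 * mu1
    ncc = cov / np.sqrt(np.maximum(var0 * var1, eps))
    return -ncc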
The major challenge in the approach is the huge label space to be evaluated. In the examples the flow vectors extend over a region of {−75, ..., 75}² pixels, which leads to 22500 labels. That problem is tackled by very fast implementations of the filtering: the weighted filtering can be achieved by a sequence of box-filters with a time complexity linear in the number of pixels. Sub-pixel precision is accomplished by upscaling the second image by an integral factor and evaluating the cost function accordingly with respect to the new, increased label space.
Pyramid Lucas-Kanade Optical Flow
Another widely used approach for optical flow estimation was first introduced by (Lucas and Kanade, 1981). Originally developed for image registration, the method is still commonly used for feature tracking and template matching. An image patch around a pixel i undergoes a deformation governed by a parameter vector p, in order to minimize the squared difference to a template T:

E(p) = Σ_{x ∈ N(i)} ( I(f(x; p)) − T(x) )².   (13)

Here f denotes a warping function parameterized by p. The key idea, compare also Eq. 2, is to perform gradient descent on the sum of squared distance (SSD) energy function, using a Taylor series expansion of the image function at p0:

E(p) ≈ Σ_{x ∈ N(i)} ( I(f(x; p0)) + ∇I (∂f/∂p) (p − p0) − T(x) )².   (14)

Iteratively solving the corresponding normal equations with respect to p delivers a solution for the parameter vector. In the 2D case it is common to restrict the parameter set to allow for pure translations or affine deformations only (Baker and Matthews, 2004). In order to achieve dense flow the scheme is applied at each pixel individually and embedded into a hierarchical coarse-to-fine framework.
The scheme estimates the motion at each pixel independently, therefore the implementation can be easily parallelized.Care has to be taken if the normal matrix is close to singular, for instance in textureless regions.
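The same estimator is readily available in standard libraries. The snippet below uses OpenCV's pyramidal Lucas-Kanade tracker on sparse feature points as an illustration (the evaluation in this paper applies the scheme densely at every pixel); the file names and parameter values are placeholders, not the settings used here.

import cv2
import numpy as np

# Hypothetical ortho-image pair one year apart.
img0 = cv2.imread("ortho_1997.tif", cv2.IMREAD_GRAYSCALE)
img1 = cv2.imread("ortho_1998.tif", cv2.IMREAD_GRAYSCALE)

# Track well-textured points; on a glacier these tend to lie on crevasses and rocks.
pts0 = cv2.goodFeaturesToTrack(img0, maxCorners=5000, qualityLevel=0.01, minDistance=10)
pts1, status, err = cv2.calcOpticalFlowPyrLK(
    img0, img1, pts0, None,
    winSize=(21, 21), maxLevel=5,
    criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

flow = (pts1 - pts0).reshape(-1, 2)[status.ravel() == 1]
print("median displacement (pixels):", np.median(np.linalg.norm(flow, axis=1)))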
RESULTS AND DISCUSSION
We evaluate the different optical flow methods on ortho-photos of the Unteraargletscher located in the Bernese Alps, Switzerland. This large valley glacier has two main tributaries forming the common tongue, with an area of about 23 km² and a length of 13 km.
The grayscale ortho-images are divided into three parts of 26, 20.5 and 12 mega-pixel, consisting of the individual tributaries and the tongue.One pixel corresponds to one meter ground resolution.Image pairs were acquired in a temporal distance of one whole year, at 1970/71, 1982/83 and 1997/98 (Fig. 1).Details about the acquisition and image processing chain can be found in (Bauder, 2001).The maximal motion observed over the one year period is about 40m at the tongue area and 90m in the area of the tributaries.
We study three algorithms: total variation (TV-L1), cost volume filtering (CF) and pyramid Lucas-Kanade (Pyr.-LK).Quantitative evaluation is done by comparing the results with manually measured, sparse correspondences (Bauder, 2001).The dataset consists of 5606 measurements on a 50 pixel grid and is only available for the years 1997/98.The accuracy of the manual measurements is evaluated by measuring several feature points twice and independently, see table 1.For all images the differences are in the range of 1 to 3 pixel (=meters).
Figure 1: Reconstruction of all image pairs.Flow vectors are color coded w.r.t.their magnitude (blue=slow flow, red=fast flow).
Because of the large temporal gap between the acquisitions, changes in image appearance are inevitable, which makes optical flow estimation challenging. Difficulties include varying snow coverage, non-uniform melt out or crevasse patterns, moving and tumbling rocks, shadows and generally different illumination conditions. The biggest problem is posed by the varying snow cover extent.
The data term measuring visual similarity cannot identify corresponding parts correctly as their appearance is too different.
The second major problem is large textureless regions, e.g. due to snow coverage or shadows. Those parts carry only little, if any, information about the motion. Usually optical flow methods deal with textureless segments by propagating information from neighboring areas into those regions. In our case, however, the varying borders of the snow cover and shadow borders are the main source of information in these areas. Since these are completely independent of the motion of the glacier, the quality of the reconstructions is low in these regions (compare Fig. 3).
Error Metrics
Popular error metrics in the optical flow literature (Baker et al., 2007) are the average end point error AEP and the average angular error AAE. The former measures the distance between two flow vectors in 2D. The latter compares the angle between two flow vectors (u, v, 1) in homogeneous 3D space. The errors are averaged over all test points. We also report the normalized root mean squared error NRMS of the end point error, where the normalization is performed w.r.t. the difference between the maximal and minimal flow magnitude in the image. Further we also report quantiles of the AEP, since the metrics are typically dominated by few gross outliers.
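For completeness, the three metrics can be computed as follows; taking the normalization range of the NRMS from the ground-truth magnitudes is one possible reading of the definition above.

import numpy as np

def flow_errors(u_est, v_est, u_gt, v_gt):
    """AEP, AAE (degrees) and NRMS between estimated flow and sparse ground
    truth, evaluated at the measured points; all vectors in pixels."""
    ep = np.sqrt((u_est - u_gt)**2 + (v_est - v_gt)**2)   # end point error
    aep = ep.mean()
    # Angular error between (u, v, 1) vectors in homogeneous space.
    num = u_est * u_gt + v_est * v_gt + 1.0
    den = np.sqrt(u_est**2 + v_est**2 + 1.0) * np.sqrt(u_gt**2 + v_gt**2 + 1.0)
    aae = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0))).mean()
    # RMS of the end point error, normalized by the flow magnitude range.
    mag = np.sqrt(u_gt**2 + v_gt**2)
    nrms = np.sqrt((ep**2).mean()) / (mag.max() - mag.min())
    return aep, aae, nrms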
We compare the results of the optical flow algorithms with the sparse manual measurements. The results are shown in table 2.
The TV-L1 algorithm performs best in almost all cases, except for the image Unteraarl. Here the algorithm cannot handle the varying snow cover at the top of the glacier. A close-up of that part is shown in the first row of Fig. 3, and also on the right of Fig. 4.
Cost filtering (CF) with NCC performs better in that area, although only for parts where the data function delivers reasonable costs.Using the SAD data cost already produces unacceptable results, although the cost function works well for smaller images with relaxed environmental conditions.In all cases Pyramid-LK performs worst and is not further considered in the evaluation.Without any regularization, the algorithm is not competitive in these complex conditions.In Fig. 4 (left) we visualize the difference of the solution of the TV-L1 algorithm and the expert's solution.Areas where the estimates diverge are marked in red, and where they agree in green.The regions of greatest disagreement are found in spots showing limited visual correspondence in the original images.In Fig. 3 we visualize several of those areas.Both algorithms are misled by the ill-posed data term and can not estimate the flow correctly.In the case of the CF algorithm these regions have limited impact on neighboring areas, however the estimates within the regions are completely wrong.This can be explained by the edge preserving property of the filter.Vectors from regions separated by an image edge have little influence on each other.On the other hand, the TV-L1 algorithm generally seems to handle those parts better, and in most cases still produces reasonable motion estimates.Potentially the influence of erroneous areas could be restricted by the use of an anisotropic smoothing kernel.
Illumination Changes
Due to different environmental conditions the assumption of brightness constancy between successive image acquisitions is violated.
To alleviate the negative influence of illumination changes, sensor noise, or shadows on the data term, three different methods were tested in the case of the TV-L1 algorithm. Structure-texture decomposition (STT) (Aujol et al., 2006) has been shown to be a successful pre-processing step for 2D optical flow estimation (Wedel et al., 2008). Images are decomposed into a structural part, corresponding to larger connected regions of the image, and a texture part containing fine scale details. Further it is assumed that using the textural part for the computation of the optical flow is more robust against shadow and shading artefacts. A study comparing various filters is given in (Vaudrey et al., 2009).
Figure 2: Image similarity at corresponding points according to TV-L1 and to manual flow estimates. NCC scores are overlain onto the images using color-code. Areas in red mark higher similarity with TV-L1, green areas higher scores of the manual measurements.
The second method evaluated is an improved optical flow constraint in which the varying illumination is modeled by an additive term (Eq. 7). Additionally the data term is extended to also include gradient images. In table 3 we summarize the experiment.
It is obvious that all methods significantly improve our results.In all other experiments the setup "STT&additive function" is used, which worked best overall.
Qualitative Evaluation
In a further experiment we attempt to evaluate the quality of the estimated flow fields by investigating the similarity of corresponding image regions. We compute the NCC coefficients in an 11×11 and a 5×5 window located at matching positions. Note that NCC is not used in the TV-L1 algorithm, so in that case it constitutes an independent check, whereas the CF method uses NCC, which constitutes a bias of the evaluation. We compute correlation scores at all pixels containing manual estimates. Table 4 shows that on average the score produced by the algorithms is noticeably higher than the score of the expert. Results for the TV-L1 algorithm are consistently between 11% and 28% better than the human expert. The CF algorithm achieves at most 16% improvement, and is 3% worse than the human in one case. We conclude that in terms of visual coherence the automatically produced results are at least as good as the annotated ground truth.
In Fig. 4 we visually compare the matching quality of the TV-L1 algorithm to the human expert.For this purpose we warp patches of size 200×200 using the flow vectors.Ideally the warps should be identical to their counterpart in the reference image.For simple comparison we show the difference image between reference and warped patch.In all cases, warps created by the algorithm appear more similar to the reference.The global deviation is smaller and the error patches appear smoother.
Fig. 2 compares NCC-scores of the TV-L1 flow and the human expert.In regions colored red the score of the algorithm was higher, green regions mark areas with a higher NCC-score for the expert.Clusters of green spots are located in areas affected by changes in environmental conditions, where expert knowledge is needed to overrule the observations.Otherwise red (TV-L1 better) clearly dominates.
CONCLUSIONS AND FUTURE WORK
We have evaluated three different optical flow techniques on the task of estimating the motion of glaciers.Especially results obtained from the TV-L1 based method look promising.Based on an independent image matching metric, and also by visual inspection, the estimates appear to be more accurate than manually measured correspondences.Regions with little visual coherence however can lead to distorted results.A solution could be to simply detect and exclude these areas from the estimation process.Going further, the regularization term could be adapted to the task by adopting a numerical model from glaciology.Finally an extension of the algorithm to the 3D domain could be interesting.
Figure 3 :
Figure 3: Depicted are patches in which the computed flow-fields differ grossly from the manual estimates.Errors occur if the true motion is not visible on the surface (col.1,2).Here the objective to match corresponding image content can distort the flow fields (col.3,4).Flow vectors are color coded with respect to their magnitude (blue=slow flow, red=fast flow).
Table 1 :
Deviations of the control sample set of the manual measurements (unit: pixel = m). Columns: Unteraarz, Unteraarr, Unteraarl.
Table 4 :
Mean NCC scores of different algorithms | 2015-08-11T20:29:18.000Z | 2012-07-23T00:00:00.000 | {
"year": 2012,
"sha1": "9442c1e2bd86be3b259eb324dac883e955ce5df7",
"oa_license": "CCBY",
"oa_url": "https://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/I-3/359/2012/isprsannals-I-3-359-2012.pdf",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "ff1934d8594a3dc5ac55649b47ef5b9b7ed131a2",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
87585871 | pes2o/s2orc | v3-fos-license | Awareness, Concerns, and Protection Strategies Against Bloodborne Viruses Among Surgeons
Background: Surgeons are at high risk of contracting infectious viruses such as human immunodeficiency virus (HIV), hepatitis B virus (HBV), and hepatitis C virus (HCV) through exposure to patients’ blood. The purpose of this study was to assess the surgeons’ awareness of contracting bloodborne viruses. Methods: A cross-sectional study with a questionnaire distributed to 241 surgeons at King Abdulaziz Medical City - Riyadh (KAMC-R) during the period June 2017 through January 2018. Descriptive statistics were used to analyze data collected using Stata®, v14 (StataCorp LLC, College Station, Texas, USA). Categorical variables were analyzed using Pearson chi-square test. P-value of < 0.05 was considered significant. Results: A total of 241 surgeons answered the questionnaire, 179 (74.3%) surgeons were male and 62 (25.7%) were female. The mean age ± standard deviation (SD) of male surgeons was 35.8 ± 11.0 years while for females was 33.3 ± 9.1 years. The majority of our cohort were vaccinated for HBV (96% in males and 97% in females). Two-thirds of the study cohort did not know the conversion rate post-needlestick injury by HIV, HBV, and HCV. Two-thirds of the study cohort think there is a need for HIV screening before surgery. Mixed answers were received from the cohort when asked about their concern regarding contracting HIV infection from their patients; only one-third of the surgeons were extremely concerned. When asked about the risk of needlestick injury during treating patients positive for HBV, the majority of the surgeons said no. However, a significant difference between the female and male surgeons was found in which 12 of the 62 female surgeons answered yes (19.4%) compared to 11 of the 179 male surgeons (6.1%) (p = 0.002). Conclusion: The majority of our surgeons are vaccinated for HBV. However, female surgeons appear to be at higher risk of needlestick injury from HBV patients. This requires further investigation into the reasons for such high incidents. More education is needed about bloodborne viruses.
Introduction
Surgeons are at high risk of contracting infectious viruses, such as the human immunodeficiency virus (HIV), hepatitis B virus (HBV), and hepatitis C virus (HCV), through exposure to patients' blood [1][2][3][4][5]. The prevalence of HBV in the Kingdom of Saudi Arabia (KSA) decreased significantly after the introduction of the HBV vaccine; it dropped from 7% to 0.3% [6]. According to the World Health Organization (WHO), the reported prevalence of HCV in KSA is 1.8%, and the HIV annual incidence in KSA is less than 4/100,000 [7][8]. The risk of infection after an occupational exposure for healthcare workers (HCWs) to HBV (unvaccinated), HCV, and HIV is between 6% to 30%, 1.8%, and 0.3%, respectively [3]. However, the cumulative lifetime risk increases dramatically with increased years of experience for surgeons and repetitive exposure to blood [9][10][11]. Mckinney et al. reported that the cumulative risk was 10% for the surgeon operating on HIV-positive patients for more than a 30-year profession [12]. In addition, a national survey conducted in Italy demonstrated that the 30-year lifetime risk of getting an HBV, HCV, or HIV infection for surgeons was 42.7%, 34.8%, and 0.54%, respectively [13]. A study was done in Najran in 2014 to evaluate seroprevalence of HBV and HCV among medical students and HCWs which indicated a seroprevalence of HBV of 1.7% and 8.7% for medical students and HCWs, respectively [14]. In addition, the seroprevalence of HCV was 0% and 0.3% in medical students and HCWs, respectively.
Needlestick injuries are one of the occupational risks for HCWs, and it is a route for transmitting bloodborne diseases [2,4,15]. A study conducted on the Ministry of Health Hospitals of Saudi Arabia estimated that the annual incidence of percutaneous injuries was 3.2 per 100 occupied beds [16]. El-Hazmi et al. reported that 14.3% of needlestick injuries happened during surgery at King Khalid University Hospital, Riyadh [17]. However, most of the needlestick injuries are underreported by HCWs worldwide and in Saudi Arabia [15,[18][19][20][21][22][23]. A study done by Patterson et al. showed that only 17% of surgeons reported needlestick injuries [18]. In addition, a study by Samargandy et al. was done in Jeddah to evaluate the clinical consequences of occupational exposure to blood and other body fluids which revealed surgeons had a higher risk of needlestick injuries in comparison to other doctors [24]. After reviewing 326 charts of occupational exposure, all of the exposed HCWs did not seroconvert to HIV, HBV, or HCV with adequate post-exposure prophylaxis.
To the limit of our knowledge, no study has been conducted in Saudi Arabia to evaluate surgeons' awareness of infectious viruses in the operating room and their reporting of needlestick injuries. Therefore, the purpose of this study was to assess the surgeons' awareness of contracting bloodborne viruses.
Study area/setting
This study was performed at the Department of the Surgery, Obstetrics/Gynecology, and Cardiac Center at King Abdulaziz Medical City, Riyadh (KAMC-R), Saudi Arabia. A questionnaire was completed by surgeons from different subspecialties.
Study subjects
This study included all surgeons from different positions who were working at KAMC-R. Different surgical subspecialties were included (Cardiovascular, Vascular, General Surgery, Neurosurgery, Ophthalmology, Plastic, Otolaryngology, Urology, Orthopedics, Thoracic, and Obstetrics/Gynecology) as were different surgical positions, including residents, associate consultants, fellows, and consultants. Oral and maxillofacial surgeons and other healthcare workers, including nurses, medical students, and interns, were excluded.
Study design
This was a cross-sectional study with a questionnaire distributed to 241 surgeons at KAMC-R during the period of June 2017 to January 2018.
Sample size
The sample size was calculated using the Raosoft online calculator (www.raosoft.com/samplesize.html) assuming a 5% margin of error, 95% confidence level, the population size (the number of surgical staff at KAMC-R was 318), and a 50% response distribution. The necessary sample size was 175.
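For reproducibility, the reported value can be checked with the standard finite-population sample size formula, which is what the Raosoft calculator is commonly assumed to implement; the exact internals of the online tool are not documented here.

import math

def required_sample_size(population, margin=0.05, confidence_z=1.96, p=0.5):
    """Sample size for estimating a proportion with a finite-population
    correction: n0 = z^2 p(1-p)/e^2, then n = n0 / (1 + (n0 - 1)/N)."""
    n0 = (confidence_z**2) * p * (1 - p) / margin**2
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

print(required_sample_size(318))   # -> 175, matching the reported value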
Data collection methods, instruments used, measurements
The questionnaire was sent by email to surgeons. Due to an inadequate response, personal interviews were then conducted. The questionnaire was delivered in an electronic Google form, as well as a hard copy, based on the surgeons' preferences. The questionnaire was adapted and modified from two previous studies by Wright and Patterson [18,25]. The questionnaire included demographic information, such as age, gender, subspecialty, and surgical experience. It also addressed the risk of transmission, awareness of seroconversion rates, and reporting patterns of needlestick injuries.
Data management and analysis plan
Descriptive statistics were used to analyze the data collected using Stata software v14 (StataCorp LLC, College Station, Texas, USA). Categorical variables were analyzed using the Pearson chi-square test. Risk of needlestick injury was measured by the odds ratio (OR) and 95% confidence interval (CI). A P-value of < 0.05 was considered significant. This research was also presented in 2018 (Abstract: Hakeem A, Alsaigh S, Alasmari A, Aloushan A, Bin Saleh F, Yousef Z: Awareness, concerns and protection strategies against bloodborne viruses among surgeons. Patient Safety Forum, Riyadh, Saudi Arabia, April 9, 2018).
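As an illustration of this analysis (not the original Stata code), the odds ratio and Pearson chi-square test for the 2×2 table of HBV-related needlestick injuries by gender, using the counts reported in the Results, can be reproduced in Python. The Wald confidence interval shown here is one common choice and may differ slightly from the interval reported later.

import numpy as np
from scipy.stats import chi2_contingency

# Needlestick injury from an HBV-positive patient (yes/no) by gender:
# 12 of 62 female surgeons vs 11 of 179 male surgeons.
table = np.array([[12, 62 - 12],
                  [11, 179 - 11]])

chi2, p, dof, _ = chi2_contingency(table, correction=False)
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
se_log_or = np.sqrt((1.0 / table).sum())
ci = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {odds_ratio:.1f}, 95% CI {ci[0]:.1f}-{ci[1]:.1f}, chi-square p = {p:.3f}")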
Results
A total of 241 surgeons answered the questionnaire; 179 (74.3%) surgeons were males and 62 (25.7%) were females. The mean age ± standard deviation (SD) of male surgeons was 35.8 ± 11.0 years, while for females, it was 33.3 ± 9.1 years. The mean surgical experience ± SD was 7.8 ± 7.8 years for females and 8.6 ± 8.8 years for males. A summary of the baseline surgeons' demographics is outlined in Table 1. The majority of surgeons were vaccinated against HBV (96% in males and 97% in females), with no statistically significant difference between genders (p = 0.672).
Surgeons' awareness of serum conversion rates of bloodborne viruses
Almost two-thirds of the surgeons (male and female) were unaware of the post-needlestick injury seroconversion rates for HIV, HBV, and HCV (Table 3).
HIV concerns
Almost two-thirds of the surgeons believed that there is a need for HIV screening before surgery (Table 4). Mixed answers were given about their concern regarding contracting HIV from patients; only one-third of the surgeons were extremely concerned about contracting HIV (Table 4).
Reporting needlestick injuries
Almost half of the surgeons (115, 47.7%) rarely or never reported a needlestick injury (Table 5). Table 6 shows the self-reported occurrence of needlestick injuries by male and female surgeons.
Female surgeons were at a higher risk of needlestick injury while treating patients with HIV, HBV, and HCV infections. However, the only statistical significance was noticed in female surgeons treating an HBV positive patient (OR 3.7, 95% CI: 1.4 -9.7, p = 0.002).
Discussion
The aim of this study was to investigate the awareness of surgeons about bloodborne diseases. Most of our cohort of surgeons were vaccinated against HBV; however, a small proportion still remains unvaccinated. The majority of the surgeons in this study were not aware of the conversion rates of the bloodborne pathogens after needlestick injury. Female surgeons were also at higher risk of needlestick injury when treating patients with HIV, HBV, or HCV.
As evidenced by previous studies, surgeons are at increased risk of contracting bloodborne viruses [1-5, [9][10][11][12][13]. Hepatitis B vaccination is recommended for all healthcare workers by the Center for Disease Control (CDC) [4]. However, local and international studies have reported low adherence to HBV vaccination among HCWs [14,[26][27][28][29]. In contrast, 95% of surgeons were vaccinated in this study. Although surgeons were extremely concerned regarding contracting bloodborne viruses, only a small number reported a needlestick injury which is similar to previous studies [15,[17][18][19][20][21][22][23]. A lack of awareness was common among surgeons regarding the seroconversion rate as the majority of them answered the questionnaire incorrectly. Therefore, surgeons need to be more educated about the risk of transmission of bloodborne pathogens and ways of protection. Reporting needlestick injuries was not a widespread practice among the surgeons in this study, which is alarming. This has also been noted in other studies [15,[17][18][19][20][21][22][23].
In this survey, the average number of needlestick injuries per year was similar to other studies.
An interesting finding of this study was that female surgeons were more likely to sustain a needlestick injury while treating a patient positive for HBV, which underscores the importance of education about needlestick injuries and their risks. Why female surgeons are at higher risk of needlestick injury was not addressed in this survey; however, it could be that male surgeons are underreporting needlestick injuries. Relatedly, 53.8% of HCWs did not report needlestick injuries in a governmental hospital in Medina, Saudi Arabia [19]. Even in the United Kingdom (UK), under-reporting of needlestick injuries is a widespread practice among surgeons, according to a study where only 2.26% of injuries were reported to occupational health [20]. Junior surgeons were more likely to report needlestick injuries than senior surgeons (9.82% vs. 1.10%) [20]. Since most HCWs do not report their needlestick injuries, they will not receive post-exposure prophylaxis, which increases the risk of acquiring bloodborne diseases [2]. A high percentage of surgeons believe that patients should be screened for HIV, even though it is not prevalent in Saudi Arabia [8].
Based on the study findings, surgeons must implement protection strategies against bloodborne pathogens and should be obliged to do so. Also, more education to surgeons regarding the needlestick injuries protocol is required as some do not have the knowledge on how to report it. One of the study limitations was the surgeons' busy schedules; this influenced their ability to answer the questions precisely as they wanted to finish quickly which affects the accuracy of the responses. In addition, the study had a small sample size compared to similar studies. Therefore, a national survey is recommended to draw consistent conclusions.
Conclusions
In conclusion, reporting needlestick injuries is not a widespread practice among surgeons, which may lead to an increased risk of contracting bloodborne infections and less use of prophylaxis as mandated by the CDC and WHO. In addition, female surgeons appear to be at a higher risk of needlestick injury. This requires further investigation into the reasons for such high incidents.
Additional Information Disclosures
Human subjects: Consent was obtained by all participants in this study. King Abdullah International Medical Research Center issued approval RC17/143/R. The approval for the research is valid. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2019-03-31T13:32:31.130Z | 2019-03-01T00:00:00.000 | {
"year": 2019,
"sha1": "e04694f3639a5da74d71f38388266e7a8f84fb6c",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/18393-awareness-concerns-and-protection-strategies-against-bloodborne-viruses-among-surgeons.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2e66c81740eab35af1b5a4f14690a947aaf8d289",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15603150 | pes2o/s2orc | v3-fos-license | Protein Inference and Protein Quantification: Two Sides of the Same Coin
Motivation: In mass spectrometry-based shotgun proteomics, protein quantification and protein identification are two major computational problems. To quantify the protein abundance, a list of proteins must be firstly inferred from the sample. Then the relative or absolute protein abundance is estimated with quantification methods, such as spectral counting. Until now, researchers have been dealing with these two processes separately. In fact, they are two sides of same coin in the sense that truly present proteins are those proteins with non-zero abundances. Then, one interesting question is if we regard the protein inference problem as a special protein quantification problem, is it possible to achieve better protein inference performance? Contribution: In this paper, we investigate the feasibility of using protein quantification methods to solve the protein inference problem. Protein inference is to determine whether each candidate protein is present in the sample or not. Protein quantification is to calculate the abundance of each protein. Naturally, the absent proteins should have zero abundances. Thus, we argue that the protein inference problem can be viewed as a special case of protein quantification problem: present proteins are those proteins with non-zero abundances. Based on this idea, our paper tries to use three very simple protein quantification methods to solve the protein inference problem effectively. Results: The experimental results on six datasets show that these three methods are competitive with previous protein inference algorithms. This demonstrates that it is plausible to take the protein inference problem as a special case of protein quantification, which opens the door of devising more effective protein inference algorithms from a quantification perspective.
Introduction
Mass spectrometry (MS)-based shotgun proteomics is currently the most widely used method for the identification and quantification of proteins [1]. As shown in Fig.1, it first digests the sample into a mixture of peptides by enzymes such as trypsin. The resulting peptide mixtures are scanned by tandem mass spectrometry (MS/MS) to generate a set of MS/MS spectra. Then the peptide search engine reports a set of peptide-spectrum matches (PSMs) by searching the MS/MS spectra against a protein database. From these peptide identifications, we infer the existence of proteins with protein inference algorithms and calculate the relative or absolute abundances of proteins with protein quantification approaches [2,3].
Until recently, people tackle the identification and quantification of proteins as two individual and subsequent tasks: first select a subset of proteins that are truly present and then determine the abundances of these proteins. For both problems, many elegant approaches have been developed in the past decades. The readers can refer to two recent reviews [2,3] for details.
The starting point of this paper is the observation of some key underlying connections between these two problems. In protein inference, the objective is to generate a binary presence indicator value (1 or 0) for each candidate protein. In this regard, "protein existence inference" is probably more accurate in describing the original protein inference task. In protein quantification or protein abundance inference, the objective is to determine the abundances of a set of proteins. Clearly, if one protein is not present, its abundance should be 0. Hence, we argue that the protein inference problem can be investigated from the perspective of protein quantification: present proteins are those proteins with non-zero abundances. In other words, we can adopt available protein quantification methods directly to solve the protein inference problem. This new angle may enable a better understanding of the protein inference problem and help in devising improved or hybrid methods by combining elements from two areas that would otherwise be considered incompatible.
As a proof of concept, we investigate the feasibility of solving protein inference problem with existing protein quantification methods in the context of label-free proteomics. In label-free quantitative proteomics studies, quantification methods based on peak ion intensities (from MS data) [34] and spectral counting (from MS/MS data) [35,36] have been widely used.
Spectral counting measures the abundance of each protein based on the number of MS/MS spectra that match its constituent peptides. Compared to peptide intensity values, spectral counting information is easier to obtain since we just need to count the number of the MS/MS spectra. In this paper, we use spectral counting as the quantification approach for solving the protein inference problem.
We first try two simple spectral counting methods in the literature. In both methods, the protein abundance is calculated as the sum of peptide abundance. Their difference lies in how to handle the shared peptide. If the abundance of one shared peptide is b and it has k parent proteins, then b is used as its abundance in the first method while b/k is used in the second method. These two methods assume that all the candidate protein are present in the sample and they have non-zero abundances. However, this assumption contradicts the objective of protein inference: distinguish present proteins with non-zero abundances from absent proteins with zero abundances. Thus, we come up with another linear programming model to shrink some protein abundances to zero.
To our knowledge, our paper is the first attempt to use protein quantification methods for protein inference. Such an attempt connects two important computational problems that have long been investigated separately. The experimental results show that we can obtain better performance in most datasets even when the most simple version of spectral counting is utilized. Hence, the advance in protein quantification studies will promote the development of more effective protein inference algorithms.
In Section 2, we describe the details of three methods. Section 3 shows the experimental results on six datasets. Section 4 concludes the paper.
Figure 1: Protein identification and quantification using mass spectrometry in shotgun proteomics. There are three major computational problems: peptide identification, protein inference and protein quantification.
Methods
As shown in the left side of Fig. 2, the input of the protein inference problem can be represented as a tripartite graph G = (X ∪ Y ∪ Z, E1 ∪ E2), where X, Y and Z are the sets of l MS/MS experimental spectra, m identified peptides and n candidate proteins, respectively. For all xi ∈ X, yj ∈ Y, there is an edge (xi, yj) ∈ E1 if and only if spectrum xi matches the peptide yj in the peptide identification results. Similarly, (yj, zk) ∈ E2 means that peptide yj is one part of the protein sequence zk. Each MS/MS spectrum corresponds to one and only one identified peptide, whereas some peptides may have more than one matching spectrum, such as peptides y2 and y3 in Fig. 2.
The relationship between the peptides and proteins is more complex: candidate proteins may have several identified peptides and peptides can be shared by multiple proteins. How to correctly distribute these shared peptides is one of the most challenging problem in protein inference. We first formulate the protein inference problem as a special case of protein quantification problem. The objective of protein inference is to determine whether each candidate protein is present in the sample. The aim of protein quantification is to estimate the abundances of a set of proteins. Clearly, if one protein is not present, its abundance should be 0. In this paper, the protein inference problem is re-visited from the perspective of protein quantification through seeking those proteins with non-zero abundances.
To obtain the protein abundance, we start with calculating the peptide abundance. The abundance bj of peptide yj is calculated as the sum of PSM probabilities (or scores):

bj = Σ_{i: (xi, yj) ∈ E1} ai,   (1)

where ai is the probability that spectrum xi matches peptide yj. ai can also be viewed as the weight of edge (xi, yj) ∈ E1, which can be obtained from peptide identification algorithms such as Mascot [37] or post-processing tools such as PeptideProphet [38]. In traditional spectral counting methods, the peptide abundance is simply the number of MS/MS spectra identified for each peptide. Here, we generalize this spectral counting method to account for the quality of PSMs. More precisely, the contribution of each spectrum to the peptide abundance is a quantitative value between 0 and 1 rather than a fixed value of one. Such an extension is extremely important for protein inference since it may help us to distinguish proteins with the same number of PSMs.
To calculate the protein abundance, we need to distribute the abundance of each peptide to its parent proteins. The main difficulty is how to deal with the degenerate peptide that is shared by more than one protein since such a peptide can be generated by any subset of its parent proteins.
There are several approaches to solve the shared peptide problem in protein quantification [32,39,40], as shown in the right side of Fig.2. The first approach is to simply discard the shared peptides and only use the unique peptides to calculate the protein abundance. But this approach has one disadvantage: it causes the information lost, especially for proteins whose identified peptides are all shared peptides. In Fig.2, if we delete the shared peptide y 2 , then proteins z 2 and z 3 don't have any identified peptide and they would be considered as being absent in the sample. In fact, at least one of these two proteins must be present if we assume the existence of peptide y 2 . Alternatively, we can use both unique and shared peptides to estimate the protein abundance. In the second approach, the abundance of each shared peptide is utilized in the abundance calculation of all its parent proteins. In other words, each peptide is counted multiple times so that the abundances of some proteins may be over-estimated. We call this method "multiple counting" in this paper. For example, peptide y 2 in Fig.2 is counted twice in the second approach, which means that we artificially increase the abundance of peptide y 2 from b 2 to 2 * b 2 . The third approach divides the abundance of one shared peptide into different parts and then distributes each part to one of its parent proteins. This approach ensures that each peptide is "counted" only once. One typical representative in this category is the "equal division" method, which partitions the peptide abundance into k equal parts (k is the number of proteins that share this peptide).
Since both multiple counting and equal division are the most popular and simple approaches for spectral counting based protein quantification, we first try these two methods for protein inference and see how they perform. Both of these two methods assume that all the candidate proteins are present in the sample and should have non-zero abundances. However, this assumption doesn't hold in protein inference because some absent proteins should have zero abundances. Thus, a new linear programming model is proposed as well to distribute peptide abundance automatically and set the abundances of some proteins to be zero.
Multiple Counting
In this method, shared peptides are used in the same way as unique peptides and receive no special treatment. The protein abundance is simply the sum of the peptide abundances of all shared and unique peptides mapped to the protein:

$$c_k = \sum_{j:(y_j, z_k)\in E_2} b_j,$$

where c_k is the abundance of protein z_k. If peptide y_j has q_j parent proteins, then it is counted q_j times and its actual abundance used in the calculation is q_j * b_j.
Equal Division
Different from the above method, which counts shared peptides multiple times, this method counts each peptide only once. It equally distributes the abundance of each shared peptide among its parent proteins:

$$c_k = \sum_{j:(y_j, z_k)\in E_2} \frac{b_j}{q_j},$$

where q_j is the number of candidate proteins sharing peptide y_j. If peptide y_j is a unique peptide, then q_j = 1.

Figure 2: Three approaches used in spectral counting for solving the shared peptide problem. y_1 and y_3 are unique peptides while y_2, y_4 and y_5 are shared peptides. The abundance of peptide y_j is represented by b_j. We use peptide y_2 as an example to explain how these three approaches work.
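The two counting schemes above can be sketched as follows; the edge map E_2 is represented as a simple dictionary, and the toy numbers are purely illustrative.

```python
from collections import defaultdict

def protein_abundances(peptide_abundance, peptide_to_proteins, mode="equal"):
    """Distribute peptide abundances b_j to parent proteins.

    peptide_to_proteins maps each peptide id to its parent protein ids (edges in E_2).
    mode="multiple": every parent receives the full b_j (shared peptides counted once per parent).
    mode="equal":    b_j is split evenly among its q_j parent proteins.
    """
    c = defaultdict(float)
    for pep, b_j in peptide_abundance.items():
        parents = peptide_to_proteins.get(pep, [])
        if not parents:
            continue
        share = b_j if mode == "multiple" else b_j / len(parents)
        for prot in parents:
            c[prot] += share
    return dict(c)

# Toy example: y2 is shared by two proteins, y1 is unique.
b = {"y1": 2.0, "y2": 1.0}
e2 = {"y1": ["z1"], "y2": ["z2", "z3"]}
print(protein_abundances(b, e2, mode="multiple"))  # {'z1': 2.0, 'z2': 1.0, 'z3': 1.0}
print(protein_abundances(b, e2, mode="equal"))     # {'z1': 2.0, 'z2': 0.5, 'z3': 0.5}
```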
Linear Programming Model
For each identified peptide y_j, the peptide abundance can be computed as

$$b_j = \sum_{k:(y_j, z_k)\in E_2} d_{jk},$$

where d_jk can be interpreted as the abundance that protein z_k contributes to peptide y_j. The variable d_jk serves as the bridge between peptide abundance and protein abundance: on one hand, we can use d_jk to explain the known peptide abundance; on the other hand, we can calculate the unknown protein abundance through d_jk. Therefore, the protein quantification based protein inference problem is equivalent to finding an optimal matrix D = (d_jk). According to the above analysis, we propose a linear programming (LP) model to solve the protein inference problem:

$$\min_{D}\ \sum_{k=1}^{n}\ \max_{j:(y_j, z_k)\in E_2} d_{jk} \qquad (6)$$
$$\text{s.t.}\quad b_j - \sum_{k:(y_j, z_k)\in E_2} d_{jk} = 0 \quad \text{for every peptide } y_j, \qquad (7)$$
$$d_{jk} = 0 \ \text{if}\ (y_j, z_k)\notin E_2, \qquad d_{jk} \ge 0. \qquad (8)$$

Some further illustrations of the model are listed as follows: • Constraint (6) finds the maximum value in each column vector d_k (the kth column of matrix D). Since we regard the proteins with non-zero abundances as being present in the sample, the abundances of absent proteins should be zero. Therefore, we minimize the sum of the maximum peptide abundance from each protein in the objective function so as to shrink some protein abundances to 0.
• The left-hand side of constraint (7) is the difference between the observed and predicted peptide abundance. b j is viewed as the observed value and the sum of d jk is the predicted value. The zero difference means that each peptide should be "counted" only once in distributing the abundance of shared peptide.
• In constraint (8), we set d jk = 0 if (y j , z k ) / ∈ E 2 and consider only the remaining elements of matrix D as variables. This greatly improves the running efficiency of the LP model.
• Dost et al. [33] have presented a similar LP model in their F 2 formulation. It aims at inferring the protein abundance and peptide detectability simultaneously. The biggest difference between these two LP models is that our model sets some protein abundances to 0 while Dost's method doesn't.
After obtaining the matrix D, the protein abundance c_k is calculated from the kth column of D.
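A possible way to set up the LP described above, using scipy's linprog with auxiliary variables t_k to linearize the per-protein column maxima, is sketched below. This is our reading of constraints (6)-(8), not necessarily the paper's exact formulation, and the final protein score is taken here as the column maximum, which may differ from the paper's definition of c_k.

```python
import numpy as np
from scipy.optimize import linprog

def solve_lp(b, edges, n_peptides, n_proteins):
    """LP sketch: distribute peptide abundances b_j over the peptide-protein
    edges E_2 while minimizing the sum of per-protein column maxima.

    b      : array of peptide abundances, length n_peptides.
    edges  : list of (j, k) pairs, peptide j mapping to protein k.
    Returns (D, scores) where D[j, k] holds d_jk and scores[k] is the
    column maximum used here as a protein score (an assumption).
    """
    n_d = len(edges)
    n_var = n_d + n_proteins                    # variables: all d_jk, then t_k
    cost = np.zeros(n_var)
    cost[n_d:] = 1.0                            # minimize sum_k t_k

    # d_jk - t_k <= 0, so t_k becomes the max of column k at the optimum.
    A_ub = np.zeros((n_d, n_var))
    for e, (j, k) in enumerate(edges):
        A_ub[e, e] = 1.0
        A_ub[e, n_d + k] = -1.0
    b_ub = np.zeros(n_d)

    # sum_k d_jk = b_j for every peptide j (constraint 7).
    A_eq = np.zeros((n_peptides, n_var))
    for e, (j, k) in enumerate(edges):
        A_eq[j, e] = 1.0
    b_eq = np.asarray(b, dtype=float)

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n_var, method="highs")
    D = np.zeros((n_peptides, n_proteins))
    for e, (j, k) in enumerate(edges):
        D[j, k] = res.x[e]
    return D, D.max(axis=0)
```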
Converting Scores into Probabilities
After obtaining the protein abundance, it is beneficial to convert the abundance into a well-calibrated probability. The main reason is that the probability estimate allows us to select an appropriate threshold for reporting the present proteins. In fact, the problem of converting ranking scores into estimated probabilities has been widely investigated in different domains (e.g., [41]). In this paper, we use the method proposed in [41] to fulfill this task. We first estimate the probability p_k that protein z_k is present in the sample given its abundance c_k:

$$p_k = \Pr(z_k = 1 \mid c_k) = \frac{\Pr(c_k \mid z_k = 1)\Pr(z_k = 1)}{\Pr(c_k \mid z_k = 1)\Pr(z_k = 1) + \Pr(c_k \mid z_k = 0)\Pr(z_k = 0)} = \frac{1}{1 + \exp(-f_k)},$$

where

$$f_k = \log \frac{\Pr(c_k \mid z_k = 1)\Pr(z_k = 1)}{\Pr(c_k \mid z_k = 0)\Pr(z_k = 0)}.$$
Assuming that f_k has a Gaussian distribution with equal covariance matrices, Equation (10) becomes

$$p_k = \frac{1}{1 + \exp(A c_k + B)}.$$
Now, we need to learn the parameters A and B. Let r_k be a binary variable whose value is 1 if protein z_k is present in the sample and 0 otherwise. Then R = (r_1, r_2, ..., r_n) is the presence indicator vector of the n candidate proteins. If we assume that the existence of each protein is independent of the other proteins, the probability of observing R given C is

$$\Pr(R \mid C) = \prod_{k=1}^{n} p_k^{\,r_k}\,(1 - p_k)^{1 - r_k},$$

where C = {c_1, c_2, ..., c_n}. The optimal parameter values should maximize Pr(R|C), i.e., minimize the following negative log-likelihood function:

$$LL(R \mid C) = -\sum_{k=1}^{n} \big[\, r_k \log p_k + (1 - r_k)\log(1 - p_k) \,\big]. \qquad (14)$$

Equation (14) is based on the assumption that the indicator vector R is already known. However, we do not have such information in the protein inference process. Thus, we treat the r_k as hidden variables and employ an EM algorithm [41] to simultaneously estimate A, B and R.
The EM algorithm uses an iterative procedure to estimate the parameter values θ = {A, B}. The procedure consists of two steps: set r_k^{s+1} = E(r_k | C, θ^s) (E-step) and compute θ^{s+1} = arg min_θ LL(R^{s+1} | C) (M-step), where s is the iteration index. During the E-step, the unknown vector R is replaced by its expected value R^{s+1} under the current estimated parameter values θ^s. Since θ^s is fixed, LL(R|C) is minimized by setting r_k = 0 if A c_k + B > 0, or r_k = 1 if A c_k + B ≤ 0. During the M-step, a new parameter estimate θ^{s+1} is computed by minimizing LL(R|C) given the R^{s+1} values calculated in the first step. Since R^{s+1} = [r_k^{s+1}] is fixed, minimizing LL(R|C) with respect to A and B is a two-parameter optimization problem, which can be solved using the model-trust algorithm described in [42].
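A rough sketch of this EM-style calibration is given below; it follows the E-step described in the text but replaces the model-trust M-step of [42] with a generic optimizer, so it should be read as an approximation rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def calibrate_probabilities(c, n_iter=50):
    """EM-style sketch: fit p_k = 1 / (1 + exp(A*c_k + B)) to protein
    abundances c when the presence indicators r_k are hidden variables."""
    c = np.asarray(c, dtype=float)
    A, B = -1.0, 0.0                      # initial guess: larger abundance, larger probability

    def nll(params, r):
        # negative log-likelihood LL(R|C) for fixed labels r
        p = np.clip(expit(-(params[0] * c + params[1])), 1e-12, 1 - 1e-12)
        return -np.sum(r * np.log(p) + (1 - r) * np.log(1 - p))

    for _ in range(n_iter):
        r = (A * c + B <= 0).astype(float)            # E-step, as described in the text
        A, B = minimize(nll, x0=[A, B], args=(r,)).x  # M-step with a generic optimizer
    return expit(-(A * c + B))                        # calibrated probabilities p_k
```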
Experimental Results
To test the performance of our methods, we have compared them with ProteinProphet [9] and MSBayesPro [13] on six datasets.
Datasets
We use six datasets that are publicly available and their URLs are given in Table 1. Among these six datasets, 18 mixtures [43], Sigma49 and yeast [25] have a corresponding protein reference set as the set of ground-truth proteins. An identified protein is labeled as a true identification if it is present in the protein reference set. Another three datasets, DME [44], HumanMD [27] and HumanEKC [25], have no such sets. Thus, we use a target-decoy strategy for performance evaluation, in which the MS/MS spectra are searched against a mixed protein database containing all target protein sequences and an equal number of decoy sequences. Using this strategy, an identified protein is considered as a true identification if it comes from the target protein database.
Mixture of 18 Purified Proteins (18 mixtures). The first dataset is a synthetic mixture of 18 highly purified proteins from ISB Standard Protein Mix Database [43]. The protein database consists of 1819 protein sequences including 18 standard proteins with contaminant entries appended to the database.
Sigma49 Dataset. Sigma49 is a synthetic mixture of 49 human proteins. The database used for peptide identification is composed of 15682 Swiss-Prot human protein sequences.
Yeast Dataset. This dataset has been used in [25]. The reference set is generated by an intersection of identified proteins from 4 MS-based proteomics datasets and 3 non-MS-based datasets. It contains 4265 proteins observed in either two or more MS datasets or any of non-MS datasets and is available at http://www.marcottelab.org/MSdata/gold/yeast.html. The database used in the experiment contains 6,714 protein sequences.
D. melanogaster Dataset (DME). DME comes from the embryonal Kc 167 cell line of D. melanogaster [44]. Its corresponding protein database is the release 5.2 from Flybase with 20,726 entries.
HumanMD Dataset. This dataset has been used in [27]. Its sample is a medulloblastoma Daoy cell line obtained from American Type Culture Collection (ATCC). The protein database is Ensembl version 49.36k with 22,997 entries.
HumanEKC Dataset. HumanEKC has been used in [25]. It is generated from a human embryonic kidney T293 cell line of ATCC. Its database is the same as that of HumanMD dataset.
Peptide Identification
We use X!Tandem (v2010.10.01.1) [45] as the peptide identification software. For the 18 mixtures, Sigma49 and yeast datasets, all MS/MS spectra are searched only against the target protein databases. For the DME, HumanMD and HumanEKC datasets, the spectra are searched against both the target and decoy protein databases.
Protein Inference
We compare our methods with ProteinProphet and MSBayesPro. ProteinProphet is the most popular method for protein inference so far. MSBayesPro is one representative of recently proposed methods and its software package is publicly available. We run ProteinProphet with its default parameter setting. MSBayesPro uses peptide detectability information as one part of its input. For some peptides whose detectabilities cannot be predicted by the current software, we calculate them by ourselves: the detectability value = median(predicted detectability scores from the same parent protein)/3.
For the proteins that cannot be distinguished with respect to their identified peptides, ProteinProphet and our LP model put all of them into the same group. Whenever we refer to the number of true positives (TPs) or false positives (FPs) identified at a threshold, or use these values in a calculation, all proteins in the group are reported and the group probability is used as their protein probability.
Results
We evaluate the performance of different methods using a curve that plots the number of TPs as a function of q-value. An identified protein is labeled as a TP if it is present in the protein reference set or the target protein sequence database, and as an FP otherwise. Given a certain probability threshold t, suppose there are T_t TPs and F_t FPs; the false discovery rate (FDR) is estimated as FDR_t = F_t / (F_t + T_t). The corresponding q-value is defined as the minimal FDR at which a protein is reported: q_t = min_{t' ≤ t} FDR_{t'}. The curve is produced by varying the probability threshold t. Figure 3 plots the number of TPs identified by the five methods at different q-values. It shows that our methods are competitive with existing protein inference algorithms. Across the six datasets, our three methods always achieve zero FPs among the highest-ranking proteins, while the other two algorithms do not have this property. This fact indicates that our methods have strong discriminative power in their protein scores. More specifically, we have the following important observations.
First, the multiple counting method performs best on the DME, HumanMD and HumanEKC datasets. For DME and HumanMD, it reports the largest number of TPs at zero q-value. For HumanEKC, it identifies only 17 fewer proteins than ProteinProphet at q-value = 0. Even though the multiple counting method does not maintain this excellent performance on the 18 mixtures and Sigma49 datasets, it does not perform the worst either.
Second, equal division is the best performer (or tied with other algorithms) on the 18 mixtures, Sigma49 and yeast datasets: at zero q-value, it identifies the most TPs on these three datasets. On the DME, HumanMD and HumanEKC datasets, equal division does not perform the worst either; it beats at least one algorithm on each of them.

Figure 3 (caption, continued): We only plot the curve up to 0.1 along the x-axis for the yeast, DME and HumanMD datasets, since we are particularly interested in the performance of different algorithms when the q-value or FDR is very small. For HumanEKC, the maximum q-value is < 0.04, so we choose 0.03 as the limit of the x-axis. We cannot set the q-value range very small for the 18 mixtures and Sigma49 datasets since the probabilities of the top-scoring proteins in several algorithms are all equal to one; hence we have to skip these proteins with the same probabilities and then calculate the q-value of the first protein with a different probability.

Figure 4: Identification performance comparison between the generalized spectral counting methods (MP, ED, LP) and the traditional spectral counting methods (NMP, NED, NLP). The y-axis is the number of true positives and the x-axis is the corresponding q-value (the minimum FDR to report these proteins). The abbreviations for the different methods are the same as those in Figure 3.
Third, the LP model exhibits the most stable identification performance among the five methods. More precisely, its performance is at least the third best across all the datasets. The other four algorithms cannot achieve this property and perform worse than at least three algorithms on some datasets. The number of such datasets is 2, 1, 2 and 4 for multiple counting, equal division, ProteinProphet and MSBayesPro, respectively.
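For reference, the FDR and q-value bookkeeping used to build these curves can be sketched as follows (ties and protein grouping are ignored here for simplicity).

```python
import numpy as np

def qvalues(scores, is_true):
    """FDR and q-values for a ranked protein list.

    scores : protein probabilities, higher means more confident.
    is_true: 1 if the protein is in the reference set / target database, else 0.
    FDR_t = F_t / (F_t + T_t) going down the ranked list; the q-value of a
    protein is the minimum FDR over all thresholds at which it is reported.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(is_true, dtype=float)
    order = np.argsort(-scores)
    tp = np.cumsum(labels[order])
    fp = np.cumsum(1.0 - labels[order])
    fdr = fp / np.maximum(tp + fp, 1.0)
    q_sorted = np.minimum.accumulate(fdr[::-1])[::-1]   # enforce monotone q-values
    q = np.empty_like(q_sorted)
    q[order] = q_sorted                                 # map back to the input order
    return q
```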
In the calculation of protein abundance, we generalize the number of MS/MS spectra to the sum of PSM probabilities. We expect this extension to help us distinguish proteins with the same number of PSMs and thus further improve identification performance. To show this, Figure 4 presents the performance gain when the generalized spectral counting is used instead of the traditional spectral counting. The experimental results of the three methods on the six datasets agree with our expectation: using the sum of PSM probabilities performs better than using the number of PSMs.
After obtaining the protein abundance, we use an EM algorithm to convert the abundance score into a well-calibrated probability. Alternatively, we could simply normalize the abundance by dividing by the maximum of all calculated protein abundances. This also gives a protein score between 0 and 1 and keeps the overall distribution of the original protein abundances unchanged. Figure 5 shows why we adopt the more complex probability estimation approach. It compares the distribution of the normalized score and the estimated probability using the protein abundance calculated by the LP model. For each of the six datasets, the area under the probability estimation curve is larger than that under the normalized score curve, indicating that the probability estimate has a more uniform distribution than the normalized protein score. Furthermore, the estimated probabilities of the top-ranking proteins are very close to, but not exactly, one, which allows for distinction at a fine level.
Conclusion
Protein inference and protein quantification have been considered as two individual computational problems for a long time. In this paper, we investigate the feasibility of solving protein inference problem with existing protein quantification methods in the context of label-free protein quantification. The experimental results show that such a new angle enables us to obtain better identification performance even with some very simple quantification approaches available in the literature.
We have tested three spectral counting methods for solving the protein inference problem. These three methods can achieve good performance but none of them are consistently the best method on all the datasets. Thus, it is still necessary to develop better algorithms. In the future work, we plan to try more quantification methods to check if we can further improve the identification performance. | 2012-10-09T00:36:26.000Z | 2012-10-09T00:00:00.000 | {
"year": 2012,
"sha1": "91a0913f88a3c066a9bfaa66b90e546b8a13b543",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "91a0913f88a3c066a9bfaa66b90e546b8a13b543",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Biology",
"Mathematics"
]
} |
267117717 | pes2o/s2orc | v3-fos-license | Rapid Plaque Progression in a Patient with Non-ST-Segment Elevation Acute Coronary Syndrome: A Case Report
Acute coronary syndrome (ACS), diagnosed by optical coherence tomography (OCT), is caused primarily by plaque rupture, plaque erosion, and calcified nodules. Plaque erosion is more common in patients with non-ST-segment elevation myocardial infarction than ST-segment elevation myocardial infarction. This study reports the details of a case of ACS caused by massive thrombus formation due to plaque erosion. The factors associated with plaque erosion and thrombosis could not be confirmed on the basis of clinical examination findings and the physicians’ experience; therefore, the initial diagnosis of an ischemic event was replaced by a diagnosis of rapid plaque progression, as microscopically confirmed by OCT. Therefore, OCT examination must be performed for blurred or “rapidly progressing” lesions identified through angiography.
Introduction
Non-ST-segment elevation acute coronary syndrome (NSTE-ACS) is acute thrombosis caused by severe stenosis of the coronary artery and/or rupture or erosion of a vulnerable plaque, with or without vasoconstriction or microvascular embolism, thus decreasing coronary artery blood flow and leading to myocardial ischemia [1]. Patients with non-ST-segment elevation myocardial infarction (NSTEMI) are more prone to plaque erosion than patients with ST-segment elevation myocardial infarction (STEMI). In this patient, extensive thrombus formation caused by plaque erosion, rather than rapid progression of the plaque itself, underlay the apparent ischemic event. Consequently, optical coherence tomography (OCT) examination must be performed for blurred or "rapidly progressing" lesions identified via angiography.
worsened for 1 day. The patient had no history of smoking or drinking. She had a family history of coronary heart disease, and her mother had died of heart disease. Her physical examination after admission indicated the following findings: temperature 36.3 °C, pulse rate 71 bpm, respiration 19 breaths/minute, and blood pressure 132/78 mmHg. She had clear respiratory sounds in both lungs, without dry or wet rales; no murmur was heard in the auscultation area of each valve; and she had a soft abdomen without tenderness. The ECG after admission showed sinus rhythm and was normal (Figure 1). Her troponin was 1.7 μg/L (reference value 0.010-0.025 μg/L). She was diagnosed with acute NSTEMI (Killip grade I). On May 10, 2021, coronary angiography (CAG) was performed, and the left main artery was normal. The degree of stenosis was 40%-50% in the proximal and middle segments of the left anterior descending branch (LAD) and 70%-80% in the middle segment (Figure 2A). The blood flow myocardial infarction thrombolysis test (TIMI) grade was level 3. The degree of stenosis in the distal segment of the left circumflex branch (LCX) was 30%-40% (Figure 2B), and the blood flow TIMI grade was level 3. The degree of stenosis in the middle segment of the right coronary artery was 20%-30%, and the blood flow TIMI grade was level 3. Because the lesions in the middle of the LAD were blurred, and the patient's troponin level was elevated, the culprit lesion might have been located in the LAD. Therefore, OCT examination was suggested for the patient, but her family refused and requested conservative management. After CAG, she took aspirin 100 mg (once per day), ticagrelor 90 mg (twice per day), atorvastatin 20 mg (once at night), metoprolol tartrate 6.25 mg (twice per day), and isosorbide mononitrate 20 mg (twice per day). No further symptoms of angina occurred.
On June 28, 2021, the patient experienced sudden chest pain during intense activity, which was relieved after resting for several minutes. Starting at 09:00 on June 29, 2021, the patient's intermittent chest pain was relieved within several minutes. At 22:00, the symptoms of sudden chest pain were more severe than before, and were accompanied by pain in the left upper limb, which lasted for 30 minutes. She subsequently visited the emergency department. The ECG showed a sinus rhythm and was generally normal (Figure 3A). The patient was admitted to the hospital on June 30, 2021, and was diagnosed with unstable angina. During hospitalization, several episodes of chest pain symptoms occurred. During the attacks, dynamic changes were observed in the electrocardiogram coinciding with the onset of chest pain, indicating that her sinus rhythm and the ST segment of leads I, II, aVF, and V4-V6 were slightly depressed (<0.05 mV) (Figure 3B). After she took a "Suxiao Jiuxin pill," her symptoms were relieved. (The patient could not tolerate nitrate drugs.) On July 2, 2021, CAG examination showed that the left main artery was normal, the degree of stenosis in the proximal and middle segments of the LAD was 40%-50%, and the degree of stenosis in the middle segment of the LAD was 70%-80% (Figure 4A), with level 3 TIMI blood flow. The degree of stenosis in the distal segment of the LCX was 99% (Figure 4B), and the TIMI blood flow was at level 2. The degree of stenosis in the middle segment of the right coronary artery was 20%-30%, and the TIMI blood flow was at level 3. After the patient's family consented, an OCT examination was performed. Because of severe LCX lesions, balloon predilation was performed before the OCT examination. The results indicated fibrous plaques, a torn intima, and suspected organized thrombi. The minimum lumen area was 1.28 mm² (Figure 5A-C). After redilation with a predilated balloon, a 3.0 × 23 mm drug-coated stent (MicroPort) was delivered through the guidewire to the distal lesion of the LCX, and the stent was released at a pressure of 8 atm (1 atm = 101.325 kPa). CAG (Figure 6) and OCT examination were performed after noncompliant high-pressure balloon dilation. The OCT examination showed satisfactory stent expansion and good adhesion, with a minimum stent internal area of 4.86 mm² (Figure 6A-C). The OCT examination of the LAD showed fibrous plaques, organized thrombi, and a minimum lumen area of 1.36 mm² (Figure 7A-C). Therefore, a 2.75 × 18 mm drug-eluting stent (Medtronic) was delivered through the guidewire to the middle lesion of the LAD at a pressure of 12 atm. CAG (Figure 8) and OCT examination were performed after noncompliant high-pressure balloon dilation. The OCT examination results indicated satisfactory stent expansion, good adhesion, and a minimum lumen area of 4.19 mm² (Figure 8A, B). After PCI, the patient regularly took 100 mg aspirin (once per day), 75 mg clopidogrel (once per day), 10 mg rosuvastatin (once at night), and 47.5 mg metoprolol succinate (once per day). Because she was unable to tolerate the associated suffocation symptoms, ticagrelor was replaced with oral clopidogrel. The aggregation rates of AA and ADP in platelet function tests during hospitalization were 7.62% and 23.06%, respectively (clopidogrel was replaced by ticagrelor during the second hospitalization). In the follow-up to date, no further symptoms of angina have been observed.
Discussion
NSTE-ACS is acute thrombosis caused by severe stenosis of the coronary artery and/or rupture or erosion of a vulnerable plaque, with or without vasoconstriction or microvascular embolism, thus decreasing coronary artery blood flow and leading to myocardial ischemia [1]. Jia et al. [2] have found that the main manifestations of ACS on OCT imaging are plaque rupture, plaque erosion, and calcified nodules. Patients with NSTEMI are more prone to plaque erosion than patients with STEMI. Plaque erosion is characterized by an abnormal or discontinuous surface of the vascular endothelium with attached thrombi, and no damage to the fibrous cap. Thrombus organization refers to the process after thrombus formation wherein endothelial cells and fibroblasts grow from the vascular wall toward the thrombus. The granulation tissue extends into and gradually replaces the thrombus, leading to organization. An organized thrombus adheres to the blood vessel wall and is not easily detached. The newly formed endothelial cells in the organized thrombus cover the cracks generated by the drying up of the thrombus, forming labyrinthine but communicating channels that allow partial communication of blood flow upstream and downstream of the thrombus. This phenomenon, known as "recanalization", has been described in several studies and case reports [3][4][5]. The occasional vascular lesions with a "woven" appearance found on CAG images are actually organized thrombus recanalization, as observed on OCT images. Multiple recanalization channels are referred to as "honeycomb vasculature," "Swiss cheese," or "lotus root lesions." Our patient was diagnosed with NSTE-ACS during both hospitalizations. OCT examination did not reveal plaque rupture but instead indicated an organized thrombus adhering to the surfaces of fibrous plaques. Plaque erosion led to thrombus formation, indicating that the severe stenosis of the lumen did not originate from the plaque itself but from erosion of the plaque, which formed a thrombus that did not completely occlude the lumen. Through thrombus organization, the effective area of the lumen decreased; the thrombus might therefore have been a recanalized thrombus rather than a multichannel organized thrombus. Because of the large volume of the thrombus, the effective (single-channel) lumen area decreased, leading to the occurrence of NSTE-ACS.
Men younger than 50 years or women younger than 70 years are at high risk of plaque erosion [6]. A multicenter retrospective study has shown that patients with plaque erosion are often younger than 68 years, and have a history of angina pectoris, no history of diabetes, hemoglobin >150 g/L, and normal renal function. Patients with NSTE-ACS with these five clinical characteristics are prone to plaque erosion, which occurs with an incidence as high as 73.1% [7]. Some studies have shown that the lesions in patients with NSTE-ACS caused by plaque erosion occur primarily in the middle segment of the LAD [8]. From a pathological perspective, in plaque erosion the fibrous cap on the surface of the plaque is intact and the damage occurs to endothelial cells. Studies have shown that, in addition to endothelial damage, changes in the extracellular matrix, particularly elevated levels of proteoglycans and hyaluronic acid, can be observed on the surfaces of eroded plaques [9,10]. Simultaneously, many neutrophils aggregate, causing endothelial cell damage, activating platelets and the coagulation system, and leading to thrombus-induced ACS. During our patient's two hospitalizations, laboratory examinations revealed neutrophil levels within the upper limit of the normal range. The C-reactive protein level (9.4 mg/L) and erythrocyte sedimentation rate (25 mm/h) were slightly elevated. Immunological indicators and tumor markers were normal. From a clinical perspective, we were unable to confirm factors related to plaque erosion and thrombosis. However, the extensive thrombus formation caused by plaque erosion in the patient's vasculature, rather than rapid progression of the plaque itself, explained the ischemic event, and OCT confirmed this diagnosis from a microscopic perspective. Consequently, OCT examination must be performed for blurry or "rapidly progressing" lesions found via CAG.
In addition, our patient experienced rapid plaque progression, caused primarily by plaque instability (particularly that of vulnerable plaques, which are likely to rupture), as well as plaque erosion accompanied by thrombosis or intraplaque bleeding. Inflammatory infiltration, lipid accumulation, and positive vascular remodeling of plaques are all involved in the rapid progression of plaques [11]. Some studies have shown that the vascular adventitia, the connective tissue of the adventitia, and adipose tissue affect the vulnerability of atherosclerotic plaques [12]. In one case report, a 41-year-old man showed nonculprit lesions progressing from 30% stenosis to occlusive lesions after 10 days. However, this patient did not undergo intravascular imaging examination. That rapid plaque progression was considered to be related to chronic inflammation caused by long-term smoking, dyslipidemia, and other factors [13]. Intraplaque hemorrhage was found in our patient's OCT images, but no significant microvessel formation was observed within the plaques. However, in the early stages of neovascularization, incomplete maturation and fragility of the vessels can lead to bleeding within plaques, thus increasing plaque instability. Plaques accompanied by intraplaque hemorrhage often have thin fibrous caps, macrophage infiltration, and necrotic cores, which together lead to plaque rupture [14]. This case report has several limitations, such as the lack of testing for inflammatory mediators such as high-sensitivity C-reactive protein and interleukin-6. The exact reason for the patient's susceptibility to plaque erosion and organized thrombosis remains unclear.
Figure 1: First Electrocardiogram after Admission.
Figure 2: First Coronary Angiogram.
Figure 6: LCX Coronary Angiography and OCT Imaging after PCI.
Figure 7: LAD OCT Imaging before PCI.
Figure 8: LAD Coronary Angiography and OCT Imaging after PCI. | 2024-01-24T18:57:27.752Z | 2024-01-01T00:00:00.000 | {
"year": 2024,
"sha1": "91bca701ddf50c1fb1ac56a15e437163d79c8544",
"oa_license": "CCBYNC",
"oa_url": "https://www.scienceopen.com/document_file/a716be42-92fd-4e79-ab89-08f9e0df6755/ScienceOpen/CVIA.2023.0081.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "48aff0e5a00b3872b0166eecacbc8a432bed8cf9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
238531548 | pes2o/s2orc | v3-fos-license | Optimizing Oil and Gas Acquisitions Using Recommender Systems
Well acquisition in the oil and gas industry can often be a hit or miss process, with a poor purchase resulting in substantial loss. Recommender systems suggest items (wells) that users (companies) are likely to buy based on past activity, and applying this system to well acquisition can increase company profits. While traditional recommender systems are impactful enough on their own, they are not optimized. This is because they ignore many of the complexities involved in human decision-making, and frequently make subpar recommendations. Using a preexisting Python implementation of a Factorization Machine results in more accurate recommendations based on a user-level ranking system. We train a Factorization Machine model on oil and gas well data that includes features such as elevation, total depth, and location. The model produces recommendations by using similarities between companies and wells, as well as their interactions. Our model has a hit rate of 0.680, reciprocal rank of 0.469, precision of 0.229, and recall of 0.463. These metrics imply that while our model is able to recommend the correct wells in a general sense, it does not match exact wells to companies via relevance. To improve the model's accuracy, future models should incorporate additional features such as the well's production data and ownership duration as these features will produce more accurate recommendations.
INTRODUCTION
The ongoing downturn in oil prices since 2014 (Stocker, 2018) has led to a surge in mergers and acquisitions among oil and gas companies to improve company finances and reduce debt.
Companies have different needs to fulfill based on production, and are looking for wells that match their specific demands. Typically, companies prioritize geographical proximity and other convenience factors over a well's long-term output. While initially producing adequate results, this approach ultimately damages the company's overall profit ("What Are the Factors for Selection of Oil and Gas Well Drill Sites?", 2011). On the contrary, recommender systems provide customized content and recommendations to companies based on the data available about them (Lee et al., 2008). We see recommender systems as a way to help companies evaluate which wells are favorable to purchase based on their characteristics. In our model, the recommender system uses information from the companies' interactions with the wells it currently owns or has owned.
From this information, the model then uses collaborative filtering to recommend wells for the company to purchase.
Collaborative filtering (CF) is a process where a recommender system relies on past behavior. The CF makes recommendations based on patterns for wells that a company has not interacted with (Ma, 2014). We use a CF approach in place of content-based filtering (CBF) because CBF requires data regarding wells' features. For example, CBF needs additional information about wells such as porosity and permeability. As a CF based approach is domain-free, it is more scalable and more accurate as it embeds both companies and wells in the same embedding space. A CF model analyzes co-occurrence patterns to determine similarities between companies and wells and will make recommendations solely based on the company's previous interactions (Ma et al., 2014).
Although the most common CF approaches use matrix factorization, this approach is not ideal for this study as we use implicit data rather than explicit data. Here, explicit data refers to given data, whereas implicit data refers to information derived from the given data (Wigmore, 2012). In the dataset, there are no negative or positive ratings about specific wells. Here, Factorization Machines are the ideal solution because they extend the latent factor approach to integrate auxiliary features and specialized loss functions that directly optimize item-rank order using implicit feedback data (Lee et al., 2008). A traditional implicit feedback alternating least squares model uses a binary preference approach that only indicates whether a company interacted with a specific well (Loni et al., 2014). Although useful, this prohibits us from using auxiliary information about wells (oil production, coordinates, length of time owned by the company, and depth of well), and binary representations can be flawed in their ability to predict a company's preference of a well simply though interaction, or lack of. Because of these limitations, we use a Factorization Machine (FM) approach.
Factorization Machines (FMs) are generic supervised learning models that map arbitrary real-valued features into a low-dimensional latent factor space. Thus, they are used for regression, ranking, or classification (Rendle, 2010). Unlike a classic recommender system model, the FM model represents company-well interactions as tuples of real-valued feature vectors and numeric target variables (Rendle, 2013). FMs are modelled via a linear combination of parameters as shown in Equation 1:

$$\hat{y}(\mathbf{x}) = w_0 + \sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n}\sum_{j=i+1}^{n} \langle \mathbf{v}_i, \mathbf{v}_j \rangle\, x_i x_j, \qquad (1)$$

where w_i are the weights for the feature vector components x_i, v_i is a k-dimensional factorized vector for feature i (so that ⟨v_i, v_j⟩ is the interaction weight between features i and j), k is the number of factors, and n is the dimensionality of the feature vectors. FMs rely on latent factor space embeddings, which serve as a compressed representation of the data in which similar data points are closer together. The feature interaction weights are the inner products of the two features' latent factor space embeddings. Optimizing all of the FM parameters is commonly done using learning-to-rank.
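A minimal numpy sketch of the FM scoring function in Equation 1 is shown below; the inclusion of a global bias term w_0 and the O(nk) rewriting of the pairwise sum follow the standard FM formulation (Rendle, 2010) and are our assumptions rather than details taken from this text.

```python
import numpy as np

def fm_score(x, w0, w, V):
    """Score one feature vector with a factorization machine.

    x : feature vector of length n (e.g. one-hot company and well indicators
        plus scaled auxiliary features).
    w0: global bias, w: per-feature weights (length n),
    V : n x k matrix of latent factors; the interaction weight for features
        i and j is the inner product <V[i], V[j]>.
    """
    linear = w0 + w @ x
    # O(n*k) identity: sum_{i<j} <v_i,v_j> x_i x_j
    #   = 0.5 * sum_f [ (sum_i v_if x_i)^2 - sum_i v_if^2 x_i^2 ]
    xv = V.T @ x
    pairwise = 0.5 * np.sum(xv ** 2 - (V.T ** 2) @ (x ** 2))
    return linear + pairwise

# Minimal usage with made-up dimensions.
rng = np.random.default_rng(0)
n, k = 6, 3
x = np.array([1, 0, 0, 1, 0, 0.7])   # e.g. company one-hot, well one-hot, scaled depth
print(fm_score(x, 0.1, rng.normal(size=n), rng.normal(size=(n, k))))
```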
Learning-to-rank (LTR) optimization techniques are much more efficient than labeling all observed company-well interactions as 1 and unobserved interactions as -1. The latter approach would create many unobserved training samples as well as imbalanced data, due to the sparseness of the observed data. LTR techniques learn rank-order directly, instead of minimizing prediction error, by training on pairs or lists of training samples. The RankFM library (Lundquist, 2020) implements Bayesian Personalized Ranking (BPR) as an LTR technique. BPR learns the correct rank-ordering of wells for each company by maximizing the posterior probability of the model parameters, given a dataset of observed company-well preferences; observed wells are assumed to be preferred over unobserved wells. It creates tuples of training samples of the form (company, observed well, unobserved well) and maximizes the function

$$p(\theta \mid >_u) \propto p(>_u \mid \theta)\, p(\theta). \qquad (2)$$

Here, p(>_u | θ) is the probability of the model's predicted well ranking for a company, and p(θ) represents the prior probability of the parameter vector of the model class. The model is learned by maximizing the joint probability that each company's observed wells are preferred over its unobserved wells. Equation 3 maps the difference between the predicted utility scores of a company's observed and unobserved wells onto [0, 1]:

$$p(i >_u j \mid \theta) = \sigma\big(f(u, i \mid \theta) - f(u, j \mid \theta)\big) = \frac{1}{1 + e^{-(f(u,i\mid\theta) - f(u,j\mid\theta))}}, \qquad (3)$$

where f(u, i | θ) and f(u, j | θ) are the company-well utility scores generated by the FM equation for the observed well i and the unobserved well j.
Combining Equations 2 and 3, along with an L2 regularization term on the model parameters, yields Equation 4:

$$\text{BPR-OPT} = \sum_{(u,i,j)} \ln \sigma\big(f(u, i \mid \theta) - f(u, j \mid \theta)\big) - \lambda_\theta \lVert \theta \rVert^2. \qquad (4)$$

Now that we have explained the equations and mechanics behind Factorization Machines, we will discuss how we applied them to an energy dataset.
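The per-triple BPR objective of Equation 4 can be sketched as a loss function as follows; the regularization weight and array layout are illustrative choices, not values from the study.

```python
import numpy as np

def bpr_loss(scores_obs, scores_unobs, params, reg=0.01):
    """Negative BPR objective for a batch of (company, observed well,
    unobserved well) triples: -sum log sigma(f_obs - f_unobs) + reg * ||theta||^2.

    scores_obs / scores_unobs: utility scores f(u, i | theta) for the observed
    and sampled unobserved wells of each triple (e.g. from fm_score above).
    params: flattened model parameters, used only for L2 regularization here.
    """
    diff = np.asarray(scores_obs) - np.asarray(scores_unobs)
    log_sigmoid = -np.logaddexp(0.0, -diff)   # log(sigma(diff)), numerically stable
    return -np.sum(log_sigmoid) + reg * np.sum(np.square(params))
```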
METHODS
With the use of collaborative filtering, the base features we use are the oil companies and wells.
Data is from the New York State Department of Conservation Oil and Gas and is depicted in Figure 1. It consists of company names, well names, and auxiliary domains: total depth, oil, gas, and water production, elevation, length in production and owner ("Data on Oil, Gas and Other Wells in New York State", n.d.). The training data contains two sub-matrices where each training sample has binary indicators for company and well interactions.
Figure 1: This map details the locations of the training wells.
We train the model on a data frame organized in the following manner by columns: well operator number (company), well API number (well). Auxiliary domains were added in the following order: well production, well elevation, and the duration a well is operated/owned by the company. To validate the predictions, we use predicted class probability distribution plots to categorize the quality of wells into two groups: desirable and undesirable. This distinction is based on the wells' position relative to a particular threshold. The threshold at which the wells are divided is set by the company and can be manually adjusted.
We evaluate the accuracy of our model with several metrics. First is hit rate, which takes the top 10 recommendations and removes one well from the company's training data each time, using the other wells as inputs into the recommender system. If the removed well is in the top 10 recommendations, then it is considered a hit (de Lichy et al., 2015). The next measurement is reciprocal rank. This metric takes the reciprocal of the rank of the actual well on the recommendation list (e.g., the 2 nd recommendation would have a reciprocal rank of ½ , or 0.5).
We also use precision to evaluate our model by calculating the percent of recommendations that were considered "relevant" to the company. Here, relevance was defined as a minimum threshold score calculated by the Factorization Machine equation (Loni, 2014). Finally, we use recall, which was calculated by the percentage of relevant wells which were in the top 10 recommendations requested (Gunawardana et al., 2009). These metrics serve as benchmarks to measure the model's overall accuracy and robustness.
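The four validation metrics can be illustrated for a single company with the toy function below; the relevance set itself is assumed to be computed elsewhere from the FM threshold scores.

```python
def ranking_metrics(recommended, relevant, held_out, k=10):
    """Toy versions of the four validation metrics for one company.

    recommended: ranked list of recommended well ids.
    relevant:    set of wells whose FM score passes the relevance threshold.
    held_out:    the single well removed from the training data, used for
                 hit rate and reciprocal rank.
    """
    topk = list(recommended)[:k]
    hit = 1.0 if held_out in topk else 0.0
    rr = 1.0 / (topk.index(held_out) + 1) if held_out in topk else 0.0
    n_rel_in_topk = len(set(topk) & set(relevant))
    precision = n_rel_in_topk / len(topk) if topk else 0.0
    recall = n_rel_in_topk / len(relevant) if relevant else 0.0
    return hit, rr, precision, recall

print(ranking_metrics(["w3", "w7", "w1"], relevant={"w7", "w9"}, held_out="w7", k=3))
# -> (1.0, 0.5, 0.333..., 0.5)
```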
RESULTS
We determined the optimal hyperparameters for our model through trial and error, by attempting various values and observing their effects on the model's fit and accuracy. The hyperparameters for the model were as follows to ensure that the model was not overfit, but still had enough data to provide an accurate score: (1) 20 factors, (2) warp loss, (3) maximum samples set to 20, (4) alpha of 0.1, (5) sigma of 0.1, (6) learning rate set to 0.1, and (7) learning schedule set to invscaling.
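A sketch of how such a model might be constructed and evaluated with the RankFM package is given below; the constructor arguments mirror the hyperparameters listed above, but the input file name, the train/test split, and the exact function signatures are assumptions that should be checked against the library's documentation.

```python
# Sketch only: assumes the rankfm package interface.
import pandas as pd
from rankfm.rankfm import RankFM
from rankfm.evaluation import hit_rate, reciprocal_rank, precision, recall

# Hypothetical input: one (operator_number, well_api_number) pair per interaction,
# with the company column first and the well column second.
interactions = pd.read_csv("company_well_interactions.csv")
train, test = interactions.iloc[:-500], interactions.iloc[-500:]   # illustrative split

model = RankFM(factors=20, loss="warp", max_samples=20, alpha=0.1,
               sigma=0.1, learning_rate=0.1, learning_schedule="invscaling")
model.fit(train, epochs=20, verbose=True)

# Top-10 well recommendations per company, excluding wells already interacted with.
top10 = model.recommend(test["operator_number"].unique(), n_items=10, filter_previous=True)

print(hit_rate(model, test, k=10), reciprocal_rank(model, test, k=10),
      precision(model, test, k=10), recall(model, test, k=10))
```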
The hit rate was 0.680, which indicates that our model is adequate at getting accurate predictions, based on comparison with other published hit rates in similar studies. In addition, our reciprocal rank was 0.469, which means that the most relevant recommendation was typically close to being the second recommendation for all companies (as ½ = 0.5, to which our value of 0.469 is relatively close).
Our precision was 0.229, suggesting that our model was relatively poor at providing relevant recommendations when asked for them. A threshold score is calculated by the Factorization Machine equation and is then used to evaluate a recommendation as relevant or not. Based on this score, only 22.9% of our recommendations were relevant to the company (Loni, 2014;Gunawardana et al., 2009).
Finally, the recall value of 0.463 suggests that, among the top 10 recommendations requested, 46.3% of the total relevant wells were recommended to the company. This indicates that our overall pool of relevant wells was low, because approximately half of the relevant wells were being recommended to the companies, but only 22.9% of the recommendations were relevant. Efforts to increase recall only marginally improve precision after a certain point, as indicated in Figure 2.
Figure 2: This precision-recall curve depicts the tradeoff between precision and recall for one class in a classification.
DISCUSSION
We found no cross-recommendations among wells from companies which is unusual, as some similarities between preferences are expected. However, validation metrics show some promise as they indicate the model is making the right recommendations, but the model needs to improve the relevance to the company. To this end, additional data could be highly useful.
The recommender system uses the companies and wells as input to identify similarities between their interactions and recommends the top 10 well recommendations. The model goes beyond cooccurrences in order to find stronger relationships between the latent representations of each feature and recommends wells for each well based on the identifiable relationships (Loni et al., 2014).
A concern among our findings is that when trying to find the most popular wells (number of times a well is recommended to operators), no well is recommended more than once. This is likely because the model is unable to recognize similar patterns of behavior across different operators.
All operators' primary objectives are to find high producing wells at an effective cost, so the assumption is that there are some similarities between the preferences of operators as it pertains to desirable well characteristics.
One source of bias is sample bias, as there is a larger amount of older data about wells rather than new wells. This does not consider technological advancements that may allow for longer wellholding timespans. One way to fix this issue would be to weight more recent observations such as operator ownership duration. Additionally, using only the company and wells can also lead to association bias as the model may weigh any ownership of the well as the same and may not recognize that operators may have made mistakes by purchasing or not purchasing certain wells.
Due to the size of the dataset, we do not believe there are any values disproportionately affecting the error metrics but rather the model's implicit interpretation of the data. Because of this, length of time and production will be crucial additions to minimize the bias in the model and also add more relevant recommendations.
In Figure 3, the scores calculated by the Factorization Machine formula are used to identify a well as desirable or undesirable. As a test, the median score is used, and the class distribution plot shows that the data is left-skewed. However, the FM model identifies whether a well is desirable or not based on these scores and there is not much overlap. Other thresholds and scores can possibly be investigated as potential binary classifiers for a desirable well.
Figure 3: This class separation plot depicts the desirability of wells within our dataset based on a median threshold.
CONCLUSION
The application of Factorization Machines for well acquisition is relevant due to its unique ability to provide enhanced, context-based recommendations to oil companies. Factorization Machines' strength lies in their efficiency relative to similar models. Faster processing allows them to be applied to real-world situations such as the oil and gas industry-a capability that other models lack. This model performs reasonably well in the task of making relevant well recommendations to companies. With a hit rate of 0.680 and a reciprocal rank of 0.469, the model can make good predictions over multiple attempts. However, with a poor precision of 0.229 and a recall of 0.463, the model seems to have a low pool of relevant recommendations and cannot accurately recommend the correct wells (de Lichy et al., 2015;Loni et al., 2014). This implies that our model manages to generate an accurate pool of recommendations for a general set of companies but is unable to precisely match these recommendations on a one-to-one basis with any given companies.
Additional features such as the well's production data and length of time it has been owned by any given company can help overcome this deficiency. Thus, future works should focus on integrating these features into the recommender system and increasing the recommendation relevance to companies. | 2021-10-11T01:15:58.817Z | 2021-10-07T00:00:00.000 | {
"year": 2021,
"sha1": "cddb88a7c5a4799ae3d74b70b683206bad36c44c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "cddb88a7c5a4799ae3d74b70b683206bad36c44c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
59130627 | pes2o/s2orc | v3-fos-license | Nebular and Stellar Dust Extinction Across the Disk of Emission-Line Galaxies on Small (KPC) Scales
We investigate resolved kpc-scale stellar and nebular dust distribution in eight star-forming galaxies at z~0.4 in the GOODS fields. Constructing the observed Spectral Energy Distributions (SEDs) per pixel, based on seven bands photometric data from HST/ACS and WFC3, we performed pixel-by-pixel SED fits to population synthesis models and estimated small-scale distribution of stellar dust extinction. We use Halpha / Hbeta nebular emission line ratios from Keck/DEIMOS high resolution spectra at each spatial resolution element to measure the amount of attenuation faced by ionized gas at different radii from the center of galaxies. We find a good agreement between the integrated and median of resolved color excess measurements in our galaxies. The ratio of integrated nebular to stellar dust extinction is always greater than unity, but does not show any trend with stellar mass or star formation rate. We find that inclination plays an important role in the variation of the nebular to stellar excess ratio. The stellar color excess profiles are found to have higher values at the center compared to outer parts of the disk. However, for lower mass galaxies, a similar trend is not found for the nebular color excess. We find that the nebular color excess increases with stellar mass surface density. This explains the absence of radial trend in the nebular color excess in lower mass galaxies which lack a large radial variation of stellar mass surface density. Using standard conversions of star formation rate surface density to gas mass surface density, and the relation between dust mass surface density and color excess, we find no significant variation in the dust to gas ratio in regions with high gas mass surface densities, over the scales probed in this study.
INTRODUCTION
The existence of interstellar dust was suggested about a century ago, even before the great debate about the nature of galaxies (Curtis 1918). The presence of dust in galaxies was later established firmly, not only from dimming and reddening of light but also from dust scattering and Far Infrared (FIR) continuum emission. Dust can absorb more than half of the Ultraviolet (UV) and optical radiation budget of the Universe (e.g. Calzetti 2001). Most of the radiation from star formation in galaxies is emitted in the UV and optical, the wavelength range most susceptible to dust extinction and where most of the observations are taken. Therefore, without a thorough understanding of the attenuation of light of galaxies (coming from both stars and nebulae) by dust, interpretation of their physical properties (e.g. star formation rate, stellar age and stellar mass-to-light ratio) will not be accurate.
A lot of progress has been made in our understanding of the attenuation of light by dust. It is now well established that the amount of extinction in galaxies is wavelength dependent. Variation of dust extinction as a function of wavelength, or the so called extinction curve, is studied from different observations of nearby galaxies and the Milky Way (e.g. Prevot et al. 1984, Cardelli et al. 1989, Calzetti et al. 1994, Gordon et al. 2003). In the extinction curve, there is information about chemical composition and sizes of dust grains. The smoothness of the FIR part of the extinction curve suggests that a variety of grain sizes exist. While the overall shape of the extinction curves measured from these local galaxies agrees, especially towards the infrared, there are some differences in the slopes, the normalizations, and the presence of the 2175Å bump (e.g. Stecher 1965, Calzetti 2001, Scoville et al. 2015. These differences are attributed to different metallicities and dust-to-gas ratios as well as differences in the composition of the dust grains (e.g. Calzetti et al. 1994. Local extinction and attenuation curves are often used to correct for dust attenuation at intermediate and high redshifts. Recently, the study of dust attenuation at higher redshifts has become accessible by infrared surveys. Studies at z ∼ 1 − 2 (e.g. Scoville et al. 2015 have found attenuation curves very similar to the commonly used Calzetti et al. (2000) attenuation curve derived from nearby galaxies. However, other studies reported poor fits from nearby curves to high redshift galaxies and found strong spectral dependence in the attenuation curve in their sample (e.g. Kriek & Conroy 2013), suggesting different star-dust geometry, dust grain properties or both.
Comparison of attenuation towards nebular star forming regions inside galaxies with that of the stellar continuum hints about the geometry of dust relative to stars. Studies of local galaxies have found higher attenuation towards the nebular regions compared to the integrated dust from stellar continuum (e.g. Calzetti et al. 2000, Moustakas & Kennicutt 2006, Wild et al. 2011, Kreckel et al. 2013. This is consistent with the picture from the radiative transfer model by Charlot & Fall (2000) in which recombination lines generated in the HII regions, the birth clouds of the most massive O stars, face an extra amount of attenuation by dust. However, at higher redshifts the picture is not as clear. By comparing star formation rates from different diagnostics, it was found that (e.g. Erb et al. 2006, Reddy et al. 2010, Garn & Best 2010, Shivaei et al. 2015, Oteo et al. 2015 the best agreement between the SFRs is met with no extra color excess towards the nebular regions. Other studies have found quite the opposite, with higher attenuation needed towards the nebular regions (e.g. Förster Schreiber et al. 2009, Wuyts et al. 2011, Ly et al. 2012Price et al. 2014). Using a large sample of emission line galaxies from the MOSFIRE Deep Evolution Field (MOSDEF) survey , Reddy et al. (2015) demonstrated that the ratio of nebular to stellar attenuation is a function of the star formation and specific star formation rates of galaxies. However, there is a huge scatter in the relation. A detailed analysis of spatially resolved colors is needed to understand the source of this scatter.
The amount of nebular attenuation by dust is also shown to correlate with physical properties of galaxies, such as luminosity, Star Formation Rate (SFR), mass and metallicity (e.g. Wang & Heckman 1996, Sullivan et al. 2001, Pannella et al. 2009, Asari et al. 2007). Garn & Best (2010) investigated these different dependencies and found stellar mass to be the best parameter predicting the amount of dust extinction for z ∼ 0.1 galaxies from the Sloan Digital Sky Survey (SDSS DR7; Abazajian et al. 2009). This result was later confirmed by studies at high redshift galaxies (e.g. Sobral et al. 2012, Domínguez et al. 2013, Ibar et al. 2013. It is however not clear whether these relations hold at smaller scales inside galaxies. High spatial resolution observations of galaxies in the local Universe enabled the derivation of well calibrated relations among the physical properties of galaxies (such as stellar mass, dust mass, or SFR) at z ∼ 0 (e.g. Calzetti et al. 2000, Kennicutt et al. 2007, Kreckel et al. 2013, Boquien et al. 2011. However until recently, at intermediate and high redshifts, studies of resolved (kpcscale) properties of galaxies was not possible. High resolution multi-waveband surveys using the Hubble Space Telescope such as Cosmic Assembly Near-infrared Deep Legacy Survey (CANDELS; Grogin et al. 2011, Koekemoer et al. 2011) have enabled such studies through optical and near-infrared photometric observations to z ∼ 2 (e.g. Wuyts et al. 2012, Hemmati et al. 2014, Guo et al. 2015. In these studies, resolved physical properties of galaxies were measured by fitting the observed Spectral Energy Distribution (SED) at each pixel to the theoretical stellar synthesis models. Therefore, the uncertainties inherent to SED fitting will be inevitable using only photometric data. Spectroscopic data from grism obser-vations by surveys such as 3D-HST have improved measurements of resolved physical properties (such as SFR surface densities).
Using a sample of massive galaxies at z 1 in the CAN-DELS and 3D-HST, Wuyts et al. (2013) showed that an extra amount of attenuation is needed for nebular gas for the integrated Hα SFR to agree with the SED inferred SFR. However, one caveat of such studies is the poor spectral resolution of grism observations, which can not resolve the NII and Hα emission lines. The contribution of the NII to the Hα+NII emission depends on the ionization radiation and metallicity of the galaxy and hence a constant ratio assumption could affect the inferred SFRs measured. The advent of Adaptive Optics (AO) aided Integral Field Spectrographs (IFS), enabled spatially resolved spectroscopic observations of intermediate redshift galaxies (e.g. Förster Schreiber et al. 2009, Genzel et al. 2010, Swinbank et al. 2012. While obtaining statistical samples of galaxies with IFS has become possible over the last years (e.g. Sobral et al. 2013, Wisnioski et al. 2015, the same is not yet true for AO aided high resolution IFS observations. In this work, we combine high resolution photometric data from CANDELS with complementary high spatial and spectral resolution spectroscopic data from Keck/DEIMOS observations to study dust distribution in a sample of emission-line galaxies. We demonstrate the usefulness of the technique on a small sample. We measure the stellar continuum and ionized gas dust extinction along the major axis of disk galaxies. Stellar Continuum and the ionized gas extinctions are measured from resolved SED-fitting per pixel (Hemmati et al. 2014) and the Balmer decrements from Keck/DEIMOS spectra, respectively. We investigate how integrated dust measurements in galaxies compare to the spatial variation of dust along the disks.
The structure of this paper is as follows. §2 presents the sample selection and data. In §3 and §4 we describe measurements from photometric and spectroscopic data, respectively. We present our results in §5 and in §6 we finish with a summary and discussion. Throughout this paper all magnitudes are expressed in AB system (Oke & Gunn 1983) and we use standard cosmology with H 0 = 70 kms −1 Mpc −1 , Ω M = 0.3 and Ω Λ = 0.7.
SAMPLE AND DATA
The sample for this study consists of eight disky galaxies at z ∼ 0.4 with median stellar mass of log(M * /M ) = 9.6 and median SFR of SFR(M /yr) = 10.0. These galaxies are selected from a parent sample described in detail in Bundy et al. (2005) and Miller et al. (2011). Here, we present the selection criteria for the sample used in this study.
Galaxies in the parent sample were initially selected from a Z_F850LP < 22.5 sample in the Great Observatories Origins Deep Survey (GOODS) North and South fields (Giavalisco et al. 2004) in the redshift (spectroscopic when available) range of 0.2 < z < 1.3 and were visually inspected to have prominent disks. A further magnitude cut of K_s < 22.2 was applied to ensure reliable stellar mass measurements. We then observed these galaxies with DEIMOS (Faber et al. 2003); gratings centered near 7500Å were used, achieving a FWHM spectral resolution of ∼1.7Å. The PAs of the slits are closely aligned with the galaxies' major axes, in order to measure rotation curves for the galaxies in the parent sample (Miller et al. 2011).
Out of the galaxies with extended line emission, only eight have both Hα and Hβ covered in the DEIMOS spectra without contamination by OH sky emission lines. These galaxies are selected for the analysis of this pilot study. The photometric data in this study come from high resolution Hubble Space Telescope (HST) optical and near-infrared images taken by the Advanced Camera for Surveys (ACS) and Wide Field Camera 3 (WFC3) as part of CANDELS. We use HST/ACS observations in the F435W, F606W, F775W and F850LP filters (hereafter B_F435W, V_F606W, I_F775W and Z_F850LP) and HST/WFC3 observations in the F105W, F125W and F160W filters (hereafter Y_F105W, J_F125W and H_F160W). The ACS images have been multi-drizzled to the WFC3 pixel scale of 0.″06 (Koekemoer et al. 2011). Figure 1 shows the stacked HST (B_F435W, Z_F850LP and H_F160W) images as well as cutouts of the Hα and Hβ emission lines from the DEIMOS 2D spectra of all galaxies in the sample.
PHOTOMETRIC MEASUREMENTS
Using the photometric data, we have made 2D maps of physical parameters of galaxies (such as stellar mass and star formation rate surface density, age and extinction) in the sample by measuring the observed SED for individual pixels and fitting them to template SEDs. The method and the uncertainties in the estimated parameters are discussed in Hemmati et al. (2014). Here, we briefly explain the method for producing 2D maps. We use the 2D maps to produce 1D profiles along the disk of the galaxy. These profiles will then be used to directly combine/compare with spectroscopic data from DEIMOS.
We first make 80 × 80 pixel cutouts of the HST science and rms error images in the seven bands. We PSF-match them to the resolution of the H_F160W band. By multiplying the segmentation maps from SExtractor (Bertin & Arnouts 1996) by the PSF-matched cutouts we define the boundary of the galaxy and remove any surrounding objects. Using the LePhare code (Arnouts et al. 1999; Ilbert et al. 2006), we then fit the SED measured for each pixel with spectral synthesis models to obtain the physical properties at that pixel (the redshift is fixed to the spectroscopic redshift of the galaxy).
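As a rough illustration of this per-pixel step, the following sketch (plain Python/NumPy; the file names, band list and object-selection pixel are placeholders rather than the actual data products, and the PSF matching and the LePhare fit itself are assumed to be done elsewhere) assembles segmentation-masked, PSF-matched cutouts into one seven-band SED per pixel:

import numpy as np
from astropy.io import fits

# hypothetical file naming; the cutouts are assumed to be already PSF-matched to F160W
bands = ["F435W", "F606W", "F775W", "F850LP", "F105W", "F125W", "F160W"]

def load_cutout(band):
    sci = fits.getdata(f"galaxy_{band}_sci.fits")   # 80x80 pixel science cutout
    rms = fits.getdata(f"galaxy_{band}_rms.fits")   # matching rms error cutout
    return sci, rms

seg = fits.getdata("galaxy_segmentation.fits")       # SExtractor segmentation map
mask = seg == seg[40, 40]                            # keep only pixels of the central object

sci_stack = np.zeros((len(bands), 80, 80))
rms_stack = np.zeros_like(sci_stack)
for i, band in enumerate(bands):
    sci, rms = load_cutout(band)
    sci_stack[i] = np.where(mask, sci, 0.0)          # zero out neighbouring objects
    rms_stack[i] = np.where(mask, rms, np.inf)

# one SED (seven fluxes plus errors) per pixel inside the segmentation mask;
# each of these would then be passed to LePhare with the redshift held fixed
ys, xs = np.where(mask)
pixel_seds = [(sci_stack[:, y, x], rms_stack[:, y, x]) for y, x in zip(ys, xs)]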
The model library is built using BC03 (Bruzual & Charlot 2003) models, a Chabrier (Chabrier 2003) Initial Mass Function (IMF), solar metallicity, declining star formation histories with a range of τ (including constant and bursts) and ages less than the age of the Universe at the redshift of the galaxy in question. We use the Calzetti starburst (Calzetti et al. 2000) attenuation curve with a range of E(B−V) from zero to one. We also include nebular emission lines in the fitting procedure. The LePhare code accounts for the contribution of emission lines with a simple recipe based on the Kennicutt relations (Kennicutt 1998; Ilbert et al. 2008). The intensity of the lines is scaled according to the intrinsic UV luminosity of the galaxy. The 2D maps of physical parameters from the SED fitting output correspond to the median of the probability distribution function marginalized over all other parameters. We use the 16% lower and higher values from the Maximum Likelihood analysis to measure the 1σ error for each parameter. Using the same library and code, we also measure integrated properties of galaxies by fitting the integrated light of all the pixels in the defined boundary of the galaxy in the same seven ACS and WFC3 bands. Table 1 (Integrated Properties of Galaxies in the Sample) summarizes the properties of the host galaxies. To be able to compare these measurements with their spectroscopic counterparts from Keck/DEIMOS, we need to convert the 2D maps to 1D profiles. We combine measurements of all pixels which correspond to one spatial resolution element in the DEIMOS spectra. In making the profiles, we only include pixels that are covered by the DEIMOS slit. This is to avoid uncertain slit-loss corrections in the spectral measurements. Therefore, in the direction perpendicular to the major axis of the galaxy, we bin all pixels in the slit, and bin again in the direction of the major axis of the galaxy to match the spatial pixel scale of DEIMOS (0.″12 per pixel). Figure 2 shows the 2D maps and 1D profiles of stellar mass, SFR and E(B−V) for one of the galaxies in the sample.
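A minimal sketch of the 2D-map-to-1D-profile conversion described above, for an extensive quantity such as stellar mass (the 0.″06 and 0.″12 pixel scales are taken from the text; the rotation of the map so that the major axis runs along one array axis, and the choice to average rather than sum intensive quantities such as E(B−V), are assumptions of this illustration):

import numpy as np

hst_scale, deimos_scale = 0.06, 0.12                   # arcsec per pixel (values from the text)
bin_factor = int(round(deimos_scale / hst_scale))      # two HST pixels per DEIMOS spatial pixel

def profile_along_slit(map2d, slit_mask):
    # map2d: 2D parameter map (e.g. stellar mass per pixel), rotated so the major axis is axis 1
    # slit_mask: boolean map of the pixels covered by the DEIMOS slit
    masked = np.where(slit_mask, map2d, 0.0)
    collapsed = masked.sum(axis=0)                     # bin across the slit (perpendicular direction)
    n = (collapsed.size // bin_factor) * bin_factor
    return collapsed[:n].reshape(-1, bin_factor).sum(axis=1)   # rebin along the major axis

# e.g. mass_profile = profile_along_slit(stellar_mass_map, slit_mask);
# intensive quantities such as E(B-V) would be averaged rather than summed.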
SPECTRAL MEASUREMENTS
We measure nebular dust attenuation along the disk of the galaxies using the ratio of the first two Balmer transitions (Hα and Hβ), the so-called Balmer decrement. The ratio of the luminosities of Balmer transitions arising from HII regions around very massive stars is known to be the most practical indicator of attenuation by dust in these regions. In the absence of dust, the intrinsic theoretical Hα to Hβ line ratio, assuming Case B recombination, a temperature of T = 10⁴ K, and an electron density of n_e = 10² cm⁻³, is known (F(Hα)/F(Hβ) = 2.86; Osterbrock 1989). Any deviation from this value is indicative of the amount of dust attenuation. We model the line emission at each spatial resolution element of the continuum-subtracted DEIMOS spectra. We convert the Hα to Hβ line ratio at each spatial resolution element to a color excess using:

E(B − V)_nebular = [2.5 / (k(Hβ) − k(Hα))] × log₁₀[(Hα/Hβ) / 2.86]    (1)

We use the Cardelli Galactic extinction curve (Cardelli et al. 1989) for measuring k(Hβ) and k(Hα). A line-of-sight attenuation curve such as Cardelli is more appropriate for the recombination emission of compact HII regions than more extended stellar continuum attenuation curves such as Calzetti et al. (2000). The latter would yield smaller E(B − V)_nebular values by a factor of 0.9 (see Reddy et al. 2015 for a more detailed discussion).
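A minimal sketch of this conversion and its error propagation, assuming approximate Cardelli-curve coefficients k(Hα) ≈ 2.53 and k(Hβ) ≈ 3.61 (the exact values depend on the adopted curve and R_V):

import numpy as np

K_HA, K_HB = 2.53, 3.61        # approximate Cardelli et al. (1989) values at Halpha and Hbeta
INTRINSIC_RATIO = 2.86         # Case B, T = 1e4 K, n_e = 1e2 cm^-3

def ebv_nebular(f_ha, f_hb):
    # nebular colour excess from the observed Balmer decrement (equation 1)
    ratio = np.asarray(f_ha, dtype=float) / np.asarray(f_hb, dtype=float)
    return 2.5 / (K_HB - K_HA) * np.log10(ratio / INTRINSIC_RATIO)

def ebv_uncertainty(f_ha, sig_ha, f_hb, sig_hb):
    # standard propagation of the Halpha and Hbeta flux uncertainties
    rel = np.sqrt((sig_ha / f_ha) ** 2 + (sig_hb / f_hb) ** 2)
    return 2.5 / (np.log(10.0) * (K_HB - K_HA)) * rel

# e.g. ebv_nebular(3.8, 1.0) is about 0.29 mag for an observed decrement of 3.8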
We model and subtract the continuum by first making cutouts of ∼ 100 Å around the emission line (Hα or Hβ in this case). We mask the wavelength range (∼ 15 − 20 Å) where the emission falls in the cutout and bin the rest of the spectrum in the wavelength direction with bins of ∼ 5 Å. The median values in each wavelength bin are then fitted with a Gaussian function in the spatial direction. We estimate the continuum under the emission by interpolating these values over the masked wavelength range and subtract this continuum from the cutout spectra to obtain the continuum-free emission line.
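A simplified sketch of this continuum modelling (the masking half-width, bin width and Gaussian initial guesses are placeholder choices; the paper's exact settings may differ):

import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def subtract_continuum(spec2d, wave, line_wave, mask_half=10.0, bin_width=5.0):
    # spec2d: (n_spatial, n_wave) cutout of ~100 A around the line; wave: wavelengths in A
    y = np.arange(spec2d.shape[0])
    line_mask = np.abs(wave - line_wave) < mask_half          # exclude the emission itself
    edges = np.arange(wave.min(), wave.max() + bin_width, bin_width)
    bin_waves, bin_profiles = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (wave >= lo) & (wave < hi) & ~line_mask
        if sel.sum() < 3:
            continue
        prof = np.median(spec2d[:, sel], axis=1)              # median spatial profile of this bin
        try:                                                   # smooth it with a Gaussian fit
            popt, _ = curve_fit(gaussian, y, prof, p0=[prof.max(), y.mean(), 3.0])
            prof = gaussian(y, *popt)
        except RuntimeError:
            pass
        bin_waves.append(0.5 * (lo + hi))
        bin_profiles.append(prof)
    bin_profiles = np.array(bin_profiles)                      # shape (n_bins, n_spatial)
    continuum = np.empty(spec2d.shape)
    for i in range(spec2d.shape[0]):                           # interpolate per spatial row
        continuum[i] = np.interp(wave, bin_waves, bin_profiles[:, i])
    return spec2d - continuum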
To model the line emission at each spatial resolution element in the galaxy, we make small cutouts covering only the emission from the continuum-subtracted spectra. To be able to trace the emission to the furthest points from the center, and to avoid tracing and fitting the noise in the background, we use the Otsu method (Otsu 1979). The Otsu method is an unsupervised automatic thresholding algorithm which reduces a grey-level image to a binary image by finding an optimum threshold that maximizes the separability of the two classes. Here, using this method we separate the emission from the noise in the background by multiplying the binary image by the continuum-subtracted cutout. We fit Gaussian functions to the emission at each spatial resolution element (in the wavelength/velocity direction) to measure the total flux from that resolution element. In Figure 3, we show Hα and Hβ cutouts as well as the thresholded cutout and traced emission for one example galaxy. We have centered the emission lines using the spectroscopic redshift, which comes from visually aligning all the emission lines. However, in order to measure the ratio of two lines we need to be more precise in centering the emission. We center each emission line by fitting the peaks of the modeled Gaussians with an arctangent function (in the spatial direction); this takes care of velocity offsets due to rotation in the disk. We measure a correction factor from standard stars to account for flux calibration. This is an essential step even though we are only using the ratio of emission lines, because of the large difference between the wavelengths of Hα and Hβ and the wavelength dependence of the flux calibration. This is a factor of ∼ 1.2 − 1.5 from z = 0.32 − 0.47. The Balmer decrement is then measured along the major axis of galaxies by setting a minimum signal-to-noise ratio (S/N) of 5 and 3 at each spatial resolution element for the Hα and Hβ lines, respectively, and then dividing the two. The nebular color excess (E(B − V)_nebular) along the disk is then calculated via the conversion in equation 1. The uncertainty in this ratio is calculated by the standard error propagation method using the Hα and Hβ flux uncertainties at each spatial position.

Figure 4. Integrated nebular color excess of galaxies in the sample measured through the Balmer decrement compared to their stellar color excess measured through SED fitting. Nebular to stellar color excess ratios of 1 and 2.27 (Calzetti et al. 2000) are shown with dashed and dotted-dashed lines, respectively. Galaxies are color coded based on their sSFR and symbol size increases with stellar mass.
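A minimal sketch of the line tracing and flux measurement described above (Otsu thresholding plus row-by-row Gaussian fits); the S/N cuts, flux-calibration factor and arctangent re-centering are left out for brevity:

import numpy as np
from scipy.optimize import curve_fit
from skimage.filters import threshold_otsu

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def line_fluxes(line2d):
    # line2d: continuum-subtracted cutout around one line, shape (n_spatial, n_wave)
    thresh = threshold_otsu(line2d)                    # separate emission from background noise
    emission = np.where(line2d > thresh, line2d, 0.0)  # multiply the binary image into the cutout
    x = np.arange(line2d.shape[1])
    fluxes = np.full(line2d.shape[0], np.nan)
    for i, row in enumerate(emission):
        if row.max() <= 0:
            continue                                   # no emission detected at this spatial position
        try:
            popt, _ = curve_fit(gaussian, x, row, p0=[row.max(), x[row.argmax()], 2.0])
        except RuntimeError:
            continue
        amp, _, sigma = popt
        fluxes[i] = amp * abs(sigma) * np.sqrt(2.0 * np.pi)   # integrated Gaussian flux (pixel units)
    return fluxes

# the Balmer decrement per spatial element is then line_fluxes(halpha) / line_fluxes(hbeta),
# after the S/N cuts and the flux-calibration correction described in the text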
In the resolved line measurements, we did not correct for Balmer absorption because the corrections are very small and well within the uncertainty of the line measurements. Extinction uncertainties range from 0.03 to 0.2 magnitudes, and their variation is larger between galaxies than within a single galaxy. We also measure the integrated or "global" Balmer decrement and nebular color excess values for the whole galaxy by extracting the 1D spectra and measuring line fluxes. We correct our measurements for the underlying Balmer absorption using the stellar population model fits. The corrections to the Hα to Hβ ratio are 4%. For simplicity, we refer to our dust measurements at each spatial distance from the center as resolved dust measurements.

Figure 5. Nebular and stellar color excess profiles (top panels) and nebular to stellar color excess ratios (bottom panels) along the major axis of galaxies in our sample, shown with blue circles and red triangles, respectively, as a function of distance from the center of galaxies. Blue and red solid lines show the nebular and stellar integrated color excess measured in each galaxy and the blue shaded region is the 1σ uncertainty in the integrated nebular color excess.
Color Excess, Stellar Continuum vs. Ionized Gas
The ratio of the nebular to stellar dust extinction has been studied extensively over the past decade. Studies of local star forming galaxies (e.g. Calzetti et al. 2000;Wild et al. 2011) have found larger attenuation towards the nebular regions compared to the stellar continuum. However, in almost all of these works, there is a large scatter in the nebular vs. stellar color excess relation. Recent works have attributed the scatter in this relation, seen in samples of higher redshift star-forming galaxies, to physical properties of galaxies, specifically their stellar mass or sSFR.
In Figure 4, we compare the integrated nebular and stellar color excesses of galaxies in our sample. We have color-coded the galaxies based on their sSFR, and the symbol sizes increase with stellar mass. All our galaxies sit above the 1 : 1 (dashed line) nebular to stellar color excess ratio, which means there is a larger amount of nebular extinction compared to the continuum. While we do not find any clear trend in this ratio with either sSFR or stellar mass, the sample size is too small to draw any strong conclusions.
We now compare resolved stellar and nebular attenuation measures along the disks of the galaxies (as explained in the previous section). Plotted in Figure 5 are the nebular (blue circles) and stellar (red triangles) color excesses as a function of distance (in kpc) from the center of each galaxy, as well as the global measurement for each galaxy (solid lines). The nebular measurements at individual resolution elements agree with the globally measured value for each galaxy. The difference between the median of the measured points and the integrated value in each galaxy ranges from zero to a maximum of 0.1 magnitude (well within the uncertainty of the integrated value of each galaxy). The small offset towards higher median values might be due to the S/N criteria, which affect the spectra of the outer parts of the disk more than the central parts. The uncertainties in the measured values become considerably smaller in galaxies with larger stellar masses (the right panels). Almost all the galaxies show higher stellar continuum extinction in the central regions compared to the outer parts of the disk. This differs from the nebular color excess of the lower mass galaxies (ID = 1 − 4), which does not vary significantly as a function of distance from the center. More massive galaxies (ID = 5 − 8), however, show higher nebular extinction towards the central parts, similar to the stellar continuum color excesses.
As shown in Figure 4, the nebular to stellar color excess ratio varies from galaxy to galaxy. By examining ID = 5 and ID = 7, two galaxies with the most extended emission, the highest S/N, exactly the same redshift and comparable stellar masses, we see very different ionized-to-continuum color excess ratios. Between the two galaxies, the one with the higher sSFR (ID = 7) has the smaller color excess ratio, consistent with the findings of Price et al. (2014) for higher redshift galaxies but inconsistent with Reddy et al. (2015) and the framework depicted by radiative transfer models (e.g. Charlot & Fall 2000).
An important factor that might be playing a role is the difference in the inclinations of these two galaxies. Inclination affects the observed physical properties of galaxies and in particular their surface brightness (e.g. Holmberg 1958). Many studies have used inclination and surface brightness to measure the amount of extinction in disky galaxies (e.g. Giovanelli et al. 1995, Peletier & Willner 1992), knowing that, at the same surface brightness, the more inclined galaxy suffers more from dust extinction. A more recent study by Yip et al. (2010) confirmed the result of higher stellar extinction in more inclined galaxies using a large sample of local SDSS disk galaxies. More interestingly, this study showed that the Balmer decrement stays constant with inclination. If true, this can explain some of the dispersion in ionized vs. stellar dust extinction studies. In the case of this study, if the more inclined galaxy (ID = 7) were face-on, it would have had less stellar continuum color excess and therefore a larger ratio of ionized to stellar color excess, similar to the other galaxy (ID = 5).
We do not find any significant correlation between the nebular to stellar color excess ratio and distance from the center of the galaxies (see bottom panels of Figure 5), except for the most massive galaxy (ID = 8), in which this ratio decreases with distance from the center. This is because the resolution elements in which the color excess is measured are still significantly larger than the sizes of the individual (sub-kpc) clouds producing the recombination lines. However, future telescopes such as the Thirty Meter Telescope (TMT), equipped with integral field unit (IFU) technology and assisted by adaptive optics (AO) systems, will be able to provide us with a wealth of information on these clouds at these redshifts.
Variation of Color Excess with Stellar Mass Surface Density
The typical dust extinction of a galaxy has been shown to depend upon different properties of the galaxy, most fundamentally on its stellar mass (e.g. Garn & Best 2010, Ibar et al. 2013). Here, we investigate the variation of color excess in galaxies as a function of the stellar mass surface density at each resolution element along the major axis of the disks. Figure 6 shows the color excess of the ionized gas as a function of stellar mass surface density for two of the galaxies. The data points are color-coded based on their distance from the center, and the typical error bars are shown in the top-left corner of each panel. The stellar mass surface density maps from the resolved SED fitting are shown in the lower right part of each plot, and overplotted with dashed black lines are the DEIMOS slit position and coverage. It is important to note that while the size of the disk's major axis is about 20 kpc in these galaxies, the extent of the emission lines is only about 12 kpc. This can be either due to a lack of strong emission at large distances in these galaxies, or due partly to the over-subtraction of the background at larger radii in the DEIMOS reduction pipeline.
It is clear from Figure 6 that the color excess of the ionized gas increases symmetrically from the outskirts towards the bulge of the galaxies, similar to the stellar mass. Putting all the resolution elements of all galaxies in the sample together on the color excess vs. stellar mass surface density plot in Figure 7, we see an overall increase of the median color excess in bins of stellar mass surface density.
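A short sketch of the binned-median construction used for Figure 7 (the bin width and minimum number of points per bin are placeholder choices):

import numpy as np

def binned_median(log_sigma_mass, ebv, bin_width=0.25, min_points=3):
    # median nebular E(B-V) in bins of log stellar mass surface density,
    # pooling the resolution elements of all galaxies (as in Figure 7)
    log_sigma_mass = np.asarray(log_sigma_mass, dtype=float)
    ebv = np.asarray(ebv, dtype=float)
    edges = np.arange(np.nanmin(log_sigma_mass), np.nanmax(log_sigma_mass) + bin_width, bin_width)
    centers, medians = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (log_sigma_mass >= lo) & (log_sigma_mass < hi) & np.isfinite(ebv)
        if sel.sum() < min_points:
            continue
        centers.append(0.5 * (lo + hi))
        medians.append(np.median(ebv[sel]))
    return np.array(centers), np.array(medians)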
Dust to Gas Ratio
A tight correlation exists between the mean surface density of cold gas and the average SFR per unit area (the so-called SFR law) on global scales (e.g. Kennicutt 1998) as well as on resolved kpc-scales (Kennicutt et al. 2007). This relation (the "KS relation") is parameterized using a power law introduced by Schmidt (1963).

Figure 6. Variation of nebular color excess as a function of stellar mass surface density in two of the galaxies, at the same redshift and with the same stellar mass. The stellar mass surface density maps are plotted in the corner of each plot with the DEIMOS slit coverage overplotted. The measurements at each resolution element in the disk are color coded based on their distance from the center (distance from the center decreases going from blue to red). The average uncertainty in the measurements is shown at the top left of each panel.

Figure 7. Variation of color excess as a function of stellar mass surface density in all resolution elements of the galaxies in the sample. Different colors correspond to different galaxies. There is an overall increasing trend of color excess with increasing stellar mass surface density, clear from the median represented with blue stars.
Here, we convert our SFR surface density (Σ_SFR; in units of M⊙ yr⁻¹ kpc⁻²) measurements to total gas (molecular and atomic) surface density (Σ_H; in units of M⊙ pc⁻²), using the power-law relation derived by Kennicutt et al. (2007) from observations of the nearby spiral galaxy M51a. We note that as the gas surface densities in higher redshift star-forming galaxies are larger compared to M51, using the KS relation is an extrapolation when dealing with high gas surface densities. There are only limited observations of local galaxies with high gas surface densities, and they show evidence of a shift in the KS relation at higher redshifts (e.g. Hodge et al. 2015). Figure 8 shows the variation of nebular color excess per resolution element along the major axis of galaxies in our sample as a function of gas surface density. There is an increasing trend of color excess with gas surface density at log Σ_H ≳ 2.5. We formalize this relation by fitting a third-order polynomial (equation 3) to E(B − V)_nebular versus x ≡ log Σ_H (shown in Figure 8 with the magenta dashed line). It is important to note again that the sample size is small and we might be missing a population of galaxies that could alter this fit. In Figure 8, we overplot the relation of nebular color excess as a function of dust mass surface density (multiplied by a factor of one hundred) derived by Kreckel et al. (2013) from resolved FIR (Herschel and Spitzer) and optical integral field observations of a sample of eight nearby disk galaxies. The nebular color excess versus dust mass surface density relation introduced by Kreckel et al. (2013) (solid cyan line in Figure 8) is well bracketed by the two extremes of dust geometry discussed in Calzetti et al. (1994) (a uniform foreground dust screen model and a mixed media model), suggesting a combination of the two effects. The shape of the color excess versus dust mass surface density relation resembles that with gas surface density at higher gas surface densities. This implies that the change in the dust to gas ratio is insignificant in these regions with different amounts of extinction, while this is not the case at lower gas mass surface densities. Many previous studies have assumed a fixed dust to gas ratio in galaxies (e.g. Leroy et al. 2011, Sandstrom et al. 2013); this result supports the validity of that assumption at high gas mass surface densities. The difference at lower gas mass surface densities, however, is suggestive of a variable dust to gas ratio.
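A hedged sketch of the Σ_SFR-to-Σ_gas conversion and the polynomial fit: the power-law coefficients below are Kennicutt (1998)-style global values used only as placeholders, not the Kennicutt et al. (2007) M51a calibration actually adopted in the text, whose coefficients are not reproduced here:

import numpy as np

# placeholder Kennicutt (1998)-style power law Sigma_SFR = A * Sigma_gas**N
A, N = 2.5e-4, 1.4

def gas_surface_density(sigma_sfr):
    # invert the KS power law: Sigma_SFR [Msun/yr/kpc^2] -> Sigma_gas [Msun/pc^2]
    return (np.asarray(sigma_sfr, dtype=float) / A) ** (1.0 / N)

def fit_ebv_vs_loggas(log_sigma_gas, ebv_nebular, order=3):
    # third-order polynomial fit of E(B-V)_nebular versus log Sigma_gas (cf. equation 3)
    log_sigma_gas = np.asarray(log_sigma_gas, dtype=float)
    ebv_nebular = np.asarray(ebv_nebular, dtype=float)
    good = np.isfinite(log_sigma_gas) & np.isfinite(ebv_nebular)
    return np.poly1d(np.polyfit(log_sigma_gas[good], ebv_nebular[good], order))

# e.g. poly = fit_ebv_vs_loggas(np.log10(gas_surface_density(sfr_density)), ebv)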
SUMMARY AND DISCUSSION
In this work, we have measured resolved kpc-scale dust reddening along the major axes of eight emission-line disky galaxies at z ∼ 0.4. We have used pixel-by-pixel SED fitting and Keck/DEIMOS spectra to infer stellar and nebular dust extinction at kpc scales, respectively. While the sample size and redshift range probed are too small to draw robust statistical inferences about galaxy populations in general, we have developed a methodology that can be practically applied to much larger samples to measure dust inside galaxies at different radii from the center, using optical spectra from local galaxies to z ∼ 0.5 and infrared spectra at higher redshifts.

Figure 8. Variation of nebular color excess as a function of gas surface density in all resolution elements of the galaxies in the sample. Magenta stars show the median in bins of gas surface density and the magenta dashed line is a third-order polynomial fit to the blue data points. The cyan solid line shows the relation between the nebular E(B−V) and dust mass surface density (multiplied by a factor of 1500), from Kreckel et al. (2013).
These kinds of studies could then be used to address the variation among results for integrated galaxies at high-z found in the literature.
The integrated nebular to stellar color excess ratios for the galaxies in our sample are larger than unity, with median (E(B − V)_nebular / E(B − V)_stellar) = 2.5 and standard deviation 1.0. Due to the small sample size, a clear trend of the nebular to stellar color excess ratio with either stellar mass or sSFR was not seen. However, the dependence of the nebular to stellar color excess ratio can be investigated for individual galaxies. We specifically compared two of the galaxies in the sample (ID = 5 and 7) with the same redshift (z = 0.38) and comparable stellar masses (log(M∗/M⊙) ∼ 9.65). Between the two, the one with the higher SFR has the smaller nebular to stellar color excess ratio. This is in agreement with the work of Price et al. (2014) but contrary to the overall trend seen in Reddy et al. (2015) with integrated measurements for larger samples of galaxies at higher redshifts. The difference here can be explained by differences in the orientations (inclinations) of the two galaxies, a parameter that is often overlooked in many studies.
We also compared the integrated and resolved nebular color excesses in galaxies and found good agreement between the two, with the integrated value being equal to or slightly less than the median of the resolved measurements (median ∆E(B − V)_median−integrated ∼ 0.05 mag). This small offset is mostly due to the signal-to-noise criteria applied to the Hα and Hβ emission lines at each resolution element, which exclude the outer parts of the emission line (which also appear to have lower extinction) from the resolved measurements. We found that the resolved stellar continuum color excess profiles show higher extinction towards the central regions of galaxies compared to the outer parts of the disk. This is contrary to the nebular color excess profiles of the lower mass galaxies, which show almost flat radial profiles.
The relation between stellar mass and color excess has been studied at various redshifts (e.g. Garn & Best 2010, Domínguez et al. 2013). Here, we extended this relation to substructures inside galaxies by studying the variation of nebular color excess as a function of stellar mass surface density, and found an increasing amount of nebular color excess in regions with higher stellar mass surface density. This also explains, to some extent, the lack of a correlation between nebular attenuation and distance from the center in the lower mass galaxies, in which the range of stellar mass surface density covered is smaller than in the higher mass galaxies.
We also examined the relation between the nebular color excess and the gas mass surface densities converted from the SFR surface densities. The shape of the attenuation relation at high gas mass surface densities resembles the attenuation versus dust mass surface density relation found in Kreckel et al. (2013), implying that the dust to gas mass ratio does not change significantly as a function of extinction at the physical scales probed in this study. It is, however, important to note the assumptions made in deriving this result, such as the conversion from SFR surface density to gas surface density and the dust mass surface densities, which are all based on local observations and might not hold true at higher redshifts. Resolved FIR observations with the Atacama Large Millimeter/submillimeter Array (ALMA) will be essential to examine the validity of these assumptions. | 2015-10-08T20:52:04.000Z | 2015-10-08T00:00:00.000 | {
"year": 2015,
"sha1": "1da6a5f8e6b9794e59a3f4afbed136640526552f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "7f6bdda028b7370b1058a6d9566fbcf198b90b3f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
232075133 | pes2o/s2orc | v3-fos-license | VOC 202012/01 Variant Is Effectively Neutralized by Antibodies Produced by Patients Infected before Its Diffusion in Italy
The coronavirus disease 2019 (Covid-19) pandemic is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and represents a global health emergency that needs urgent intervention. Viruses constantly change through mutation, and new variants of a virus are expected to occur over time. In the United Kingdom (UK), a new variant called B.1.1.7 has emerged with an unusually large number of mutations. The aim of this study is to evaluate, by seroneutralization assay, the level of protection against the new VOC 202012/01 variant afforded by sera from 12 patients of the Apulia Region (Italy) who were infected with Covid-19, and later healed, between March and November 2020, when the English variant was not yet circulating in this territory. The sera of these patients had already been tested using a virus belonging to lineage B.1 and showed antibody neutralizing titers ranging between 1:160 and 1:320. Sera from all 12 donors showed the same neutralizing antibody titers against a strain belonging to lineage B.1.1.7 (VOC 202012/01). These data indicate that antibodies produced by subjects infected with SARS-CoV-2 variants circulating before the appearance of the English one also appear to neutralize this variant.
Introduction
Viruses, in particular RNA viruses, are often subject to mutations, and the appearance of new variants is quite a common event. The SARS-CoV-2 variant virus that emerged in the UK in December 2020, the so-called "English variant" (VOC 202012/01), attracts particular attention due to its high contagiousness [1]. This variant presents multiple spike protein mutations (deletion 69-70, deletion 144, substitutions N501Y, A570D, D614G, P681H, T716I, S982A, and D1118H) as well as mutations in other genomic regions [2]. Among them, N501Y is of major concern because it involves one of the six key amino acid residues determining a tight interaction of the SARS-CoV-2 receptor-binding domain (RBD) with its cellular receptor angiotensin-converting enzyme 2 (ACE2) [3,4]. This mutation has also appeared, independently, in a rapidly spreading variant in South Africa [5].
Preliminary reports by the United Kingdom indicate that this variant is more transmissible than previous circulating strains, with an estimated increase of between 40% and 70% in transmissibility [6].
Laboratory studies are ongoing to determine whether these variant viruses have different biological properties or may alter vaccine efficacy [7].
Currently, there is no evidence that these variants cause more severe illness or increased risk of death. On the other hand, the risk that substantial genomic alteration may lead to test failure or escape from antibody response or vaccine efficacy must be carefully considered, and new information about the virologic, epidemiologic, and clinical characteristics of these variants is rapidly emerging [8]. As a matter of fact, a better understanding of how easily these variants might be transmitted and whether currently authorized vaccines will protect people against them is needed.
In addition, convalescent plasma (hyperimmune plasma) is one of the most used therapeutic strategies to reduce mortality and severity of the disease in patients infected with Covid-19 [9], but there are still no data about the efficacy of hyperimmune plasma obtained by patients infected and then healed before December 2020 against this variant that is rapidly spreading across Europe.
The aim of this study is to demonstrate that neutralizing antibodies produced by Italian patients who acquired the infection from March to November 2020, before the spread of VOC 202012/01, are fully protective also against this variant.
Patients
Serum samples were collected from 12 patients who acquired Covid-19 infection between March 2020 and November 2020 and who were recruited as hyperimmune plasma donors for the Italian project TSUNAMI; these samples were used in the present study. All serum samples had been previously analysed by seroneutralization test using a strain isolated in Italy during the first stage of the pandemic, and all of them showed an antibody neutralizing titer ranging between 1:160 and 1:320.
The lineage of the infecting virus was established for two patients, for whom the original diagnostic sample collected at the time of diagnosis could be retrospectively analysed by whole genome sequencing. For the other patients, for whom the infecting viral strain could not be retrospectively established, it is reasonable to assume that the strains were different from VOC 202012/01, since during the period in which the patients became infected, this variant did not circulate in the Apulia Region.
The seroneutralization test was run in parallel with the VOC 202012/01 strain isolated in a biosafety level 3 (BSL-3) laboratory of the Istituto Zooprofilattico Sperimentale of Puglia and Basilicata (Foggia, Italy), and with the virus isolate that has been used for the standardization of the test protocol that was established within the Italian network for SARS-CoV-2 seroneutralization test (NeuCoV-Net) established for the TSUNAMI protocol. The first one (GISAID accession number: EPI_ISL_745193), belonging to the lineage B.1.1.7 virus (VOC 202012/01), was isolated from a patient returning from Great Britain on 22nd of December 2020; the second one, belonging to the lineage B.1 clade G (GISAID accession number: EPI_ISL_568579), was isolated in March 2020 in Italy and was provided by Prof. Fausto Baldanti, Fondazione IRCCS Policlinico San Matteo (Pavia, Italy).
Cells and Virus Stock
African green monkey kidney Vero E6 cells were used for both the propagation of SARS-CoV-2 and the neutralization assay. Cells were cultured in a 25 cm² cell culture flask in Eagle's minimal essential medium (EMEM) (Life Technologies, Carlsbad, CA, USA) supplemented with 10% (v/v) fetal bovine serum (FBS) (Life Technologies, Carlsbad, CA, USA), and 100 U/mL penicillin and streptomycin (Life Technologies, Carlsbad, CA, USA) in 5% CO₂ at 37 °C.
All procedures handling SARS-CoV-2 and infected cell cultures were carried out in a BSL-3 laboratory. To produce the viral stock, the virus was propagated in Vero E6 cells, and the culture medium was collected and centrifuged to remove cell debris [10].
Titration of SARS-CoV-2
Virus infectious titers were established by the Reed and Muench tissue culture infective dose (TCID₅₀) end point method [11]. For titration, 2 × 10⁴ Vero E6 cells (in 50 µL) were plated into 96-well plates. The stock solution of SARS-CoV-2 was then diluted serially from 10⁻¹ to 10⁻⁸, and 25 µL of each dilution was added to the cells and incubated in 5% CO₂ at 37 °C for 72 h. Eight replicates were performed for each dilution and used to quantify the virus titer and statistically determine the TCID₅₀ end point.
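A minimal sketch of the Reed–Muench 50% end-point calculation (the CPE counts in the usage example are invented for illustration only):

import numpy as np

def reed_muench_tcid50(log10_dilutions, n_positive, n_wells):
    # log10_dilutions: e.g. [-1, -2, ..., -8], most concentrated first
    # n_positive: wells showing CPE at each dilution; n_wells: wells inoculated (8 replicates here)
    dil = np.asarray(log10_dilutions, dtype=float)
    pos = np.asarray(n_positive, dtype=float)
    neg = np.asarray(n_wells, dtype=float) - pos
    cum_pos = np.cumsum(pos[::-1])[::-1]       # infected accumulated from the most dilute end
    cum_neg = np.cumsum(neg)                   # uninfected accumulated from the most concentrated end
    pct = 100.0 * cum_pos / (cum_pos + cum_neg)
    above = np.where(pct >= 50.0)[0][-1]       # last dilution with >= 50% infected wells
    below = above + 1
    prop_dist = (pct[above] - 50.0) / (pct[above] - pct[below])
    step = abs(dil[above] - dil[below])        # log10 of the dilution factor
    return dil[above] - prop_dist * step       # log10 of the 50% end-point dilution

# e.g. reed_muench_tcid50([-1, -2, -3, -4, -5, -6, -7, -8], [8, 8, 8, 6, 3, 1, 0, 0], [8] * 8)
# gives about -4.7, i.e. a TCID50 end point near the 10^-4.7 dilution (counts are illustrative only)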
Cytopathic Effect Based Micro-Neutralization Assay
A cytopathic effect (CPE)-based micro-neutralization assay was conducted in 96-well microtiter plates [12]. Briefly, eight-fold dilutions (from 1:10 to 1:640) of human serum samples were tested in triplicate wells for the presence of antibodies that neutralize the infectivity of SARS-CoV-2 in Vero E6 cell monolayers. 100 TCID₅₀ of virus in 25 µL/well were incubated with 25 µL of each serum dilution in EMEM with 6% FBS for 1 h at 37 °C. After the incubation, 2 × 10⁴ Vero E6 cells (in 50 µL) were added to each well. The results of the seroneutralization test were determined by the appearance of CPE after 72 h, observed under an inverted Axiovert 25 microscope (Zeiss, Oberkochen, Germany). The neutralizing antibody titer was defined as the highest serum dilution at which no CPE breakthrough was observed in any of the test wells.

Table 1 shows the comparison of seroneutralization titers of serum samples from the 12 donors, who had been infected with SARS-CoV-2 strains different from VOC 202012/01, assessed against two different SARS-CoV-2 strains: i. lineage B.1 and ii. lineage B.1.1.7 (VOC 202012/01). As can be seen, the seroneutralization titer was identical against both viral strains (Table 1).
Discussion
The occurrence of new variants in the evolution of SARS-CoV-2 is a natural event, but it raises concern as mutations may confer altered biological characteristics on the virus, including increased contagiousness, as is the case with VOC 202012/01, which appeared in the UK and spread to several countries. In fact, this variant harbors multiple mutations in the spike protein, improving the ability of the virus to bind and penetrate cells. One major concern is the possibility that a profoundly mutated spike protein may alter recognition by neutralizing antibodies, raising concerns about the protective ability of the recently issued vaccines that are based on the original spike protein sequences. The evidence provided in the present study, in agreement with a recent report [13], indicates that the VOC 202012/01 variant is sensitive to the neutralizing activity of antibodies produced by patients in response to previously circulating viral strains, and the neutralizing titers are identical to those established using, as virus challenge, a different strain that was isolated several months earlier and is used nationwide to establish the neutralization titer of hyperimmune convalescent plasma preparations in Italy. These data are reassuring, in that all those who have overcome the disease and have produced good levels of protective antibodies may be protected against a possible reinfection sustained by an even distant SARS-CoV-2 strain. Furthermore, the administration of hyperimmune plasma may be considered a good protective opportunity against distant strains of SARS-CoV-2, irrespective of the characteristics of the strains that induced the antibody response. In line with these considerations, Xuping Xie et al. [14] have recently shown that antibodies produced by the mRNA-based COVID-19 vaccine BNT162b2 are effective against the isogenic Y501 SARS-CoV-2 developed on the genetic background of the N501 clinical strain USA-WA1/2020, which also provided the genetic background of the BNT162b2-encoded spike antigen. With respect to that study, our evidence adds important information, as the viruses used in the present study are authentic circulating strains that are both sensitive, to the same extent, to antibodies elicited by different viral strains.
Conclusions
The data of this study indicate that the antibodies produced in subjects infected with SARS-CoV-2 variants circulating before the appearance of the VOC 202012/01 variant possess the same neutralizing power against this variant. Genomic surveillance aimed at maintaining an updated molecular map of circulating SARS-CoV-2 strains plays a fundamental role, since this virus can easily undergo genetic variation. We cannot exclude a priori that future virus mutation(s) might induce a structural change of the spike protein substantial enough to require vaccine update and modification. | 2021-02-18T14:09:38.087Z | 2021-02-01T00:00:00.000 | {
"year": 2021,
"sha1": "6b73b71fa8060be5ebe3d5a8329c182b4e4cd98a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4915/13/2/276/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "84e80527f2bbdf0ba8afd909d0a3e3a3172a9c0f",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235656792 | pes2o/s2orc | v3-fos-license | Genome-Wide SNP Markers for Genotypic and Phenotypic Differentiation of Melon (Cucumis melo L.) Varieties Using Genotyping-by-Sequencing
Melon (Cucumis melo L.) is an economically important horticultural crop with abundant morphological and genetic variability. Complex genetic variations exist even among melon varieties and remain unclear to date. Therefore, unraveling the genetic variability among the three different melon varieties, muskmelon (C. melo subsp. melo), makuwa (C. melo L. var. makuwa), and cantaloupes (C. melo subsp. melo var. cantalupensis), could provide a basis for evolutionary research. In this study, we attempted a systematic approach with genotyping-by-sequencing (GBS)-derived single nucleotide polymorphisms (SNPs) to reveal the genetic structure and diversity, haplotype differences, and marker-based varieties differentiation. A total of 6406 GBS-derived SNPs were selected for the diversity analysis, in which the muskmelon varieties showed higher heterozygote SNPs. Linkage disequilibrium (LD) decay varied significantly among the three melon varieties, in which more rapid LD decay was observed in muskmelon (r2 = 0.25) varieties. The Bayesian phylogenetic tree provided the intraspecific relationships among the three melon varieties that formed, as expected, individual clusters exhibiting the greatest genetic distance based on the posterior probability. The haplotype analysis also supported the phylogeny result by generating three major networks for 48 haplotypes. Further investigation for varieties discrimination allowed us to detect a total of 52 SNP markers that discriminated muskmelon from makuwa varieties, of which two SNPs were converted into cleaved amplified polymorphic sequence markers for practical use. In addition to these markers, the genome-wide association study identified two SNPs located in the genes on chromosome 6, which were significantly associated with the phenotypic traits of melon seed. This study demonstrated that a systematic approach using GBS-derived SNPs could serve to efficiently classify and manage the melon varieties in the genebank.
Introduction
The melon, Cucumis melo L. (Cucurbitaceae), is an economically valuable horticultural fruit crop that is highly important in Mediterranean and East Asian countries. The world production of melons was estimated to be about 27.3 million tons from 1.04 million ha [1]. Based on the availability of many wild Cucumis specimens, Africa is believed to be the geographical origin of the melon [2]. However, based on recent taxonomical studies, both Africa and Asia have been proposed for species origins [3]. Similarly, the history of melon domestication and diversification is not yet clear [4]. African and Asian cultivars/landraces According to the International Code of Nomenclature for Cultivated Plants (ICNCP), approximately 522 synonyms of C. melo have been recognized [29]. Therefore, there are many different types of melon available in the seed catalog that commonly come under the species name "C. melo". Similarly, in the Korean genebank, all the melon accessions were commonly recorded as "C. melo". Hence, it is impossible to differentiate seed accessions without standard passport descriptors. Moreover, all over the world, different melon varieties have been identified with the common name "melon". However, in Korea, makuwa (C. melo L. var. makuwa) is generally called "chamoe" and treated as a different fruit from other melon species; people consume the fruit flesh and seeds of makuwa, which has smaller seeds than that of the melon. The seed size is an important characteristic to differentiate the makuwa from other melon varieties in terms of edibility. Hence, in the present study, we also attempted a GWAS to identify the SNPs associated with the phenotypic traits of melon seeds. The generated data provide new insights into the identification of candidate genomic regions that could be used to differentiate all three melon varieties in order to efficiently classify the melon accession resources in the genebank.
GBS Analysis
To understand the genetic relationship between the three different melon varieties, 72 melon accessions consisting of muskmelon, makuwa, and cantaloupe were sequenced using GBS technology (Supplementary Materials Table S1). The sequencing data are presented in Supplementary Materials Table S2. Sequencing of the GBS library yielded 217.99 million (M) raw reads. After quality filtering, a total of 184.5 M clean reads with an average of 2.56 M reads per sample (ranging from 1.06 to 5.62 M) were generated from the raw reads. Statistical analysis of the sequence data further showed that the average quality value 30 (Q30) was ≥82.2%, indicating that the GBS library was sufficient for melon germplasm characterization. The reads of each of the 72 samples were mapped to the Cucumis melo L. cv. DHL92 v.3.5.1 reference genome.
Among the GBS sequence reads, a total of 153.4 M reads, with an average of 2.13 M (84.1%) reads per sample, were aligned to the reference genome. Among them, melon35 had the highest mapping rate (88%) and melon34 had the lowest rate (76.4%). Considering only the successfully mapped reads from the 72 melon accessions, SNPs were discovered by analyzing a single master alignment file, and genotypes were called with GATK [30]. A total of 39,034 GBS-derived SNPs were identified, and a total of 32,628 high-quality SNPs remained after filtering out duplicated reads. Among them, 6406 SNPs with <5% missing data were selected. The homozygote and heterozygote SNP ratios across chromosomes showed that muskmelon (C. melo subsp. melo) has more heterozygote SNPs (Supplementary Materials Figure S1). The number of homozygous SNP loci ranged from 1199 (melon18) to 3091 (melon59) and the number of heterozygote SNPs ranged from 90 (melon34) to 2835 (melon18) among the tested varieties (Supplementary Materials Table S3).
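A minimal sketch of the missing-data filter and the per-sample homozygote/heterozygote counts, assuming the GATK calls have been recoded as a simple 0/1/2 dosage matrix with NaN for missing genotypes (a simplification of the actual VCF-based workflow):

import numpy as np

def filter_snps(geno, max_missing=0.05):
    # geno: (n_samples, n_snps) matrix of allele dosages 0/1/2 with np.nan for missing calls,
    # a simplified stand-in for the VCF produced by GATK
    missing_rate = np.isnan(geno).mean(axis=0)
    keep = missing_rate < max_missing          # retain SNPs with < 5% missing data
    return geno[:, keep], keep

def het_hom_counts(geno):
    # per-sample counts of heterozygous (dosage 1) and homozygous (dosage 0 or 2) SNP calls
    het = (geno == 1).sum(axis=1)
    hom = ((geno == 0) | (geno == 2)).sum(axis=1)
    return het, hom

# e.g. filtered, kept = filter_snps(geno); het, hom = het_hom_counts(filtered)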
Genetic Structure and Molecular Diversity
An admixture-based clustering implemented in the STRUCTURE software and the DAPC were performed to infer the genetic structure of the germplasm collection. The STRUCTURE analysis results (Supplementary Materials Figure S2) revealed the best grouping number (K = 2) based on the delta K. Populations 1 and 2 consisted of 44 and 17 accessions, respectively, and 11 accessions were identified in the admixed population (Supplementary Materials Figure S2).
Further, the DAPC was carried out to detect the possible number of clusters among the 72 accessions (Figure 1). The number of detected clusters was three, which coincided with the lowest BIC value obtained from the find.clusters function. The first eight PCs (conserving 53% of the variance) of the PCA were retained, and three discriminant eigenvalues were confirmed by the cross-validation analysis. Clusters 1, 2, and 3 consisted of 18, 48, and six accessions, respectively. The distribution of the accessions in the three populations fully matched the classification of the varieties as makuwa, muskmelon, and cantaloupe. Thus, each population was considered for genetic diversity analysis.

Figure 1. The first eight principal components (PCs) and three discriminant eigenvalues were retained during the analyses to describe the relationship between the clusters. The axes represent the first two linear discriminants (LD). Each circle represents a cluster and each color represents a different subpopulation identified by the DAPC.
To quantify the genetic diversity of the three melon populations, Shannon's diversity index (I) was employed using the GBS dataset. The I was 0.31 for pop1, 0.48 for pop2, and 0.47 for pop3 (Table 1). The number of effective alleles (Ne) was 1.31, 1.55, and 1.52, while the expected heterozygosity (He) was 0.19, 0.32, and 0.31 for pop1, pop2, and pop3, respectively. The percentage of polymorphic loci (%P) ranged between 83.7 and 96.8. The distribution of molecular variance among and within population clusters was estimated using AMOVA. The results reveal that based on pairwise PhiPT values, the genetic variability within clusters (54%) was greater than the variability among the clusters (46%) ( Table 2). Pairwise PhiPT genetic distances (Table 3) ranged from 0.065 (Cluster 2/Cluster 3) to 0.549 (Cluster 1/Cluster 2), with a mean PhiPT value of 0.463, indicating significant variation among the population clusters (Table 2). The genetic diversity among the 46 makuwa accessions was also assessed using diversity indices. The I was 0.11 for cultivar, and 0.10 for landrace varieties (Supplementary Materials Table S4). The number of alleles (Na) was 0.89 and 0.70, while the He was 0.07 and 0.06 for the cultivar and landrace varieties, respectively. The percentage of polymorphic loci (P%) was higher in the cultivar varieties (33.5%) than in landrace varieties (23.6%). The AMOVA results reveal that based on pairwise PhiPT values, the genetic variability within clusters (93%) was greater than the variability among the clusters (7%) (Supplementary Materials Table S5), with a mean PhiPT value of 0.074, indicating considerable variation between clusters (Supplementary Materials Table S5).
LD Decay
Since this study identified three distinct populations in the C. melo germplasm collection, we estimated the LD decay separately for each (Figure 2). The LD was highly variable among the different genomic windows. The LD decay was clearer with pairwise distance, with the threshold value reaching r² < 0.4 at 100 kb when the LD was analyzed across all varieties. The LD was also calculated separately for the makuwa, muskmelon, and cantaloupe varieties defined by the DAPC. As the LD varied significantly among the three varieties (Figure 2 and Table 4), the LD decay distance (to r² = 0.5) for makuwa and cantaloupe was approximately 200 and 100 kb, respectively. For muskmelon, the LD decay distance was approximately 50 kb (r² = 0.25). As the threshold value of the LD decay was very high for makuwa (r² < 0.5), we also estimated the LD decay separately for cultivars and landraces. The results reveal that the landraces decayed faster than the cultivar varieties (Figure 2 and Table 4).
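A simplified sketch of an LD-decay calculation (squared Pearson correlation of allele dosages versus physical distance, binned); the actual analysis may use haplotype-based r² and different window and bin sizes:

import numpy as np

def ld_decay(geno, positions, max_dist=500_000, bin_size=10_000):
    # geno: (n_samples, n_snps) dosage matrix (0/1/2) for one chromosome, missing data removed;
    # positions: sorted bp positions matching the columns of geno
    n_snps = geno.shape[1]
    dists, r2s = [], []
    for i in range(n_snps):
        for j in range(i + 1, n_snps):
            d = positions[j] - positions[i]
            if d > max_dist:
                break                          # positions are sorted, so stop extending the window
            r = np.corrcoef(geno[:, i], geno[:, j])[0, 1]
            dists.append(d)
            r2s.append(r * r)
    dists, r2s = np.array(dists), np.array(r2s)
    bins = (dists // bin_size).astype(int)
    labels = np.array(sorted(set(bins)))
    centers = labels * bin_size + bin_size / 2.0
    mean_r2 = np.array([np.nanmean(r2s[bins == b]) for b in labels])
    return centers, mean_r2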
Phylogeny for Discrimination of Varieties
A Bayesian phylogenetic tree for all 72 accessions was constructed for a better visualization of their relationships. The Bayesian phylogenetic reconstruction of melon varieties showed a highly resolved phylogeny ( Figure 3). In the Bayesian tree, all three melon varieties (makuwa, muskmelon, and cantaloupe) formed individual clusters where a single melon (subsp. melo) accession (melon56) was clustered together with the makuwa clade, and a dudaim melon accession (melon54) was located closer to the makuwa clade; this was similar to the results of the ADMIXTURE (Figure 3). The phylogenetic tree clearly provided the intraspecific relationships between the three melon varieties. As expected, based on the posterior probabilities, the three examined varieties were clustered separately from each other, while muskmelon and cantaloupe varieties were found in the same clade.
Haplotype Network
The concatenated SNP matrix exhibited a total of 48 haplotypes among the varieties. Using an integer neighbor-joining network [31], we attempted to draw the three observed haplotypes from those of the extant neighboring populations. The integer neighbor-joining haplotype network revealed three major networks (Figure 4), with a clear distinction among makuwa, muskmelon, and cantaloupe haplotypes.
Evaluation of SNP Markers for Varieties Discrimination
Muskmelon and makuwa varieties were found to be difficult to distinguish because of their genotypic relationship. Initially, a total of 6406 SNPs were filtered from the raw variants to discriminate the muskmelon, makuwa, and cantaloupe varieties. Furthermore, in a Pearson's chi-squared test, a total of 52 SNPs specific to each variety were detected based on allele frequencies. The concatenated consensus SNP markers showed clear discrimination of the makuwa, muskmelon, and cantaloupe varieties (Supplementary Materials Figure S3 and Table S6), where var. cantalupensis was also located closer to the muskmelon accessions, consistent with the phylogeny results. Similarly, a dudaim melon accession (melon54) and a misidentified accession (melon56) in the Bayesian phylogeny also showed clear variation between the makuwa and muskmelon accessions.
Development of CAPS Markers
To discriminate the melon varieties for the efficient management of melon accessions in the genebank, the SNPs were converted into CAPS markers. The developed CAPS markers were tested on 23 representative makuwa accessions along with muskmelon and cantaloupe accessions ( Figure 5). Among the 52 SNPs selected, two SNP positions identified in the intergenic region, namely 27,668,340 bp in chromosome 3 (MELO3C010934) and 22,254,315 bp in chromosome 9 (MELO3C005675), were successfully recognized with the restriction enzyme. The SNP located on chromosome 3 had a recognition site (GGTAG) for the BccI restriction enzyme, which was developed as CAPS_10. The PCR product (556 bp) was digested with the BccI enzyme, which produced an uncut allele (556 bp) pattern in muskmelon and cantaloupe, whereas a digested allele (152 and 404 bp) pattern was produced in makuwa ( Figure 5a). Moreover, the muskmelon accession (melon56) had a similar digested allele pattern to makuwa. Similarly, the SNP located on chromosome 9 had a recognition site (TGACC) for the BsrI restriction enzyme, which was developed as CAPS_33. The PCR product (585 bp) was digested with BsrI, which produced the opposite allele pattern of CAPS_10. Muskmelon and cantaloupe varieties produced a digested allele (115 and 470 bp), whereas makuwa produced an uncut allele (585 bp). Likewise, the muskmelon accession melon56 produced an uncut allele pattern similar to makuwa (Figure 5b).
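A small sketch of how the expected CAPS fragment patterns can be predicted from an amplicon sequence; note that it places the cut at the recognition site itself, ignoring the enzyme-specific cleavage offsets of BccI and BsrI, so the returned sizes are only approximate:

def expected_caps_fragments(amplicon, recognition_site):
    # predict CAPS fragment sizes from a PCR amplicon sequence; the cut is placed at the start
    # of the recognition site, ignoring enzyme-specific cleavage offsets, so sizes are approximate
    amplicon = amplicon.upper()
    site = recognition_site.upper()
    cuts, start = [], amplicon.find(site)
    while start != -1:
        cuts.append(start)
        start = amplicon.find(site, start + 1)
    if not cuts:
        return [len(amplicon)]                 # uncut allele
    edges = [0] + cuts + [len(amplicon)]
    return [b - a for a, b in zip(edges[:-1], edges[1:]) if b > a]

# e.g. an allele carrying GGTAG inside the 556 bp CAPS_10 amplicon digests into two fragments
# (reported in the text as 152 and 404 bp), while the alternative allele returns the uncut 556 bp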
Identification of Genes or Loci Related to Agronomic Traits
To identify the causative genes for the agronomic traits, we performed an association study with mixed models using a panel of 72 accessions and the phenotypic descriptors of melon seeds (Supplementary Materials Table S1). When we compared the phenotypic characteristics of the seeds (such as TSW, length, and width) among varieties, makuwa seeds were smaller than those of the muskmelon and cantaloupe accessions (Figure 6). However, based on the phenotypic characteristics, the dudaim melon (melon54) and muskmelon (melon56) accessions were smaller in size when compared with other muskmelon accessions (Supplementary Materials Table S1), which creates more deviation in TSW within the muskmelon accessions (Figure 6). Manhattan plots of the association analysis revealed strong signals for the phenotypic traits of melon seeds (Supplementary Materials Figure S4). Among the signals, a total of four SNPs (S6_875904, p = 0.00016; S6_5912593, p = 0.00042; S8_11953060, p = 0.00002; S9_23627273, p = 0.00009) on chromosomes 6, 8, and 9 were significantly associated with the TSW of melon seeds. The SNPs on chromosomes 8 and 9 were located in a gene predicted to encode a hypothetical protein and in a splicing intron of the S-type anion channel SLAH2 gene, respectively, whereas the SNPs S6_875904 and S6_5912593 were located in genes annotated as protein ABIL 1 and titin homolog isoform X2, respectively. Sets of 11 and 12 associated signals, which spanned 275.25 kb (from 615,394 to 890,641 bp) and 369.07 kb (from 5,600,665 to 5,969,733 bp), respectively, were also identified around these two SNP regions in the melon DHL92 v3.5.1 reference genome (Supplementary Materials Figure S5). Various genes predicted for seed ontology were detected in these regions. The LD haplotype analysis with the 72 accessions showed associations of S6_875904 and S6_5912593 with 11 and 12 other loci, respectively, revealing LD blocks in these regions (Supplementary Materials Figure S5a,b).
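For orientation, a naive per-SNP association scan is sketched below; it uses simple linear regression and therefore omits the kinship and population-structure corrections of the mixed models actually used for the GWAS described above:

import numpy as np
from scipy.stats import linregress

def simple_gwas(geno, trait):
    # naive per-SNP scan: linear regression of a seed trait (e.g. TSW) on allele dosage;
    # the mixed models used in the text additionally correct for structure and kinship
    trait = np.asarray(trait, dtype=float)
    pvals = np.ones(geno.shape[1])
    for j in range(geno.shape[1]):
        x = geno[:, j].astype(float)
        ok = np.isfinite(x) & np.isfinite(trait)
        if np.unique(x[ok]).size < 2:
            continue                           # monomorphic SNP, skip
        pvals[j] = linregress(x[ok], trait[ok]).pvalue
    return pvals

# -log10(pvals) against SNP position gives the Manhattan plot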
Evaluation of SNP Characteristics
Various molecular markers have been used extensively in genetic diversity analyses to characterize the plant germplasm over the past two decades [32]. Recently, GBS technology has become a powerful method for studying the genetic characteristics of plant species [33]. Similarly, the GBS based genotyping strategies have also been used for the analysis of melon genotypic variability [11,23]. In the present study, we analyzed the genetic variability of 72 melon accessions using 32,628 GBS-derived SNPs. Compared with the GBS-derived SNP results in previous studies [27,28], this study collected three different varieties that yielded more SNPs. Finally, 6406 SNPs with < 5% missing data were selected for further study. Recently, Moing et al. [34] reported the infraspecific classification of C. melo cultivar groups based on a combination of about > 80,000 metabolomic features together with >20,000 SNPs. Likewise, GBS combined with SNP validation assays has also been tested in commercial melon cultivars identification using 9018 GBS-derived SNPs [35]. Similarly, in the present study, 32,628 GBS-derived SNPs allowed us to infer infraspecific classification among the C. melo varieties of the widely cultivated species. The heterozygous SNP ratio across chromosomes showed that the muskmelons (C. melo subsp. melo) have a greater range of heterozygous markers than the makuwa (C. melo L. var. makuwa) variety (Supplementary Materials Figure S1). More heterozygous markers in the muskmelon variety could be due to their outcrossing nature, as reported previously [36].
Population Structure and Genomic Variability
The model-based STRUCTURE analysis classified 72 melon accessions into two groups (Supplementary Materials Figure S2). In addition, 72 melon accessions were divided into three well-defined clusters, which were clearer than their genetic structure in the result of the DAPC (Figure 1). The fact that muskmelon, makuwa, and cantaloupe accessions were grouped distinctly indicated that these accessions had the highest level of genetic variability. Our study clearly differentiated muskmelon, makuwa, and cantaloupes into different clusters, which is in agreement with previous studies [12,13,16,[37][38][39][40][41]. Recently, Nimmakayala et al. [22] analyzed 120 melon accessions, which contained a good representative collection of melon species and showed clear differentiation of melon species in which makuwa and cantaloupe varieties were clustered separately.
To resolve the differentiation among muskmelon, makuwa, and cantaloupe, we estimated pairwise (PhiPT) values across all polymorphisms with MAF ≥ 0.05 (Table 1). All PhiPTs were highly significant (p < 0.001). The PhiPT value between accessions of muskmelon and makuwa was 0.549, whereas that between makuwa and cantaloupe was 0.301. Cantaloupe was found to be much closer to muskmelon than the makuwa varieties, with a PhiPT value of 0.065. The AMOVA revealed that based on pairwise PhiPT values, the genetic variability within clusters (54%) was greater than the variability among the clusters (46%) ( Table 2). Pairwise PhiPT genetic distances (Table 3) between populations indicated significantly high variation among population clusters (Table 2). Similarly, the He varied from 0.191 to 0.318, which suggests the extent of variation between the three varieties, as reported in previous studies [22,42].
LD Decay and Haplotype Network
To draw conclusions about genome-wide LD, a high-density SNP array should be analyzed [43]. The current results reveal that the LD was high, in the range of kilobases, when analyzing samples from all melon accessions (Figure 2 and Table 4). For an in-depth analysis of genome-wide experiments, one SNP per kilobase or lower density would be necessary to ensure the detection of LD decay. Therefore, the extent of LD in melon is similar to that reported in other species, such as tomato [44,45], wheat [46], peach [47], barley [48], and rice [49]. On the other hand, the current results show that in muskmelon alone the LD decays rapidly, within 50 kb. Previous studies of LD decay in melon populations showed that LD decays more rapidly, within a few kilobases, which might be due to the use of very different germplasms [5,27,28].
In general, LD declines more slowly in self-pollinated crops, where recombination is less effective than in cross-pollinating species [50,51]. Higher LD levels were also found in flax [52] and sesame [53] because of self-pollination. We found that the slowest LD decay occurred in makuwa. The level of genetic variation within a variety influences the extent of LD, and LD decay was more rapid in landrace accessions than in related cultivars, as reported in other species [54]. The much lower r² values and longer LD distances for makuwa and cantaloupe suggested that these varieties may have undergone a severe bottleneck.
To describe the genetic structure of the melon populations, we performed a haplotype network analysis using the integer neighbor-joining (iNJ) method. The iNJ analysis revealed three major networks (Figure 4), with a clear distinction between muskmelon, makuwa, and cantaloupe haplotypes. The concatenated SNP matrix exhibited a total of 48 haplotypes, of which makuwa showed the highest haplotype frequency (52%), followed by the muskmelon (37.7%) and cantaloupe (10.4%) varieties. Esteras et al. [6] also recognized three clades based on a median-joining network built from an ITS dataset containing wild and cultivated melon accessions from Africa, Asia, the Mediterranean, and Australia.
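Counting haplotypes from a concatenated SNP matrix amounts to tallying identical genotype strings and the varieties that carry them. The short Python sketch below illustrates only that bookkeeping; it is not the iNJ network computation performed in popART, and the example strings and group labels are hypothetical.

```python
from collections import Counter

# Hypothetical concatenated SNP strings (one per accession), keyed by variety
samples = [
    ("makuwa", "AAGTC"), ("makuwa", "AAGTC"), ("makuwa", "AAGTA"),
    ("muskmelon", "ATGTA"), ("muskmelon", "ATGTA"), ("cantaloupe", "ATCTA"),
]

haplotypes = Counter(seq for _, seq in samples)                # distinct haplotypes and counts
share_by_variety = Counter(variety for variety, _ in samples)  # accessions contributed per variety

total = len(samples)
print(len(haplotypes), "haplotypes")
print({h: round(n / total, 2) for h, n in haplotypes.items()})  # haplotype frequencies
print(share_by_variety)
```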
Phylogeny for Discrimination of Melon Varieties
In general, the evolutionary relationships between species are revealed through phylogenetic analysis. In an earlier study, Pitrat [8] divided the subspecies melo into ten groups and the subsp. agrestis into five groups. However, some of these accessions displayed intermediate features and were difficult to classify. Early taxonomic work failed to separate the cultivated species from wild species, resulting in approximately 522 synonyms of C. melo species in the seed catalog [29]. Therefore, around the globe, wild as well as cultivated melon varieties have been recorded as "C. melo" and commonly identified as "melon". Similarly, in the Korean genebank, all the melon accessions were recorded as "C. melo", which makes them difficult to differentiate without standard passport data. Therefore, the present study aimed to test the accuracy of variety discrimination for a total of 72 accessions, including makuwa (C. melo L. var. makuwa), muskmelon (C. melo subsp. melo), and cantaloupe (C. melo subsp. melo var. cantalupensis), with 6406 genome-wide SNP markers. The Bayesian phylogenetic tree clearly showed that the makuwa and muskmelon accessions were clustered in distinct clades (Figure 3). The tree also showed the cantaloupe accessions clustered together with the muskmelon (C. melo subsp. melo) population, which was consistent with the STRUCTURE and iNJ network results. In a recent study, a total of 23,931 GBS-derived SNPs successfully classified 44 melon accessions into two well-defined clusters, which clearly distinguished between the subspecies agrestis and melo [34]. However, a charentais-type Cantalupensis melon accession was placed closer to the subspecies melo, and a dudaim melon accession was placed between the subspecies, which is consistent with the placement of the cantaloupe accessions and the dudaim melon accession in the present study. It is believed that cantaloupe melons originated in the region spanning South Asia to Africa and later spread to Europe [55]. Moreover, the cantaloupes, comprising many cultivated varieties from Europe, Asia and America [56], are more diverse than makuwa. However, based on the present study, a muskmelon accession (melon56) was misidentified or incorrectly classified, which requires critical evaluation.
SNP Markers for Varieties Differentiation
Single nucleotide polymorphisms are valuable markers for discovering species relationships; however, it is highly challenging to validate a subset of melon accessions [57]. Various high-throughput genotyping assay techniques have been developed and successfully used in land plants [58][59][60]. However, these high-throughput methods may not be suitable for germplasm management in genebanks where a large number of accessions are conserved. To minimize the effort of high-throughput genotyping assays, developing SNP markers could be a better solution.
The analysis revealed a total of 52 SNPs selected on the basis of allele frequencies. The concatenated consensus SNPs clearly discriminated between the makuwa and muskmelon accessions (Supplementary Materials Figure S3 and Table S6). Similarly, the cantaloupe accessions showed 96% variation relative to makuwa but only 4% variation relative to the muskmelon varieties, in line with the STRUCTURE and Bayesian phylogeny results. Moreover, these SNPs also explained the clustering behavior of the two melon accessions melon54 and melon56. Among the 52 SNPs, the melon54 accession showed 51 non-matching SNPs with makuwa (98%), whereas there were only six non-matching SNPs with the muskmelon varieties (11.5%). Similarly, the melon56 accession showed five non-matching SNPs with makuwa (9.6%), whereas there were 49 non-matching SNPs with the muskmelon varieties (94.2%), consistent with the Bayesian phylogeny. Hence, the present study suggests that the melon54 accession (dudaim melon) is more appropriately classified as muskmelon (C. melo subsp. melo), whereas the melon56 accession belongs to the variety makuwa (C. melo L. var. makuwa); both cases require more critical evaluation at the field level.
Validation of CAPS Markers
Various SNP-based molecular markers have been developed and successfully used in plant species identification [5,26,53,60]. Among these, CAPS markers have been found to be promising for detecting the intra-and interspecies variation of different species [61]. Similarly, in the present study, two intergenic SNP positions, CAPS_10 and CAPS_33, were found to be promising loci for discriminating the melon varieties. In mammalian systems, most intergenic transcripts were found to be un-spliced and associated with nearby gene expression [62]. Moreover, the intergenic transcribed regions found to be more divergent in expression tended to be more species-specific when compared to annotated genes across plant species [63]. When the PCR products (556 and 585 bp) were digested with the respective enzymes (BccI and BsrI), they produced distinct allele patterns between the muskmelon and makuwa populations ( Figure 5). Interestingly, the muskmelon accession melon56, which was clustered together with the makuwa accessions in the phylogenetic tree, showed a very similar allele pattern to that of makuwa, which requires critical evaluation within the GMS in order to manage the melon accessions correctly in the genebank.
Identification of Agronomic Traits
Around the world, seedless characteristics generally increase the economic value of fruits. In Korea, people prepare makuwa (chamoe) so that the fruit flesh is consumed together with the seeds, as the seeds are tiny compared with melon seeds. Hence, seed size is an important characteristic for differentiating the makuwa variety from muskmelon in terms of edibility. Various studies on genes or loci underlying agronomic traits have been reported in melon [10,22,23]. In a previous study, Pavan et al. [28] detected significant associations for seed width and flowering time, and, interestingly, a candidate gene (MSI1) associated with seed development [64] was detected for seed width. Similarly, in the present study, to identify candidate causal genes for an agronomic trait, we performed an association study with the phenotypic traits of melon seeds. Functional analysis of the candidate genes identified in this study could be useful to confirm their link to phenotypic variation in melon.
Based on phenotypic traits, makuwa has a smaller seed weight than the muskmelon and cantaloupe accessions ( Figure 6). However, interestingly, the TSW of dudaim melon (melon54) and muskmelon (melon56) accessions showed smaller seed weights (13 and 8.4 g), which were similar to makuwa TSW (Supplementary Materials Table S1). The GWAS results reveal phenotypically associated signals within the melon genome (Supplementary Materials Figure S4), in which two SNPs on chromosome 6 (S6_875904 and S6_5912593) were significantly associated with the phenotypic traits of melon seeds (Supplementary Materials Figure S5a,b). Both SNPs were located in the genes predicted for seed ontology, in which the annotated protein ABIL1 and titin homolog isoform x2 were reported for seed development [65,66].
The SNP S6_875904, located in the protein ABIL1 coding region, was annotated as missense and introduces a stop codon into the transcript, whereas the SNP S6_5912593, located in the titin homolog isoform x2 gene, was found to be synonymous. Genome-wide analysis of the maize genome reveals that synonymous mutations change tRNA adaptation, which affects the local translation rate [67], and there is considerable experimental evidence of synonymous mutation effects on the phenotypes of different organisms [68-70]. Moreover, the SNP S6_5912593, identified as an A/A haplotype shared with makuwa in the dudaim melon (melon54) and muskmelon (melon56) accessions, suggested that the titin homolog isoform x2 gene could play a crucial role in seed development, as reported earlier in Arabidopsis. These markers, combined with the SNP markers described above, can be used to manage the melon accessions in order to provide accurate information in the genebank.
Plant Materials
For this study, a total of 72 melon accessions were obtained from the Korean genebank of the National Agrobiodiversity Center at the Rural Development Administration in South Korea (Supplementary Materials Table S1). Based on seed metadata information, 21 muskmelon and a dudaim melon, also called "wild muskmelon", accessions were introduced from USDA-ARS and were collected from different countries. Since all the melon accessions were commonly recorded as "C. melo", hereafter we would like to classify these melon accessions into the following three varieties: muskmelon (C. melo subsp. melo), makuwa (C. melo L. var. makuwa), and cantaloupe (C. melo subsp. melo var. cantalupensis), which could be further classified into cultivar and landraces according to the germplasm introduction information. Seed phenotypic data such as 1000-seed weight (TSW), length (cm), and width (cm) were retrieved from the genebank management system (GMS) of the genebank for the association mapping analysis with each accession.
DNA Extraction
For the GBS of the melon collection, 30 mg of freeze-dried leaf tissue was taken from the 72 accessions listed in Supplementary Table S1. Total genomic DNA isolation was performed according to the manufacturer's recommendation using the QIAGEN plant mini kit (Qiagen, Valencia, CA, USA). The quality of DNA in each sample was determined using 1% (w/v) agarose gel electrophoresis and quantified by spectrophotometry.
Preparation of Genotyping-by-Sequencing Libraries
The extracted DNA was quantified and normalized to 12.5 ng/µL using Quant-iT PicoGreen dsDNA Assay Kit (Molecular Probes, Eugene, OR, USA) with a Synergy HTX Multi-Mode Reader (Biotek, Winooski, VT, USA). The DNA was treated with the restriction enzyme ApeKI (New England Biolabs, Ipswich, MA, USA) at 75 °C for 3 h. Sequencing libraries for GBS were constructed according to previously described procedures [25]. The DNA samples were digested and ligated with adapters, which contained different barcodes for tagging individual samples. Ligated samples were pooled and purified with a NucleoSpin® Gel and polymerase chain reaction (PCR) Clean-up Kit (MACHEREY-NAGEL GmbH & Co. KG, Duren, Germany). The purified samples were PCR amplified in a 50 µL reaction and the amplified products were evaluated for fragment sizes using BioAnalyzer 2100 (Agilent Technologies, Santa Clara, CA, USA). Illumina NextSeq500 (Illumina, San Diego, CA, USA) was used to sequence the GBS libraries, generating 150 bp single-end reads.
Population Structure and Genetic Diversity
The discriminant analysis of principal components (DAPC) was used to assign the individual accessions to population clusters [75]. The DAPC requires the construction of prior groups; therefore, the most likely number of clusters was identified with the "find.clusters" function in the R package adegenet, based on the Bayesian Information Criterion (BIC). In the DAPC analysis, a two-step procedure was followed in which the original data were transformed and submitted to a principal component analysis (PCA), and the principal components (PCs) were passed to a linear discriminant function analysis based on the groups identified by the K-means clustering. Retaining an appropriate number of PCs relative to the number of populations leads to accurate discriminant functions and good discrimination [76]. Hence, the "optim.a.score" function was used to assess the quality of discrimination, which served as the criterion to choose the optimal number of PCs in the DAPC [75]. The resulting clusters were plotted as a DAPC scatterplot with the first and second linear discriminants.
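The two-step DAPC procedure described above (dimension reduction, cluster inference, then a discriminant step) was run in R with adegenet. The Python sketch below only mirrors that idea so the workflow is easier to follow; it is not the adegenet implementation, and the genotype matrix, numbers of components and K values are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical genotype matrix: 72 accessions x 6406 SNPs coded as minor-allele dosage 0/1/2
rng = np.random.default_rng(0)
geno = rng.integers(0, 3, size=(72, 6406)).astype(float)

# Step 1: reduce dimensionality with PCA (adegenet retains a chosen number of PCs)
pcs = PCA(n_components=20).fit_transform(geno)

# Step 2: scan K values with a score (inertia here stands in for adegenet's BIC curve)
inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(pcs).inertia_ for k in range(2, 7)}

# Step 3: assign groups at the chosen K and run linear discriminant analysis on the PCs
k_best = 3  # the study resolved three variety clusters
groups = KMeans(n_clusters=k_best, n_init=10, random_state=0).fit_predict(pcs)
lda = LinearDiscriminantAnalysis(n_components=2).fit(pcs, groups)
coords = lda.transform(pcs)  # first two discriminants, analogous to the DAPC scatterplot axes
```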
To investigate the population structure, admixture analysis was performed on the 72 individuals using the ADMIXTURE tool (available from: http://software.genetics.ucla.edu/admixture/index.html, accessed on 12 April 2021). The admixture-linux-1.3.0 binary was run with default parameters in an unsupervised mode from K = 1 to 21. The cross-validation error for each K was computed with the --cv option (10 folds), which identified K = 2 as the most suitable modeling choice.
For each melon population, we used hierarchical analyses of molecular variance (AMOVA) to investigate the molecular variation within and among the groups defined by the DAPC function. The AMOVA and the pairwise genetic differentiation (PhiPT) between and among melon varieties were calculated using GenAlEx software (6.5 version) with 999 permutations [77]. Expected heterozygosity (He), unbiased expected heterozygosity (uHe), and the percentage of polymorphic loci were also calculated using GenAlEx software.
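GenAlEx produced the He, uHe and AMOVA-based PhiPT values used in this study. As a rough illustration of what the heterozygosity statistic measures, the following Python sketch computes per-locus He and a simplified allele-frequency-based differentiation value from a 0/1/2 dosage matrix; it is not the PhiPT/AMOVA calculation itself, and the random matrices stand in for real genotypes.

```python
import numpy as np

def allele_freq(geno):
    """Alternative-allele frequency per locus from a 0/1/2 dosage matrix (rows = accessions)."""
    return geno.mean(axis=0) / 2.0

def expected_het(geno):
    """Per-locus expected heterozygosity He = 1 - p^2 - q^2 for biallelic SNPs."""
    p = allele_freq(geno)
    return 1.0 - p ** 2 - (1.0 - p) ** 2

def differentiation(geno_a, geno_b):
    """Simplified Nei-style differentiation (Ht - Hs) / Ht between two groups.
    The paper's PhiPT comes from AMOVA with 999 permutations in GenAlEx; this is only
    an allele-frequency analog meant to convey the idea."""
    hs = (expected_het(geno_a) + expected_het(geno_b)) / 2.0
    p_total = (allele_freq(geno_a) + allele_freq(geno_b)) / 2.0
    ht = 1.0 - p_total ** 2 - (1.0 - p_total) ** 2
    keep = ht > 0
    return float(np.mean((ht[keep] - hs[keep]) / ht[keep]))

rng = np.random.default_rng(1)
makuwa = rng.integers(0, 3, size=(30, 500)).astype(float)      # hypothetical dosage matrices
muskmelon = rng.integers(0, 3, size=(22, 500)).astype(float)
print(round(float(expected_het(makuwa).mean()), 3), round(differentiation(makuwa, muskmelon), 3))
```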
Linkage Disequilibrium Decay and Haplotype Analysis
To understand the genome-wide variability among the three varieties, pairwise estimates of linkage disequilibrium (LD) were obtained as the squared correlation of allele frequencies (r²) for SNPs with a minor allele frequency (MAF) > 0.05. An LD threshold (r²) of 0.20 and a window size of 100 kb were used when calculating the correlation coefficients of alleles with the software PopLDdecay [78]. LD was analyzed for different sub-datasets: the total population and the makuwa, muskmelon, and cantaloupe-related groups as defined by the DAPC. The decay of LD with distance in base pairs (bp) between sites within the candidate locus was evaluated using a regression curve. The haplotype frequency within the population groups was calculated using Arlequin software Ver. 3.5.2.2. The phylogenetic network was constructed using the integer neighbor-joining network method [31] with popART (http://popart.otago.ac.nz, accessed on 7 April 2020).
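PopLDdecay computes r² directly from VCF files; the Python sketch below only shows the underlying quantity, the squared genotype correlation binned by physical distance, so the decay curve in Figure 2 is easier to interpret. The positions, window size and random genotypes are illustrative assumptions.

```python
import numpy as np

def pairwise_r2(geno, positions, max_dist=100_000):
    """r^2 between SNP pairs closer than max_dist bp on one chromosome.
    geno: accessions x SNPs dosage matrix (0/1/2); positions must be sorted."""
    dists, r2s = [], []
    for i in range(geno.shape[1]):
        for j in range(i + 1, geno.shape[1]):
            d = positions[j] - positions[i]
            if d > max_dist:
                break
            r = np.corrcoef(geno[:, i], geno[:, j])[0, 1]
            dists.append(d)
            r2s.append(r * r)
    return np.array(dists), np.array(r2s)

rng = np.random.default_rng(2)
pos = np.sort(rng.integers(0, 2_000_000, size=300))
geno = rng.integers(0, 3, size=(72, 300)).astype(float)
d, r2 = pairwise_r2(geno, pos)

# Mean r^2 per 10 kb bin traces the decay curve
bins = np.arange(0, 100_001, 10_000)
decay = [float(r2[(d >= lo) & (d < hi)].mean()) if np.any((d >= lo) & (d < hi)) else float("nan")
         for lo, hi in zip(bins[:-1], bins[1:])]
print(decay)
```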
Phylogenetic Relationships
All SNPs were concatenated into a single alignment. Beast v2.1 was used to calculate the score for the substitution of SNPs, and Bayesian analyses were conducted with the GTR + G nucleotide substitution model using MrBayes version 3.2.6. The GTR + G model was chosen in both the AIC and hLRTs models for the model estimation. The model was estimated by MrModelTest version 2.4, using the calculated score as the input value [79]. In the Bayesian analyses, trees were sampled every 1000 generations using MrBayes until the average deviation of the split frequencies fell below 0.01 [80].
Evaluation of SNP Markers for Varieties Discrimination
Initially, to identify SNP markers, high-quality SNPs were filtered from the raw variants to discriminate the muskmelon, makuwa, and cantaloupe varieties. Specific SNPs were then further filtered based on the allele frequency differences between varieties. Pearson's chi-squared test was performed to identify the significant SNPs that discriminated the melon varieties.
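The filtering step described here, keeping SNPs whose allele frequencies differ sharply between varieties and confirming them with Pearson's chi-squared test, can be sketched as follows. The threshold, genotype coding and data are illustrative assumptions rather than the exact pipeline used in the study.

```python
import numpy as np
from scipy.stats import chi2_contingency

def discriminating_snps(geno_a, geno_b, min_diff=0.9):
    """Return (index, freq_a, freq_b, p-value) for SNPs whose allele frequencies are
    nearly fixed for different alleles in two variety groups (0/1/2 dosage coding)."""
    hits = []
    n_a, n_b = geno_a.shape[0], geno_b.shape[0]
    for k in range(geno_a.shape[1]):
        p_a = geno_a[:, k].mean() / 2.0
        p_b = geno_b[:, k].mean() / 2.0
        if abs(p_a - p_b) >= min_diff:
            # 2x2 allele-count table (group x ref/alt) for Pearson's chi-squared test
            table = np.array([[2 * n_a * (1 - p_a), 2 * n_a * p_a],
                              [2 * n_b * (1 - p_b), 2 * n_b * p_b]])
            _, p_val, _, _ = chi2_contingency(table)
            hits.append((k, p_a, p_b, p_val))
    return hits

# Simulated example: five loci fixed for different alleles between the two groups
rng = np.random.default_rng(3)
geno = rng.integers(0, 3, size=(52, 200)).astype(float)
geno[:30, :5] = 0.0   # first five loci: group A fixed for the reference allele
geno[30:, :5] = 2.0   # ...and group B fixed for the alternative allele
print([hit[0] for hit in discriminating_snps(geno[:30], geno[30:])])  # -> [0, 1, 2, 3, 4]
```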
CAPS Marker Development
To validate the SNPs for variety discrimination, cleaved amplified polymorphic sequence (CAPS) markers were developed with the information of 52 SNPs. The web-based program dCAPS finder 2.0 (www.helix.wustl.edu/dcaps, accessed on 11 June 2020) was used to find the restriction enzyme sites within the SNP positions. To detect SNPs retained in the melon varieties, amplification reactions were carried out using appropriate primers. The PCR product obtained from the amplification of specific SNP regions was digested with 1 U of restriction enzyme (New England BioLabs, Ipswich, MA, USA). Digestion was performed at 37 °C (BccI) or 65 °C (BsrI) for 1 h, and the fragments were analyzed with 2.0% agarose gel electrophoresis.
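A CAPS marker works because one SNP allele creates (or destroys) a restriction-enzyme recognition site in the amplicon, so digestion yields different fragment patterns for the two alleles. The sketch below only illustrates that decision; the recognition motif, amplicon sequence and coordinates are placeholders rather than verified data for BccI or BsrI, and real designs (as with dCAPS Finder) also consider the reverse strand and mismatch primers.

```python
def has_site(seq: str, motif: str) -> bool:
    """Naive forward-strand motif search (real enzymes also recognize the reverse complement)."""
    return motif in seq.upper()

def caps_informative(amplicon: str, snp_index: int, allele_a: str, allele_b: str, motif: str) -> bool:
    """True if exactly one of the two alleles places the recognition motif in the amplicon,
    i.e., the SNP can be scored by a presence/absence digestion pattern."""
    variant_a = amplicon[:snp_index] + allele_a + amplicon[snp_index + 1:]
    variant_b = amplicon[:snp_index] + allele_b + amplicon[snp_index + 1:]
    return has_site(variant_a, motif) != has_site(variant_b, motif)

# Hypothetical 20-bp context around a SNP; "CCATC" is used purely as an example motif
print(caps_informative("ACGTACGTCCATAACGTACG", 12, "C", "A", "CCATC"))  # -> True
```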
Genome-Wide Association Mapping
Genome-wide association analysis for the phenotypic traits of seeds was performed with 6406 high-quality SNPs (MAF > 0.05). GAPIT implements a series of methods for GWAS and genomic selection with high statistical power, high prediction accuracy, and high computing speed [81]. To perform the GWAS with maximum accuracy, several models were applied, including the general linear model (GLM), mixed linear model (MLM), multi-locus mixed model (MLMM), fixed and random model circulating probability unification (FarmCPU), and Bayesian-information and linkage-disequilibrium iteratively nested keyway (BLINK). The p-values for the association of each SNP with the agronomic traits were calculated with the GAPIT R package [82]. The LD heatmap and regional association statistics for TSW were analyzed with LDBlockShow [83] together with the publicly available Cucumis melo L. cv. DHL92 v3.5.1 genome browser (http://cucurbitgenomics.org/, accessed on 15 June 2021).
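The GWAS itself was run with GAPIT in R using the models listed above. To make the per-marker test concrete, the Python sketch below runs the naive single-marker regression that underlies the GLM case and applies a Bonferroni line of the kind drawn on Manhattan plots; the kinship and structure corrections used by MLM, FarmCPU and BLINK are deliberately omitted, and the genotypes and trait values are simulated.

```python
import numpy as np
from scipy import stats

def single_marker_gwas(geno, trait):
    """Regress the trait on allele dosage one SNP at a time and return per-SNP p-values."""
    pvals = np.empty(geno.shape[1])
    for k in range(geno.shape[1]):
        _, _, _, p, _ = stats.linregress(geno[:, k], trait)
        pvals[k] = p
    return pvals

rng = np.random.default_rng(4)
geno = rng.integers(0, 3, size=(72, 6406)).astype(float)          # 72 accessions x 6406 SNPs
tsw = 10.0 + 2.0 * geno[:, 100] + rng.normal(0.0, 1.0, size=72)   # hypothetical 1000-seed weight
pvals = single_marker_gwas(geno, tsw)

bonferroni = 0.05 / geno.shape[1]        # significance line often drawn on Manhattan plots
print(np.where(pvals < bonferroni)[0])   # should recover SNP index 100 (plus chance hits)
```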
Conclusions
Highly informative SNP markers were developed in the present study through GBS analysis. The identified SNP markers provide a clear picture of the genomic relationships among the collection of the 72 melon accessions using a set of 6406 genome-wide SNPs. The DAPC and population structure seem to be defined mainly by their varieties. The cantaloupe (C. melo subsp. melo var. cantalupensis) varieties were closer to muskmelon (C. melo subsp. melo) than makuwa (C. melo L. var. makuwa). Bayesian phylogeny of the melon varieties showed a highly resolved phylogeny, and the developed SNP markers clearly discriminated the corresponding varieties more accurately. The SNP markers could be standardized easily with a very low cost and minimum equipment for quick operation in a genebank. In association mapping, two SNPs on chromosome 6 were significantly associated with the phenotypic traits of melon seeds. The SNP variations of protein ABIL1 and titin homolog isoform x2 could be used in molecular breeding to develop commercially improved cultivars/varieties. Overall, this study provides a systematic approach for the efficient classification of melon seed accessions using genome-wide information. Information on genomic variability between melon varieties will facilitate the efficient classification and utilization of these resources in the genebank.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10 .3390/ijms22136722/s1, Figure S1: The homozygote and heterozygote single nucleotide polymorphism (SNP) ratios across the chromosomes, Figure S2: ADMIXTURE results assuming two ancestral populations. Colors represent ancestry components. Stacked bars represent samples. Samples are arranged according to taxonomy as indicated on the x-axis, Figure S3: Single nucleotide polymorphism (SNP) markers to discriminate the muskmelon, makuwa, and cantaloupe varieties, Figure S4: Manhattan plots of the genome-wide association study for the phenotypic traits of 1000-seed weight (TSW), length, and width in the melon populations, Figure S5: The candidate genes (A) protein ABIL1 and (B) titin homolog isoform x2 underlying the 1000-seed weight (TSW) variation in melon accessions. Each panel shows the association statistics for TSW in melon accessions, the location of the genome-wide association study (GWAS)-associated region on chromosome 6, and candidate genes and the LD haplotype heatmap for the GWAS region harboring genes and the segregating single nucleotide polymorphisms (SNPs) (S6_875904 and S6_5912593). The presence of genes and the segregating SNPs are indicated by the red cursor in each location of the figure, Table S1: Sampling information and RDA accession numbers of melon varieties, Table S2: Demultiplexing, adaptor trimming, and read mapping, Table S3: Information about homozygote and heterozygote single nucleotide polymorphisms (SNPs) based on genotyping-by-sequencing (GBS) in melon accessions, Table S4: Statistics of genetic variation for the makuwa cultivars and landraces, Table S5: Results of analysis of molecular variance (AMOVA) and F-statistics within the makuwa cultivars and landraces, Table S6: Pairwise distance between single nucleotide polymorphisms (SNPs). | 2021-06-28T05:21:08.004Z | 2021-06-23T00:00:00.000 | {
"year": 2021,
"sha1": "8e8d7eb2a4f2d3411f171db40ad0cbb88e602e0c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/22/13/6722/pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "8e8d7eb2a4f2d3411f171db40ad0cbb88e602e0c",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
12282961 | pes2o/s2orc | v3-fos-license | Significant Reduction in Helicobacter pylori Load in Humans with Non-viable Lactobacillus reuteri DSM17648: A Pilot Study
Reducing the amount of Helicobacter pylori in the stomach by selective bacterial–bacterial cell interaction was sought as an effective and novel method for combating the stomach pathogen. Lactobacillus reuteri DSM17648 was identified as a highly specific binding antagonist to H. pylori among more than 700 wild-type strains of Lactobacillus species. Applying a stringent screening procedure, the strain DSM17648 was identified as selective binder to H. pylori cells under in vivo gastric conditions. The strain DSM17648 co-aggregates the pathogen in vivo and in vitro. The specific co-aggregation occurs between Lact. reuteri DSM17648 and different H. pylori strains and serotypes, as well as H. heilmannii, but not with Campylobacter jejuni or other commensal oral and intestinal bacteria. Lact. reuteri DSM17648 was shown in a proof-of-concept single-blinded, randomized, placebo-controlled pilot study to significantly reduce the load of H. pylori in healthy yet infected adults. Reducing the amount of H. pylori in the stomach by selective bacterial–bacterial cell interaction might be an effective and novel method for combating the stomach pathogen. Lact. reuteri DSM17648 might prove useful as an adhesion blocker in antibiotic-free H. pylori therapies.
Introduction
Helicobacter pylori is a recognized pathogen and carcinogen causing gastritis, ulcers and gastric cancer. More than 50 % of the world population is infected with this stomach bacterium [1,2]. Severity of the clinical manifestations of the infection is associated with bacterial load [3][4][5]. Currently, the only therapeutic option is eradication of the pathogen by a combination of several antibiotics and a proton-pump inhibitor (triple therapy; Maastricht IV/Florence Consensus Report) [6]. Eradication therapy is associated with severe side effects and development of antibiotic resistances [7]. Reducing the amount of H. pylori in the stomach by selective bacterial-bacterial surface interaction represents an alternative method for combating the stomach pathogen. Specific co-aggregation has been widely discussed as a means to return to homeostasis in diseased states [8][9][10]. While H. pylori resides in the mucus where it is present in its motile form, mucus is constantly produced by the epithelium and shed into the stomach lumen. This continuously releases planktonic H. pylori cells into the stomach. The Lactobacillus strain identified in this study specifically captures such H. pylori cells. As spray-drying or freeze-drying procedures allow the retention of binding activity, structures in the cell wall unaffected by the drying procedures are supposed to be responsible for the aggregation activity.
Previous papers describing the use of Lact. reuteri in H. pylori-related clinical studies show a reduction in H. pylori-associated urease activity by Lact. reuteri strain ATCC55730 [11]. Dore et al. [12] showed that the oral application of Lact. reuteri strain DSM17938, when used in combination with pantoprazole twice a day for 8 weeks, resulted in significant reduction in the urease breath test. Emara et al. [13] used a Lact. reuteri preparation (a mixture of strains Lact. reuteri DSM17938 and Lact. reuteri ATCC PTA6475 in combination with a triple therapy). The Lactobacillus supplementation increased the Gastrointestinal Symptom Rating Scale (GSRS) score significantly, but did not improve the eradication rate.
The aim of the present study was to characterize the binding activity of Lact. reuteri strain DSM17648 to H. pylori in vitro and to determine the impact of 14 days of oral intake of lyophilized Lact. reuteri DSM17648 cells (non-viable) on H. pylori load in a single-blinded, placebocontrolled study.
The taxonomic identification of the Lactobacillus strains to the species level relied on 16S-rDNA sequence analysis (sequencing done by LCG Genomics, taxonomic classification done by Nadicom, Karlsruhe, Germany) using the primers 27f (5′-AGAGTTTGATCMTGGCTCAG-3′) and 1492r (5′-ACGGYTACCTTGTTACGACTT-3′) [16] and on phenotypic characterization using the API 50 CH system and apiweb™ software (bioMerieux, France). Bacterial counts were determined from calibration curves of optical density versus microscopic cell counts using a Neubauer chamber (Carl Roth, Karlsruhe, Germany).
Chemicals and Enzymes
Sugars, sugar substitutes and inorganic chemicals were reagent grade (Merck, Darmstadt; Carl Roth, Karlsruhe; Germany), and proteases (protease from Streptomyces griseus, proteinase K from Tritirachium album, trypsin from bovine pancreas and pepsin from bovine pancreas) were of the highest commercially available grade (Sigma, Taufkirchen, Germany).
Screening for Co-aggregates
Co-aggregation was performed with stationary-phase cells of lactobacilli (A600 = 4, in PBS) and H. pylori (A600 = 2, in artificial gastric juice: pH 4, 0.3 % (w/v) pepsin, 0.5 % (w/v) sodium chloride [17]). Cells were mixed and immediate flocculation was observed. Co-aggregates could be observed visually as flocculating structures, whereas no such structures were present in controls of the single strains (see also Fig. 1). If no aggregates were detected after 10 min, pairs were judged as non-co-aggregating.
For some experiments, cells were stained separately using either hexidium iodide (HI, 10 µg/mL) or carboxyfluorescein diacetate succinimidyl ester (CFDA-SE, 1 µg/mL) (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's instructions. Excess dye was removed by extensive washing with PBS. Equal amounts of cells were mixed and vortexed for 10 s prior to phase-contrast and fluorescence microscopy [10].
Flow Cytometry
Lactobacillus cells (without staining) and H. pylori cells stained with CFDA-SE were used for co-aggregation by mixing suspensions of the strain DSM17648 and H. pylori strain DSM21031T in a ratio of 1:1 (cell/cell) to a final volume of 100 µL and subsequent shaking for 15 min. The mixture was added to 990 µL 0.5 % (w/v) sodium chloride (pH 4) in FACS tubes (BD, Heidelberg, Germany). A non-co-aggregating Lactobacillus strain was used as a negative control. Samples were analyzed using a flow cytometer (FACSCalibur, BD, Heidelberg, Germany). Cell co-aggregation was quantified by determining events with a high fluorescence intensity (>5 × 10²) via channel FL1-H (Ex 488 nm, Em 530/30) [18,19].
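Gating the "high-FL1" events described above amounts to counting the fraction of events above the fluorescence threshold. The sketch below shows that computation on a synthetic intensity vector; in practice the FL1-H values would come from the cytometer's data files, and the distributions used here are made up.

```python
import numpy as np

def high_fl_fraction(fl1, threshold=5e2):
    """Fraction of events whose FL1 intensity exceeds the gate (>5 x 10^2 in this study)."""
    fl1 = np.asarray(fl1, dtype=float)
    return float(np.mean(fl1 > threshold))

rng = np.random.default_rng(5)
single_strain = rng.lognormal(mean=4.0, sigma=1.0, size=9000)   # mostly below the gate
co_aggregates = rng.lognormal(mean=7.5, sigma=0.5, size=1000)   # CFDA-positive aggregate events
events = np.concatenate([single_strain, co_aggregates])
print(round(high_fl_fraction(events), 3))
```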
Scanning Electron Microscopy (SEM)
Cells were prepared as described above and re-suspended in PBS. Co-aggregation was induced by mixing suspensions of the strain DSM17648 and H. pylori strain DSM21031T at a ratio of 1:1 (cell/cell). After 20 min incubation at room temperature, the resulting co-aggregates were pelleted by centrifugation (7,150×g, 1 min, Hettich Mikro 22R, Tuttlingen, Germany). The supernatant was carefully discarded, and co-aggregates were either frozen in liquid nitrogen, freeze-dried and sputtered with palladium (×1,800 picture) or fixed in 4 % glutaraldehyde, dehydrated in graded ethanol solutions, dried in liquid CO2 and sputtered with palladium (×11,000 picture) before SEM. SEM was done using a FEI Quanta 200 FEG field emission scanning electron microscope. Some images were colorized according to the bacillary or spiral shape to facilitate viewing (eye of science, Meckes and Ottawa GbR, Reutlingen, Germany).
Sugar and pH Effects on Co-aggregation
Co-aggregation was tested in the presence of 25 mM sucrose with known co-aggregating pairs. Analogous simultaneous incubations were done testing lactose, glucose, maltose, iso-maltose, fructose or sorbitol to detect possible interference with the ability of the strain DSM17648 to co-aggregate H. pylori. To evaluate the pH dependency of co-aggregation, DSM17648 cells were resuspended in 0.
For the protease treatments, the strain DSM17648 and H. pylori were grown separately to stationary phase, harvested by centrifugation, washed in PBS, and 1 mL aliquots adjusted to A600 = 4 (for DSM17648) or A600 = 2 (for H. pylori) in monopotassium phosphate/calcium chloride buffer (pH 7.0) containing one of four proteases: protease Strep. griseus Type XIV (5.7 U/mg), proteinase K (51 U/mg), trypsin (40 U/mg) or pepsin (2,950 U/mg) at a final concentration of 2.0 mg/mL. After incubation for 1 h at 37 °C, cells were washed and re-suspended again in PBS (pH 7.0), 500 µL aliquots of each preparation were mixed, and co-aggregation was assayed visually and microscopically.
Study Population
The original setting of the study was a placebo-controlled co-twin control design with one twin receiving the active treatment while the co-twin received a placebo. Concordance rates for H. pylori infection in monozygotic twins have been reported at 80 % [22], whereas in dizygotic twins they are 60 %. Heritability for quantitative levels of H. pylori colonization has been estimated at 0.8. Historical prevalence of infection by H. pylori for the general population was reported as 45 %, although more recent studies suggest a reduction to approximately 25 % in the Western world [23,24]. For Germany, a prevalence of 39 % was reported in 1996 [25]. Based on those figures, the first screening phase was planned to include analysis for 64 twin pairs, expecting 29 pairs with at least one affected twin, and 23 concordant pairs, i.e., pairs with positive findings for both twins. As incidence rates found in the screening phase were lower than expected from published figures, the original design was then adapted to include singletons in a pre-post design. A second screening phase included twins as well as singletons. Subjects were included if they had reached the age of 18 and had a positive H. pylori finding in the 13C urea breath test (Helicobacter Test INFAI®, Δδ ≥ 4 %). Informed consent was obtained from all persons for being included in the study. Additional informed consent was obtained from all patients for which identifying information is included in this article. Exclusion criteria were any medication interfering with the action of the lactobacilli, previous surgical procedures affecting stomach or small intestine with potential interference with the study, e.g., gastrectomy or gastric bypass, diabetes type 1 or 2, familial lipid metabolism diseases, any other major disease, weight changes >3 kg over the last 3 months, pregnancy or lactation, alcohol or drug abuse, or psychiatric diseases.
Study Protocol
The study was approved by the local ethics advisory committee (Charité, Berlin, Germany) and was conducted according to the Declaration of Helsinki [26]. As this was not a clinical trial, it was not registered; at the time of the trial in Germany it was not customary for pilot-type trials to be registered. The test product (active ingredient) consisted of lyophilized dead cells of the strain DSM17648, prepared as solid tablets for oral application. Each tablet contained 5 × 10⁹ cells (determined by counting in a Neubauer chamber), and the daily dosage of four tablets translates into 2 × 10¹⁰ cells. Verum and placebo tablets were identical in weight (250 mg), size, color and flavor. Within concordant affected twin pairs, treatment was randomized in parallel for a time period of 14 days. In singletons, active treatment and placebo were given in a single-blinded non-randomized crossover design. The first period of 14 days was the placebo phase; after a second breath test, active treatment was given for another 14 days, followed by a breath test. Four to six weeks after the treatment phase, a follow-up breath test was conducted. Subjects were instructed to take two tablets after breakfast as well as after their evening meal. During the treatment phase, no lifestyle or dietary changes were to be initiated and no probiotic food products or cranberries were to be used. Subjects were asked to fill in a study-specific questionnaire to document well-being, any potential side effects, smoking, alcohol use, nutrition and medication.
Measurements
Detection of H. pylori infection in the screening phase and quantification of colonization to verify effects of the strain DSM17648 were accomplished by a breath test, as this diagnostic approach is best suited to screening as well as detection of intra-individual changes [27]. Helicobacter Test INFAI® is a breath test for direct noninvasive quantitative detection of the bacterium H. pylori [28]. The test is based on urease activity of H. pylori. Specificity (98.5 %) and sensitivity (97.9 %) of Helicobacter Test INFAI® are comparable to traditional invasive diagnostic methods (endoscopy or biopsy). As the breath test reflects the current status of colonization by H. pylori, it is well suited to detect reduction in or eradication of the bacteria [29,30].
The test is based on the hydrolysis of 13 C urea to ammonium and 13 C-enriched carbon dioxide, which is detectable in the breath. Patients ingest a small amount of the 13 C urea isotope. Carbon dioxide resulting from the degradation of urea contains this isotope, detectable by mass spectrometry. As there is a small amount of naturally occurring 13 C even in the absence of urease activity, breath samples are taken before and after the ingestion of 13 C urea. If there is no difference, the test is negative, indicating no infection with H. pylori. There is a quantitative relation between urease activity and amount of 13 C in breath that indirectly relates to the level of colonization by H. pylori.
Statistics
All historical and clinical data were entered into a dedicated trial database. Statistical analysis was conducted using SPSS version 16.0.2. We computed differences in 13C urea breath test (UBT) values against initial measurements: ΔActive = 13C UBT Active − 13C UBT Initial, ΔPlacebo = 13C UBT Placebo − 13C UBT Initial, ΔWashout = 13C UBT Wash-out − 13C UBT Initial. Additionally, the absolute test values between the various study time points were compared: 13C UBT Initial, 13C UBT Verum (after 14-day verum treatment), 13C UBT Placebo (after 14-day placebo treatment), 13C UBT Wash-out (4-6 weeks after verum treatment).
All data were tested for deviations from normal distribution by Kolmogorov-Smirnov test. Mean differences were computed by pairwise t test. Potential relations between response to treatment and initial level of colonization were explored by linear regression. An error level of 5 % was set as threshold for significance. Results are reported as mean ± standard deviation (SD); figures present the standard error of the mean (SEM).
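The statistical workflow described above (intra-individual Δ values, a normality check and pairwise t tests) can be sketched in a few lines of Python. The numbers below are invented for illustration, and the normality test is applied to standardized differences as a simple stand-in for the study's Kolmogorov-Smirnov procedure in SPSS.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
ubt_initial = np.array([14.1, 9.0, 22.5, 8.2, 30.1, 12.4, 18.0, 7.5, 25.3, 11.0])  # hypothetical values
ubt_placebo = ubt_initial + rng.normal(0.0, 3.0, size=ubt_initial.size)
ubt_verum   = ubt_initial - 5.0 + rng.normal(0.0, 3.0, size=ubt_initial.size)

delta_placebo = ubt_placebo - ubt_initial   # ΔPlacebo
delta_verum   = ubt_verum   - ubt_initial   # ΔActive / Δverum

z = (delta_verum - delta_verum.mean()) / delta_verum.std(ddof=1)
print(stats.kstest(z, "norm").pvalue)                      # deviation-from-normality check
print(stats.ttest_rel(ubt_verum, ubt_initial).pvalue)      # verum vs. initial
print(stats.ttest_rel(delta_verum, delta_placebo).pvalue)  # Δverum vs. Δplacebo
```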
Co-aggregation Analysis of Lact. reuteri DSM17648
Lactobacilli that co-aggregate H. pylori were sought among a large Lactobacillus strain collection. The in-house, private strain collection has been assembled from wild-type strains of diverse origin, such as food sources, plants, vegetables or human skin. Strains are classified according to the physiological characteristics prior to being included in the screening process. Among 700 Lactobacillus strains tested, only eight were found to co-aggregate with spiral forms of H. pylori strain DSM21031, without exhibiting any auto-aggregation (Fig. 1). Three of the co-aggregating lactobacilli (strains DSM17648, DSM17647 and DSM17651) were identified as Lact. fermentum (API method). One of these-Lact. fermentum DSM17648 [classified as Lact. reuteri by 16 S-rDNA sequencing and sequence alignment (100 % identical to accession numbers CP000705, CP006603, CP006011 (at 99 % coverage))]was analyzed in depth (Table 1). Numerous other Lact. fermentum and Lact. reuteri strains were tested in parallel for auto-aggregation and co-aggregation under the described conditions. None of them formed co-aggregates with H. pylori. Lactobacillus and Helicobacter strains did not auto-aggregate (Fig. 1). To confirm that both species were present in the aggregates, cells were stained separately using either hexidium iodide or carboxyfluorescein diacetate succinimidyl ester. Both the strain DSM17648 and H. pylori DSM21031 participated in the aggregation (Fig. 1). Co-aggregation occurs within seconds after mixing the strains. Quantification of co-aggregate formation between Lact. reuteri DSM17648 and H. pylori DSM21031 by flow cytometry (Fig. 2) shows that one Lactobacillus cell binds 2-3 Helicobacter cells. Interestingly, the co-aggregation activity is preserved during lyophilization or spray drying of whole cells of Lact. reuteri DSM17648 and persists in non-viable cells (Table 3). Spray-dried or lyophilized cells of strain DSM17648 induced co-aggregate formation with the same sensitivity as untreated cells. Expression of coaggregation activity is dependent on the growth phase of Lact. reuteri DSM17648, and it is present at entry into stationary growth and during stationary phase. SEM images of co-aggregates were prepared to analyze cellular sites of the attachment. Figure 3 shows that single cells of the strain DSM17648 bind several H. pylori cells resulting in cross-linking of the co-aggregates. Binding sites on the cells of the strain DSM17648 appear evenly distributed over the cell surface, and binding sites on H. pylori cells do not seem to be present on flagellar structures.
As the interaction between cells of Lact. reuteri DSM17648 and H. pylori DSM21031 involves cell surfaces, we tested for possible interference by surface-modulating treatments. Co-aggregation persists in the presence of sugars (sucrose, lactose, glucose, fructose, maltose, isomaltose, and sorbitol). It occurs at comparable efficiency at room temperature and at 37 °C, and co-aggregation activity is observed over a wide pH range (from pH 2.0, corresponding to empty stomach conditions, up to pH 8, including typical pH values after meals). No pure cultures evidenced auto-aggregation within this pH range. Slightly smaller co-aggregates formed at pH 2 compared with pH 8 in vitro. Thus, aggregation of H. pylori by Lact. reuteri DSM17648 occurs at pH values and conditions encountered in the human stomach. The susceptibility to protease inactivation of the co-aggregation determinants on the surfaces of both the strain DSM17648 and H. pylori DSM21031 was tested after treatment with protease Strep. griseus Type XIV, proteinase K, trypsin or pepsin. Incubating Lact. reuteri DSM17648 with any protease before co-aggregation reduced binding to H. pylori DSM21031 by 30 %, but did not eliminate it completely. H. pylori required pretreatment with the protease pepsin (as is naturally present in gastric fluids) to be fully active in co-aggregation with the strain DSM17648.
Fig. 2 Co-aggregation of Lact. reuteri DSM17648 with H. pylori DSM21031 was analyzed by flow cytometry (e). H. pylori cells were CFDA stained. Samples were analyzed using flow cytometry, and cell co-aggregation was quantified by determining the events with a high FL (>5 × 10², area within green frame). Co-aggregation was not observed when strains were analyzed separately (a-c) nor when a non-aggregating Lactobacillus strain was used as a control (d).
Fig. 3 Scanning electron microscopy of co-aggregates of Lact. reuteri DSM17648 (blue) and H. pylori (red); a ×1,800 magnification, b ×11,000 magnification. Some images were colorized according to the bacillary or spiral shape to facilitate viewing.
Lactobacillus reuteri DSM17648 does not co-aggregate with common non-Helicobacter members of the human flora. Neither the major intestinal commensals nor C. jejuni detectably co-aggregate with Lact. reuteri DSM17648 (Table 2), and no auto-aggregation was observed. Coaggregation is active with different H. pylori strains (type I and type II strains) as well as with H. heilmannii strains (type I and type II) and with H. canis of animal source. Thus, the strain DSM17648 specifically co-aggregates H. pylori without interfering with other bacteria of the commensal intestinal flora.
Pilot Study
The strain DSM17648 was used in a placebo-controlled pilot study to evaluate the effect of the strain DSM17648 in asymptomatic Helicobacter-positive test persons after a two-week application. Screening included 128 subjects, 47 twin pairs and 34 singletons; 27 subjects had a positive breath test result. Overall Helicobacter prevalence was 21 %; 6 twin pairs were concordant and 10 pairs discordant positive. Fourteen independent treatments were started with no dropouts during the trial phase. All 6 concordant twin pairs participated in the study, as well as 4 discordant twin pairs and 4 singletons. Due to the large inter-individual variability of quantitative measures of colonization (13C UBT Initial), analysis of H. pylori reduction by the strain DSM17648 was primarily based on intra-individual changes after active treatment or placebo (Δverum vs. Δplacebo). Treatment by placebo did not result in a significant change in 13C UBT (Δplacebo −0.6 ± 5.3), whereas verum treatment significantly reduced 13C UBT values (Δverum −4.9 ± 7.8, p = 0.026 vs. placebo), indicating significant reduction in H. pylori. Absolute values of 13C UBT at baseline measurement, after placebo and after verum treatment were 14.1 ± 9.9, 12.7 ± 7.2 (ns vs. initial) and 11.9 ± 5.9 (p vs. initial 0.01, p vs. placebo 0.03), respectively. To allow for a detailed evaluation of response to the strain DSM17648, individual values for 13C UBT are plotted in Fig. 4. After verum treatment, the majority of subjects showed a reduction in H. pylori colonization. Responses showed some variability, from no reduction to a delta of more than 20. In comparison, after 2 weeks of placebo, some subjects had lowered values while others had increases of the same magnitude, indicating no systematic effect.
Values of 13C UBT after wash-out (x ± y) are not significantly different from the verum treatment values; the effect of reduced Helicobacter values thus lasts beyond the actual treatment period. There was some dependency of treatment response on initial values (r² = 0.66, p = 0.01, Fig. 5): with increasing level of colonization, the lowering effect caused by the strain DSM17648 becomes stronger. For placebo treatment, the same dependency was found, albeit to a lesser degree (r² = 0.35, p = 0.02), probably reflecting regression-to-the-mean effects. A direct placebo effect on the immune response cannot be ruled out, but this potential effect is significantly smaller than the specific action of the strain DSM17648. During the course of the study, the questionnaires indicated no changes in lifestyle, e.g., in physical activity or diet, nor in health. No side effects were reported in either study group.
Discussion
The specific and fast co-aggregation of defined Lactobacillus strains of the species Lact. reuteri/fermentum, Lact. Co-aggregation activity is not affected by sugars and pH (over a wide range between 2 and 8), and it requires a pepsin pretreatment of spiral form cells of H. pylori. It does not occur with far less infective coccoidal H. pylori cells [31], and it is dependent on a Lact. reuteri DSM17648 cell surface factor that is present at the end of the exponential growth phase and during stationary phase. Co-aggregation between H. pylori and the strain DSM17648 occurs within seconds. While Chen et al. [32] observed a very slow interaction between some lactobacilli from food source, the present paper is to our knowledge the first description of a rapid and efficient co-aggregation of H. pylori by a specific Lactobacillus strain under gastric conditions. It is proposed that Lact. reuteri DSM17648 interferes with mobility of H. pylori and its adherence to the gastric mucosa by entangling the cells into aggregates and masking H. pylori surface sites that are ordinarily available for binding to human epithelium. Once bound, co-aggregates will be flushed from the stomach by natural bowel movement. Interestingly, the aggregation activity was preserved when the cells were killed by freeze drying or spray drying. It can be assumed that binding is due to specific surface molecules on the Lact. reuteri DSM17648 cells which are strain specific and are resistant to such process steps. Such surface molecules might include lipoteichoic acid and carbohydrate structures. This novel anti-H. pylori activity has not been described previously as a mode of action for probiotic treatment of H. pylori infections. This hypothesis was tested in a proof-ofconcept in vivo study. Our data demonstrate the significant decrease in H. pylori load by a two-week application of Lact. reuteri DSM17648 in healthy subjects with detectable H. pylori infection in a general population sample. The principal outcome criterion was the reduction in H. pylori as measured by 13 C urease breath test (Helicobacter Test INFAI Ò ) after a 14-day supplementation period of Lact. reuteri DSM17648 at a daily dose of 2 9 10 10 non-viable lyophilized cells. Data obtained in a parallel clinical study support the data reported in this paper [33]. Previous studies with non-specific probiotics require the application of live microorganisms while Lact. reuteri DSM17648 is active as non-viable cell preparation. This will greatly reduce any potential side effects and will help ensure stable activity in a potential consumer product and in pharmaceutical and medicinal formulations.
Our study reveals a novel Lact. reuteri strain (the strain DSM17648) that features unique properties as it specifically aggregates with planktonic H. pylori in the stomach. Freeze-dried (and spray-dried) preparations significantly reduce the H. pylori load (measured by urease breath test) after a 14-day oral treatment period in H. pylori-positive test persons.
Lact. reuteri strain DSM17648 can become a central part of a strategy to avoid antibiotics and to combat antibiotic resistance in H. pylori infections by reducing the H. pylori load, either as a prophylactic food additive or as a medical cure to treat H. pylori-induced stomach diseases. | 2017-07-14T12:23:18.195Z | 2014-12-07T00:00:00.000 | {
"year": 2014,
"sha1": "5eb4a381179c2bd102627c8fa97f8afda17bda07",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12602-014-9181-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "5eb4a381179c2bd102627c8fa97f8afda17bda07",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
248609985 | pes2o/s2orc | v3-fos-license | PRDX2 Knockdown Inhibits Extracellular Matrix Synthesis of Chondrocytes by Inhibiting Wnt5a/YAP1/CTGF and Activating IL-6/JAK2/STAT3 Pathways in Deer Antler
Although peroxiredoxin 2 (PRDX2) plays a vital role in relieving oxidative stress, its physiological function in cartilage development remains almost unknown. In this study, we found that the expression of PRDX2 significantly increased in the chondrocytes compared with pre-chondrocytes. PRDX2 knockdown significantly decreased the expression of extracellular matrix (ECM) protein (Col2a and Aggrecan), which led to blocked cartilage formation. Moreover, PRDX2 knockdown also inhibited the expression of connective tissue growth factor (CTGF). CTGF is an important growth factor that regulates synthesis of ECM proteins. We explored the possible regulatory mechanism by which PRDX2 regulated the expression of CTGF. Our results demonstrated that PRDX2 knockdown downregulated the expression of CTGF by inhibiting Wnt5a/Yes-associated protein 1 (YAP1) pathway. In addition, PRDX2 knockdown promoted the expression of interleukin 6 (IL-6), indicating PRDX2 expression had an anti-inflammatory function during antler growth. Mechanistically, PRDX2 knockdown promoted cartilage matrix degradation by activating the IL-6-mediated Janus Kinase 2/Signal Transducer and Activator of Transcription 3 (JAK2/STAT3) signaling pathway. These results reveal that PRDX2 is a potential regulator that promotes cartilage extracellular matrix synthesis.
Introduction
Deer antler is a bony organ that grows externally, is easy to observe and can periodically regenerate. Antler cartilage is formed by endochondral ossification [1]. During this process, the antler tip grows through the transformation of the perichondrium into mesenchymal cells, which differentiate into pre-chondrocytes and finally into chondrocytes (Figure 1) [2,3]. During early chondrogenesis, chondrocytes continue to divide and specifically express ECM proteins including type II collagen alpha (Col2a), cartilage oligomeric matrix protein (COMP) and Aggrecan, which promote cartilage ECM formation [4]. In the meantime, some chondrocytes exit the cell cycle and differentiate further into hypertrophic chondrocytes. In hypertrophic chondrocytes (during late chondrogenesis), Runt-related transcription factor 2 (Runx2) is dominantly expressed and regulates type X collagen alpha (Col10a) expression [5]. Runx2 interacts with Osterix to induce the expression of matrix metalloproteinase 13 (MMP13), which results in calcification of the matrix [6].
The ECM, the living environment of chondrocytes, is a complex network composed of collagen, chondroitin sulfate, proteoglycans and various signaling molecules. It provides chondrocytes with a three-dimensional pore structure to facilitate their attachment, proliferation and secretion of the matrix [7,8]. Once the cartilage extracellular matrix is degraded, it can cause cartilage diseases such as osteoarthritis [9]. Therefore, exploring the mechanisms that regulate cartilage ECM synthesis and maintenance is of great significance.
Antler Chondrocytes Can Highly Express a Variety of Peroxiredoxins, and the Expression of PRDX2 Upregulates as the Chondrocytes Mature
Although the growth rate of an antler is extremely quick, the consumption of large amounts of oxygen does not cause oxidative stress. We analyzed the expression levels of the PRDX family proteins with qRT-PCR in antler chondrocytes. We found that PRDX1-6 were expressed in the antler chondrocytes, and the expression levels of PRDX1 and PRDX2 were high (Figure 2A). In a previous study, we successfully isolated and identified antler pre-chondrocytes and chondrocytes [36]. Here, we compared the expression levels of PRDXs between the pre-chondrocytes and the chondrocytes. Chondrocytes highly express Col2a (a chondrocyte marker molecule) compared to pre-chondrocytes (Figure 2B). We found that the expression of PRDX2 significantly increased as the chondrocytes matured (Figure 2C). We detected the location and expression of PRDX2 in the pre-cartilage layer and cartilage layer with immunohistochemistry staining. PRDX2 protein showed a strong signal in the cartilage layer of deer antler (Figure 2D). PRDX2 is a member of the peroxiredoxin family, which protects cells from oxidative stress by removing H2O2 and controlling reactive oxygen species (ROS) levels [10]. Furthermore, PRDX2 is also involved in mediating other biological functions. It has been reported that PRDX2, as a potential inflammatory mediator, regulates the inflammatory response [11]. The high expression of PRDX2 is also related to the survival and proliferation of cancer cells, including gastric cancer, lung cancer and colon cancer [12,13]. A recent study has indicated that PRDX2 mediates atherosclerosis progression [14]. This also implies that PRDX2 has a wide range of biological activities in different types of cells. However, the biological function of PRDX2 in cartilage development remains unexplored.
Wnt signaling plays a significant role in various biological processes, such as angiogenesis, tumor growth, immune response and cartilage development [15][16][17][18]. The canonical Wnt pathway relies on the stabilization and nuclear translocation of β-catenin to regulate the expression of downstream genes. The activation of the canonical Wnt signal prevents the degradation of β-catenin by the APC complex, and β-catenin accumulates in the cytoplasm, thus realizing the transfer from the cytoplasm to the nucleus [19]. The non-canonical Wnt signaling pathway has multiple regulatory mechanisms, including the Wnt/mTOR, Wnt/JNK, Wnt/Ca + and Wnt/YAP pathways, playing important roles in regulating cell growth, differentiation and apoptosis [20][21][22][23].
In recent years, it has been discovered that the Wnt/YAP1 signaling pathway can mediate various biological effects. YAP acts as a downstream effector of Wnt proteins to alter the Wnt signaling pathway, in which Wnt5a/b can induce the activation of YAP/TAZ signaling [24]. YAP1, a nuclear transcription factor and the key mediator of the Hippo signaling pathway, plays an important role in regulating cell proliferation, differentiation and survival [25,26]. Previous studies have shown that YAP1 can bind to TEADs in the nucleus and act as a transcriptional co-activator to regulate the expression of target genes, such as CYR61 and CTGF [27,28].
The CTGF/CCN2 (connective tissue growth factor) is the second member of the CCN family implicated in cell proliferation, differentiation and ECM production [29]. Many in vitro and in vivo studies have shown that CTGF as an osteogenesis-related protein plays an important role in regulating endochondral ossification [30]. CTGF promotes new bone formation by regulating extracellular matrix accumulation and intramembranous osteogenesis [31]. Loss of CTGF can cause severe chondrodysplasia [32]. In addition, IL-6 is a key inflammatory factor that regulates immune and inflammatory responses [33]. Previous studies have shown that inflammation is a common cause of cartilage destruction. The IL-6-mediated JAK2/STAT3 signaling pathway can induce chondrocyte apoptosis and MMP expression [34,35].
In this study, we analyzed the expression levels of PRDX family members in pre-chondrocytes and chondrocytes and found that, as antler chondrocytes matured, PRDX2 expression increased significantly. It remains unclear whether PRDX2 is involved in the process of antler cartilage development.
PRDX2 Knockdown Inhibits the Synthesis of Cartilage Matrix Proteins
PRDX2 expression significantly increased in the chondrocytes compared with pre-chondrocytes, and we proposed that it may mediate early chondrogenesis. To determine whether PRDX2 mediates chondrogenesis, the interference fragment targeting the PRDX2 gene was transfected into proliferating chondrocytes of deer antler. The interference efficiency of PRDX2 was detected using Western blot analysis and qRT-PCR (Figure 3A,B). PRDX2 knockdown significantly inhibited the expression of cartilage matrix proteins (Col2a, Aggrecan and COMP) compared with the negative control (Figure 3C). In addition, the results from toluidine blue staining and Alcian blue staining showed that PRDX2 knockdown reduced the synthesis of glycosaminoglycan, which inhibited the formation of the cartilage extracellular matrix (Figure 3D).
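The paper reports qRT-PCR fold changes (for example, for PRDX2 and the matrix genes) but does not spell out its quantification formula, so the widely used Livak 2^-ΔΔCt method is sketched below only as an illustration; the Ct values and the reference gene are hypothetical.

```python
def relative_expression(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Livak 2^-ΔΔCt relative expression of a target gene, normalized to a reference gene
    and expressed relative to a control condition."""
    d_ct_sample = ct_target_sample - ct_ref_sample     # ΔCt in the treated/sample group
    d_ct_control = ct_target_control - ct_ref_control  # ΔCt in the control group
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Hypothetical Ct values: target gene vs. a housekeeping gene, siPRDX2 vs. negative control
print(relative_expression(26.0, 17.0, 24.0, 17.2))  # < 1 means lower expression after knockdown
```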
PRDX2 Knockdown Inhibits the Expression of CTGF
In vivo and in vitro studies have shown that CTGF plays a vital role in regulating the synthesis of the ECM and maintaining chondrocyte phenotype. We investigated whether PRDX2 regulated the synthesis of cartilage matrix proteins through the transcription of CTGF. We found that PRDX2 knockdown significantly inhibited the expression of CTGF ( Figure 4A,B).
PRDX2 Knockdown Inhibits the Expression of Wnt5a and Promotes Nuclear Translocation of β-Catenin
The Wnt/β-catenin pathway is a key regulator during chondrogenesis. To further explore how PRDX2 regulated the expression of CTGF, we evaluated whether PRDX2 regulated the expression of CTGF through activating the Wnt/β-catenin signaling pathway. We found that PRDX2 knockdown significantly decreased Wnt5a expression but increased β-catenin activity (Figure 5A). The immunofluorescence results showed that after PRDX2 knockdown, β-catenin had a strong signal in the nucleus (Figure 5B). In addition, PRDX2 knockdown significantly inhibited phosphorylation of GSK-3β and enhanced GSK-3β activation (Figure 5C,D). These results indicate that PRDX2 is involved in Wnt5a pathways (independent of β-catenin). Localization of p-GSK3β was determined by immunofluorescence staining; the nuclei were stained with DAPI (blue), and p-GSK3β was detected with CY3 (red). Scale bar, 20 μm. The data include the means ± SD of three independent experiments. * p < 0.05.
PRDX2 Knockdown Reduces the Expression of CTGF by Inhibiting the Activity of YAP1
Previous studies have indicated that Wnt5a can induce YAP/TAZ activation to change the canonical Wnt signaling pathway [24,37]. Here, we performed Western blot analysis to examine the transcription activity of YAP1 after PRDX2 knockdown. Treatment with siPRDX2 significantly increased the expression of p-YAP1 and decreased the nuclear translocation of YAP1 (Figure 6A). The immunofluorescence results also showed that after PRDX2 knockdown, YAP1 had a weak signal in the nucleus compared with the negative control (Figure 6B). Next, the interference fragment targeting the YAP1 gene was transfected into chondrocytes. The protein level of YAP1 was significantly reduced compared with the negative control (Figure 6C,E). The expression of CTGF was significantly reduced after YAP1 knockdown by q-PCR (Figure 6D). These results indicate that PRDX2 regulated the expression of CTGF depending on the transcriptional activity of YAP1.
Figure 6 (legend excerpt). Localization of YAP1 was determined by immunofluorescence staining; the nuclei were stained with DAPI (blue), and YAP1 was detected with CY3 (red); scale bar, 20 µm. (C) The protein level of YAP1 was decreased after treatment with YAP1 siRNA for 72 h by Western blot. (D) The CTGF expression level was decreased after treatment with YAP1 siRNA for 72 h by q-PCR. (E) The interference efficiency of YAP1 was examined by immunofluorescence staining; YAP1 was detected with CY3 (red); scale bar, 20 µm. The data include the means ± SD of three independent experiments, * p < 0.05, ** p < 0.01, *** p < 0.001.
PRDX2 Knockdown Activates IL6-Induced JAK2/STAT3 Signaling Pathway in Chondrocytes
The JAK2/STAT3 pathway plays an important role in osteoarthritis (OA) pathogenesis. Activation of the JAK2/STAT3 pathway can accelerate the degradation of the cartilage matrix. Figure 7A shows that PRDX2 knockdown promotes the expression of IL-6. As expected, PRDX2 knockdown upregulated the phosphorylation levels of JAK2 and STAT3 (Figure 7B,C). These results indicate PRDX2 knockdown induced an inflammatory response and inhibited the expression of cartilage matrix proteins through activating the IL6-induced JAK2/STAT3 signaling pathway.
Discussion
PRDX2 is a very powerful ROS-scavenging protein compared with other PRDX members involved in important physiological functions. In this work, we found that PRDX2 was expressed at different stages of chondrogenesis. PRDX2 expression was significantly increased in chondrocytes compared with pre-chondrocytes. PRDX2 knockdown inhibited the expression of Col2a, Aggrecan, COMP and reduced the synthesis of glycosaminoglycan, indicating that PRDX2 was involved in the cartilage ECM formation and maintained chondrocyte phenotype. Furthermore, PRDX2 knockdown markedly increased the expression of IL-6, indicating PRDX2 expression had an anti-inflammatory function during antler growth.
Previous studies have shown that CTGF is a significant matricellular protein that regulates cartilage growth [38][39][40]. Moreover, exogenous CTGF treatment increases the expression of cartilaginous matrix proteins, such as Col2a and Aggrecan, and promotes chondrocytes proliferation [41]. Here, we found that PRDX2 knockdown inhibited the expression of CTGF. PRDX2 involvement in regulating the cartilage matrix protein expression may depend on the activity of CTGF. We further explored the relationship between PRDX2 and CTGF as well as possible regulatory mechanisms. Previous studies have shown that YAP1 can bind to TEADs in the nucleus and act as a transcriptional co-activator to regulate the expression of target genes, such as CYR61 and CTGF [27,28]. We suspect that PRDX2 may affect the transcription of YAP1 in antler chondrocytes. Here, we observed that PRDX2 knockdown significantly promoted YAP1 phosphorylation and inactivation. In addition, relevant studies have shown that inhibition or downregulation of GSK-3β can promote nuclear aggregation of YAP1 [42]. PRDX2 knockdown upregulated the expression of GSK-3β ( Figure 4C). These results suggested that PRDX2 regulated the expression of CTGF by activating the YAP1 signaling pathway. However, it is unclear how PRDX2 is associated with YAP1, and further study is required to explore this.
Non-canonical Wnt signaling mediates a variety of physiological functions, including embryonic development, cell differentiation and inhibition of canonical Wnt signaling [43,44]. Wnt5a has traditionally been considered a common ligand in non-canonical Wnt signaling. Wnt5a-driven signaling is able to inhibit canonical Wnt signaling by inhibiting the expression of Wnt target genes [24]. Ror2 receptor, as a major receptor of Wnt5a, plays a vital role in mediating non-canonical Wnt signaling. Wnt5a can inhibit the transcriptional activity of β-catenin by activating the phosphorylation of the Ror2 receptor [45,46]. We found that PRDX2 knockdown inhibited the Wnt5a expression and increased the β-catenin accumulation ( Figure 4A). Obviously, Wnt5a inhibited the transcriptional activity of β-catenin. In addition, related research results show that Wnt5a can act as an activator of YAP/TAZ signaling, thereby altering the Wnt signaling pathway [24]. Our results indicated that PRDX2 activated the YAP/TAZ signal through Wnt5a in the chondrocytes of deer antler.
CTGF has been identified as a vital matricellular protein that promotes synthesis of cartilage matrix proteins including Col2a and Aggrecan [41]. Degradation of the extracellular matrix has long been a hallmark of arthritic diseases, and inflammatory responses are the main cause of a series of structural damage [47]. A large number of inflammatory mediators including IL-6, TNF-α and IL-1β were detected in osteoarthritis [48]. In vitro and in vivo studies have shown that the JAK2/STAT3 pathway plays vital roles in osteoarthritis (OA) [35,49]. The activation of the JAK2/STAT3 pathway resulted in the production of matrix metalloproteinases (MMPs), inducible nitric oxide synthase (iNOS) and cyclooxygenase-2, which would lead to cartilage destruction [34]. IL-6 is a well-known target protein upstream of the JAK2/STAT3 signaling pathway, and it can regulate articular cartilage degradation in OA [34,50]. PRDX2 knockdown inhibited the expression of cartilage matrix proteins by activating the IL6-induced JAK2/STAT3 signaling pathway in chondrocytes.
It is well known that cartilage disruption and loss of the extracellular matrix are the most distinctive features of OA [51]. Counteracting cartilage degradation by stimulating chondrocyte proliferation and extracellular matrix synthesis is a potential treatment for OA. In addition, the results of a study showed that PRDX2 expression showed lower levels in the cartilage tissue of OA patients [52]. In this study we found that PRDX2 knockdown downregulated the expression of cartilage matrix proteins. We reported that PRDX2 regulated CTGF expression through the Wnt5a/YAP1 pathway. Our data indicated that PRDX2 promoted extracellular matrix formation by inhibiting the IL-6/JAK2/STAT3 pathway. However, we only explored the effect of cartilage matrix protein expression by knocking down the expression of PRDX2. Whether overexpression of PRDX2 can promote the formation of the cartilage ECM is unclear. In addition, whether PRDX2 expression regulates chondrocyte differentiation remains to be further investigated.
Antler Issue Collection and Cell Culture
Deer antler samples (four antlers) were collected (Jinsanxin Farm, Wuhan, China). These sika deer (Cervus nippon) are native to the northeastern part of China. The tips of antler tissues (about 5 cm), taken about 60 days after casting from healthy two-year-old deer, were dissected into different layers (mesenchyme layer, pre-cartilage, and cartilage layer) separately as described previously [2]. Dissected tissues were cut up in DMEM/high glucose medium (Hyclone, GE Healthcare, Logan, UT, USA), centrifuged at 1000 rpm for 2 min and digested with 0.2% collagenase II (Sigma-Aldrich, Marlborough, MA, USA) for 30 min. The digested tissue layers were filtered through a cell strainer (pore size: 100 µm), and the cell suspension was centrifuged at 1300 rpm for 4 min. The cell pellets were resuspended in DMEM/high glucose medium containing 10% FBS (fetal bovine serum). Antler cells were cultured at 37 °C with 5% CO2. Finally, cell cryopreservation solution was added to freeze the cells in liquid nitrogen for long-term storage.
Immunohistochemistry Staining
For immunohistochemistry staining, antigen-retrieved, paraffin-embedded tissue was exposed to 3% hydrogen peroxide in the dark and then washed with PBS 3 times for 5 min each. Paraffin sections were blocked using PBS with goat serum for 40 min and incubated with PRDX2 antibody (1:500) (Abcam, ab109367) at 4 °C overnight. Subsequently, the paraffin sections were washed with 1X PBS 3 times and incubated with goat anti-rabbit IgG at 37 °C for 30 min, followed by PBS washing. Before the paraffin sections were sealed with neutral mounting medium, they were counterstained using hematoxylin for 2 min, washed with ddH2O, differentiated in 1% hydrochloric acid alcohol for a few seconds, rinsed with ddH2O, blued with ammonia water, and rinsed with ddH2O again.
RNA Interference
Antler chondrocytes were seeded in cell culture plates up to 60-70% confluence. The cells were transfected with 100 nM siRNA using Lipofectamine RNAiMAX Reagent in Opti-MEM medium (Life Technologies, Inc., Carlsbad, CA, USA) according to the manufacturer's instructions, with a negative control siRNA transfected in parallel. After 48 h of transfection, cells were harvested for mRNA or protein expression analysis. The siRNA sequences used in this study were as follows. PRDX2: 5′-AGGAAUAUUUCUCCAAACATT-3′; YAP1: 5′-GGUGACACUAUCAACCAAATT-3′.
Total RNA and Quantitative Real-Time PCR (qRT-PCR)
Post-transfection (48 h), total RNA from antler cells was extracted using an RNA kit (Cat R6834-02, Omega Bio-Tek, Norcross, GA, USA) according to the kit instructions. Subsequently, RNA purity was assessed from the 260/280 absorbance ratio. RNA with an absorbance ratio of 1.8-2.1 was reverse transcribed into cDNA with a first-strand cDNA synthesis kit (Cat KR118-02, TIANGEN Biotech, Beijing, China). Relative gene expression was analyzed using the 2^(−ΔΔCt) method. All the targeted primers were designed with Primer 5.0 software (Primer Biosoft, Palo Alto, CA, USA) (Table 1).
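To make the relative-quantification step concrete, the following is a minimal sketch (not the authors' analysis code) of the 2^(−ΔΔCt) calculation; the Ct values, the choice of CTGF as target and a housekeeping gene as reference are hypothetical placeholders.

```python
import numpy as np

def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression (fold change) by the 2^-ddCt method.

    Each argument is a list/array of replicate Ct values.
    """
    # Normalize target to the reference gene within each group (dCt)
    dct_treated = np.mean(ct_target_treated) - np.mean(ct_ref_treated)
    dct_control = np.mean(ct_target_control) - np.mean(ct_ref_control)
    # ddCt: treated relative to control, then fold change = 2^-ddCt
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# Hypothetical example: CTGF Ct values after siPRDX2 vs. negative control,
# normalized to a housekeeping gene; all numbers are illustrative only.
fold = ddct_fold_change(
    ct_target_treated=[26.1, 26.3, 26.0], ct_ref_treated=[17.2, 17.1, 17.3],
    ct_target_control=[24.5, 24.4, 24.6], ct_ref_control=[17.1, 17.2, 17.0])
print(f"Fold change vs. control: {fold:.2f}")
```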
Immunofluorescence Assay
Cells were grown on circular glass coverslips in 24-well plates, fixed with 4% paraformaldehyde (15 min) and then permeabilized using Triton X-100 (3 min). After washing with PBS, coverslips were blocked with 5% BSA for 30 min and then incubated with primary antibodies at 4 °C overnight. Cell coverslips were washed 3 times with PBST and incubated with FITC- or CY3-conjugated secondary antibodies (Servicebio, Wuhan, China). Lastly, counterstaining of the nucleus was performed with DAPI (Servicebio, Wuhan, China).
Toluidine Blue and Alcian Blue Staining
The adherent cells were washed 3 times with phosphate buffered saline and fixed with 4% paraformaldehyde at room temperature for 15 min. The cells were then incubated at room temperature with prepared toluidine blue solution (Servicebio, Wuhan, China) for 2 h and washed with distilled water 3 times. In parallel, cells were incubated with prepared 0.3% Alcian blue solution (Servicebio, Wuhan, China) at room temperature for 2 h and then washed with distilled water. Finally, images were collected with an upright microscope.
Statistical Analysis
Analysis of data was performed using GraphPad Prism 5.0 software (GraphPad Software, Inc., San Diego, CA, USA). Data were summarized as mean ± standard deviation (SD), and a significant difference between two groups was determined using the t test. p < 0.05 indicated a statistically significant difference.
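As a minimal illustration of the comparison described above (not the authors' scripts), the sketch below runs an unpaired two-tailed t test on two groups of replicate measurements and reports mean ± SD; the group names and numbers are placeholders.

```python
import numpy as np
from scipy import stats

# Placeholder data: e.g., normalized CTGF expression in siNC vs. siPRDX2 cells
group_a = np.array([1.00, 1.05, 0.97])   # negative control
group_b = np.array([0.55, 0.61, 0.58])   # PRDX2 knockdown

for name, g in [("siNC", group_a), ("siPRDX2", group_b)]:
    print(f"{name}: mean = {g.mean():.2f}, SD = {g.std(ddof=1):.2f}")

t_stat, p_val = stats.ttest_ind(group_a, group_b)  # unpaired, two-tailed
print(f"t = {t_stat:.2f}, p = {p_val:.4f}  (significant if p < 0.05)")
```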
Conclusions
Our results indicate that PRDX2 has a positive effect on chondrogenesis. PRDX2 expression can effectively promote the production of extracellular matrix proteins. Mechanistically, on the one hand, PRDX2 regulated the expression of CTGF through the Wnt5a/YAP1 pathway. On the other hand, PRDX2 suppressed the inflammatory response by inhibiting the IL-6/JAK2/STAT3 pathway in antler chondrocytes, which facilitated the formation of the cartilage extracellular matrix.
Author Contributions: X.S. designed and performed the experiments. L.Y. and J.X. provide technical assistance and supervised this study; X.G. analyzed data; J.P. and X.Z. provided reagents; X.S. helped with images and the initial draft, Z.R. edited and revised the manuscript. All authors have read and agreed to the published version of the manuscript. | 2022-05-10T15:03:14.111Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "ae6a5cc229308607c0552c749b699558c6f21bf0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/23/9/5232/pdf?version=1651923107",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7fa74e2c76a133c9822950cdf9db6a4a9ecce88c",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
271334812 | pes2o/s2orc | v3-fos-license | Dissociating prosodic from syntactic delta activity during natural speech comprehension
of low-frequency activity (below 2 Hz) bilaterally and independently of prosodic cues. Importantly, low-frequency alignment at chunk onsets increases the accuracy of an encoding model in bilateral auditory and frontal areas while controlling for the effect of acoustics. Our findings provide novel insights into the neural basis of speech perception, demonstrating that both acoustic features (prosodic cues) and abstract linguistic processing at the multi-word timescale are underpinned independently by low-frequency electrophysiological brain activity in the delta frequency range.
In brief
Chalas et al. use an acoustically manipulated audiobook and a word-bundle annotation to show that during listening, the linguistic processing of incoming speech (traced in frontal and auditory cortices) interacts with the acoustical processing of the input, as reflected in the low-frequency delta activity of auditory cortices.
INTRODUCTION
Natural speech consists of a rich spectrotemporal signal 1,2 from which, after a set of computations, 3,4 we gain access to language comprehension. Over the last decades, a plethora of studies investigated associations between brain activity and continuous speech during listening [10][11][12][13][14]. Mechanistically, this could arise either from intrinsic neural oscillations adjusting their phase and frequency or by concatenated evoked responses to certain features of the stimulus. Still, it remains unclear whether parsing auditory speech through rhythmical activity depends only on exogenous speech rhythms or also on rhythms arising from endogenous linguistic representations.
For speech, syllabicity occurs on a timescale of ~200 ms, 15,16 which aligns with the rhythmicity in the amplitude modulations of speech (around 5 Hz 17), and is hypothesized to be tracked by a theta-band oscillator at a similar timescale (4-7 Hz 11,18-20). However, on the timescale of word groups, such an alignment between acoustic components, linguistic elements, and brain activity is not clearly evident. One prominent example are prosodic phrases, which exhibit rhythmicity on the scale of ~1 s 21,22 and are proposed to be tracked accordingly by an acoustically driven oscillator (in the delta range; below 2 Hz 23-25). Acoustic and prosodic landmarks, such as speech pauses, have also been related to speech-brain alignment in the delta frequency band. 26,27 Previously, neural tracking of prosody has been reported at the delta timescale. 25 Yet, factorial experiments suggest that delta-band tracking of prosody may in part be confounded by neural activity that relates to the segmentation and content-level integration of abstract multi-word units. For example, the phase of ongoing delta activity was found to predict termination of multi-word chunks 28 by establishing a time constraint for segmentation in the absence of prosodic cues. 29 In this line, multi-word chunks could reflect cognitive processes of syntactic segmentation, semantic combinatorics, or conceptual composition, which would be evident at the beginnings or endings of the chunks during natural speech processing. Importantly, top-down entrainment in the delta range has been found to occur for auditory rhythmic tone patterns, lacking physical boundaries. 30 The most influential evidence of delta activity in the processing of multi-word chunks was reported in an artificially constructed paradigm in which isochronous syllables were combined hierarchically into phrases and sentences. 31 Spectral peaks in the brain signal, matching the timescale of the phrasal units, were considered as an indication of internal and contextually dependent processing (but see Kazanina and Tavano 32 for criticism on the role of neural oscillations in language processing). Similar findings were reported by incorporating more sophisticated paradigms. 33,34 Yet, again this line of research has also been criticized to confound effects stemming, at least partially, from non-syntactic (implicit prosodic) factors. 35,36 In fact, syntax and prosody are ontologically 37 and temporally 38 intertwined. It is thus not surprising that prosodic boundaries facilitate higher-level syntactic processing 39-41 and more prominently so in the early stage of language acquisition. 42 Given that in defining phrasal boundaries, acoustical properties and combinatorial processing are difficult to disentangle, it is unclear whether the attributed effect of syntactic processing echoes prosodic properties of the input (e.g., pauses or pitch contour), evident in low-frequency (below 2 Hz) brain activity. To this end, Kaufeld et al. contrasted compositional phrases with prosodically identical jabberwocky controls and found delta speech-brain alignment in linguistic content over and above prosodic cues. 43 So far, systematic research investigating such intrinsic delta activity with respect to higher linguistic processing during natural speech comprehension is lacking.
Here, we aim to address a fundamental mechanism of human speech processing: namely, how acoustical and combinatorial processing interact and shape perception during speech comprehension. For this, we aimed to disentangle the sensory processing of prosody from the segmentation and integration of multi-word chunks 46-49. We provide evidence that in spoken narratives, both acoustic cues and abstract multi-word chunks are encoded separately in the phase of delta-band auditory activity.
Prosodic delta speech tracking and gamma coherence are anti-correlated during listening
We aimed to disentangle sensory-driven encoding from contextually driven encoding of speech with a focus on the phrasal timescale. To this end, 30 healthy participants listened to a story in two parts while their brain activity was recorded non-invasively with magnetoencephalography (MEG). First, we focused on altering the distributional statistics of speech, which we anticipated to lead to speech-brain misalignment in the delta range. Thus, we first asked how the acoustic natural statistics of connected speech drive sensory-cortical processing. For this, we divided the story into two equal blocks: in the first block, the story was left intact (we will refer to it as the control condition), whereas in the other, we identified pauses (silences longer than 50 ms) that were in turn randomly prolonged or shrunk (jittered condition; Figure 1; see STAR Methods section for details). This manipulation altered the distribution of pauses, increasing the median of the overall length (median = 0.101 for control and median = 0.17 for jittered) and increasing the standard deviation (SD = 0.21 for control and SD = 0.58 for jittered; Figure 1D). Thus, we inherently changed the temporal regularities of natural speech between words and syllables, 50,51 but we kept intelligibility intact. 44 Previously, this manipulation has been reported (using electroencephalogram [EEG]) to disrupt speech-to-brain entrainment in the delta range (0.5-2 Hz 44), which we further aimed to disentangle from contextually driven encoding at the phrasal scale.
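The pause manipulation can be illustrated with a small sketch (our own reconstruction, not the authors' stimulus code): silent gaps longer than 50 ms are detected from the amplitude envelope and their durations are randomly stretched or compressed, leaving the speech segments untouched. The silence threshold and the maximum scaling factor are assumptions for illustration.

```python
import numpy as np

def find_pauses(envelope, fs, min_dur=0.05, thresh=0.02):
    """Return (start, stop) sample indices of silences longer than min_dur."""
    silent = envelope < thresh * envelope.max()
    edges = np.diff(silent.astype(int))
    starts = np.where(edges == 1)[0] + 1
    stops = np.where(edges == -1)[0] + 1
    if silent[0]:
        starts = np.r_[0, starts]
    if silent[-1]:
        stops = np.r_[stops, len(silent)]
    keep = (stops - starts) / fs >= min_dur
    return list(zip(starts[keep], stops[keep]))

def jitter_pauses(audio, fs, pauses, rng, max_scale=2.0):
    """Randomly shrink or prolong each pause; speech segments are kept as-is."""
    pieces, cursor = [], 0
    for start, stop in pauses:
        pieces.append(audio[cursor:start])               # speech before pause
        scale = rng.uniform(1.0 / max_scale, max_scale)  # random new length
        new_len = int((stop - start) * scale)
        pieces.append(np.zeros(new_len))                 # silence of new length
        cursor = stop
    pieces.append(audio[cursor:])
    return np.concatenate(pieces)

# Usage sketch with a synthetic signal containing one 200-ms pause
fs = 16000
rng = np.random.default_rng(0)
audio = rng.standard_normal(fs * 5) * 0.1
audio[fs:int(1.2 * fs)] = 0.0
jittered = jitter_pauses(audio, fs, find_pauses(np.abs(audio), fs), rng)
```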
We first computed the overall amplitude modulation of the speech signal (envelope) along with its derivative (hereafter we will refer to it as acoustic edges) and estimated source activity from 360 brain areas, for both conditions (Figure 1E, upper part). We represented the temporal dynamics of each brain area with the first three principal components of the concatenated vertex-level activity within a parcel, as we have observed that this accounts for over 90% of the total variance in auditory areas. 53 Then, we quantified multivariate speech-brain alignment 53,54 via frequency-resolved non-linear statistical associations (mutual information [MI]; see STAR Methods).
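A simplified sketch of the coupling measure is given below. It is not the authors' pipeline: it estimates mutual information between the band-limited speech envelope and a single cortical component with a histogram estimator on the analytic phase, whereas the study used a multivariate estimator over three principal components and a range of time lags (see STAR Methods). Filter settings, bin count, and the 100-ms lag are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo, hi, order=3):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def histogram_mi(x, y, bins=8):
    """Mutual information (bits) between two 1-D signals via a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz]))

def delta_coupling(envelope, meg, fs, band=(0.5, 1.5), lag_s=0.1):
    """MI between speech-envelope phase and lagged MEG phase in one band."""
    lag = int(lag_s * fs)
    env_phase = np.angle(hilbert(bandpass(envelope, fs, *band)))
    meg_phase = np.angle(hilbert(bandpass(meg, fs, *band)))
    # Shift the brain signal so that it follows the stimulus by `lag` samples
    return histogram_mi(env_phase[:-lag], meg_phase[lag:])

# Usage sketch with surrogate data: a ~1 Hz envelope tracked with a 100-ms delay
fs = 200
t = np.arange(0, 120, 1 / fs)
envelope = 1 + np.sin(2 * np.pi * 1.0 * t)
meg = np.roll(envelope, int(0.1 * fs)) + np.random.randn(t.size)
print(delta_coupling(envelope, meg, fs))
```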
The speech-brain coupling analysis resulted in MI values for each area, frequency (from 0.5 to 40 Hz; logarithmically spaced), and time-lag (from −300 to 300 ms; see Figure 2A upper left for representative spectra). As expected, a significant difference between the two conditions (control vs. jittered) was found in the delta frequency range (0.5-1.5 Hz; Figure 2A upper right and bottom), which was located in bilateral auditory cortices (group statistics; p < 0.05; cluster-corrected; Figure 2B; Table S1). Thus, we find that disrupting prosodic punctuation of natural speech compromises bilateral speech-brain coupling in the delta frequency range.
Next, we investigated differences in the temporal dynamics of bilateral auditory cortices (left and right A5) time-locked to speech onsets that follow a naturally occurring (control condition) or manipulated (jittered condition) pause. Note that with speech onsets we refer to energy resets after silent gaps in the speech stimuli. First, we focused on phase alignment to the stimulus by means of intertrial coherence (ITC 55) for the time range from −1.5 to 1.5 s relative to speech onset. We identified trials per condition and computed ITC for both conditions (n = 506 for control; n = 329 for jittered, pseudorandomized across subjects) within bilateral A5 and statistically compared them by means of non-parametric permutation tests (see STAR Methods). We report a left-lateralized increase in gamma ITC (25-45 Hz, p < 0.05) during presentation of temporally unpredictable speech onsets, which was time-locked to speech onsets (Figures 2C and 2D). Strikingly, the observed increase in gamma ITC in the jittered condition was significantly correlated with the decreased delta entrainment across participants (r(29) = 0.47; p = 0.008; Figure 2E), suggesting a functional relationship between these effects; when prosody-related delta speech-brain coupling drops, gamma coherence increases. Interestingly, we did not find any significant difference between the frequency-specific power (Figure 2F) of the epochs centered on speech onsets or in the evoked responses (p > 0.05; Figure 2G).
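For reference, intertrial coherence is the magnitude of the across-trial average of unit-length complex phase vectors at each time-frequency point. The sketch below (our own illustration, not the study code) computes it from a complex Morlet decomposition of a trials × time array; the wavelet width, frequency grid, and surrogate data are assumptions.

```python
import numpy as np
from scipy.signal import convolve

def morlet_wavelet(f, fs, n_cycles=5):
    """Complex Morlet wavelet centered at frequency f (Hz)."""
    sigma_t = n_cycles / (2 * np.pi * f)            # temporal std in seconds
    t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
    return np.exp(-t ** 2 / (2 * sigma_t ** 2)) * np.exp(2j * np.pi * f * t)

def itc(trials, fs, freqs, n_cycles=5):
    """Intertrial coherence |mean over trials of e^{i*phase}| per (freq, time)."""
    n_trials, n_times = trials.shape
    out = np.empty((len(freqs), n_times))
    for fi, f in enumerate(freqs):
        wav = morlet_wavelet(f, fs, n_cycles)
        # Complex time-frequency coefficients for every trial at this frequency
        coefs = np.array([convolve(tr, wav, mode="same") for tr in trials])
        phase_vectors = coefs / np.abs(coefs)        # unit-length phase vectors
        out[fi] = np.abs(phase_vectors.mean(axis=0))
    return out

# Usage sketch: 50 surrogate trials with a phase-locked 1-Hz component
fs, n_trials = 200, 50
t = np.arange(-1.5, 1.5, 1 / fs)
trials = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.random.randn(n_trials, t.size)
itc_map = itc(trials, fs, freqs=np.array([0.5, 1.0, 2.0, 4.0, 8.0]))
```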
Importantly, we have parametrically altered the sensory-driven distributional properties of incoming speech. Further, we aimed to evaluate whether altering the natural statistics of speech affects the subsequent parsing at the timescale of multi-word chunks. All in all, our findings suggest that alterations in the temporal structure of speech disrupt speech-brain coupling in the delta frequency range, which is accompanied by an increase in gamma coherence during sampling of speech onsets.
Multi-word chunk onsets elicit delta alignment in bilateral auditory areas
We then asked whether delta alignment in auditory cortices is evident at the boundaries of multi-word chunks.In usage-based syntactic processing models from psycholinguistics and cognitive psychology, these abstract units are thought of as sets of words that depend on one another (e.g., a determiner, adjective, and noun).While memory limits constrain chunk duration and thus necessitate occasional segmentation, each chunk-size filling of the buffer is coherent through the inter-word dependencies-thus allowing for the integration of word meanings within the chunk, and thus comprehension. 48Previously, invasive work has shown that a spatially distinct area within superior temporal gyrus is exclusively sensitive to speech onsets 56 while non-invasive recordings demonstrated that delta speech-brain coupling is specific-and temporally restricted-to speech onsets. 26However, it is not yet clear whether contextually driven onsets (such as the boundaries of multi-word chunks) are sufficient to align activity on longer timescales during listening.To address this, we leveraged a morphosyntactic and language-agnostic algorithm 45 to annotate phrases as multi-word chunks in a natural speech paradigm (Figure 1A; each chunk is depicted with a different color).6][47] When conflicting chunks were identified in the process, optimal ones were selected based on information theory and part-of-speech tags (see STAR Methods).In total, identified chunks were n = 868 (447 for control and 421 for jittered, pseudorandomized across participants) with a mean length of $1 s (mean = 1.21 s; SD = 0.65; Figure 1C).First, we marked segments of trials in the speech (A) Depiction of the acoustic signal along with the respective phrases (chunks).Text transcriptions were inserted in a language-agnostic morphosyntactic algorithm, which automatically identifies chunks from universal dependencies (chunks are depicted with different colors on top of original speech signal 45 ; see STAR Methods for details).We identified segments of silence exceeding a duration of 50 ms (red; top), and we randomly increased or decreased their respective length (red; bottom), preserving the mean of pause duration (see Figure 1C) but increasing the jitter (i.e., the standard deviation) by 90%, making the temporal structure of speech stream unpredictable (jittered condition).Note that English translation is shown for illustrative purposes only.(B) Distribution of chunk duration concatenated for both conditions.(C) Probability density functions representing pause length distribution for the two conditions (top).The median was increased by 60% and the standard deviation was increased by 55% after the manipulation (median = 0.101; SD = 0.28 for control, median = 0.17; SD = 0.51 for jittered).(D) The overall pipeline of the study: acoustic and linguistic features were extracted from the speech waveform (envelope, acoustic edges, word onsets, and chunk onsets) along with source time series from the HCP-MMP1 atlas (n = 360; 3 principal components per parcel 52 ).We applied two methods of speechtracking analysis: first, we associated the acoustic features and the source time series with a multivariate frequency-resolved mutual information analysis for both conditions (see Figure 2), and second, we leveraged an encoding's model approach to predict source time series from linguistic features, while accounting for the acoustics (see Figure 4).See also Figure S7.
envelope, which were centered on speech onsets and chunk onsets (Figure 3A, left).For the latter, we wanted to isolate chunk onsets from prosodic boundaries, so we only considered chunk onsets that were not discernible from a preceding pause (pauses exceeding duration of 50 ms; n = 112 for control and n = 161 for jittered, counterbalanced across participants; Figure 3B).This was done to ensure that beginnings of chunks occur without acoustic onsets.It is evident that whereas speech onsets with a pause have a distinct peak in the averaged speech envelope, this is not the case for the onsets without a pause (Figure 3B, plots on top).For ensuring that no other acoustical events are marking chunk beginnings without pauses, we also computed the log-mel spectrogram and the respecting multivariate acoustic edges of the speech signal (see STAR Methods).We collaborated on our initial observation that speech trials of chunk onsets do not differ from randomly drawn trials (Figure S4).
We then extracted bilateral auditory activity (from left and right A5) during the presentation of these segments, using the first three principal components. We chose left and right A5 as it was the parcel that bilaterally showed the highest difference between the experimental conditions. From the continuous source-estimated MEG data, we extracted frequency-resolved trials using continuous Morlet wavelet decomposition (Figure 3A; see STAR Methods for details). We computed ITC as a measure of phase alignment within single trials of bilateral auditory activity, and we statistically compared it to null trials (i.e., randomized triggers).
Figure 2 (legend). (A) Grand average of mutual information (MI) spectra for the control (gray) and jittered (red) conditions across significant parcels after a non-parametric cluster permutation test (control vs. jittered; upper left). Violin plots depicting the group difference in delta speech-brain coupling (0.5-1.5 Hz; upper right). Individual t value spectra from the control vs. jittered comparison of MI spectra (cluster-level-corrected; bottom). (B) Cortical map of t values summed across significant frequencies in the delta range (0.5-1.5 Hz). (C) t value maps from the non-parametric cluster permutation test (control vs. jittered) of ITC maps for left and right A5. A significant cluster was found in the gamma frequency range (25-45 Hz) centered at speech onsets (group statistics; p < 0.05; cluster-corrected). (D) Boxplots depicting the group difference in gamma ITC between control and jittered conditions. (E) Delta speech-brain coupling decrease and gamma coherence increase are correlated (r(29) = 0.47; p = 0.008). (F) t value maps from the non-parametric cluster permutation test (control vs. jittered) of power maps for left and right A5. No significant cluster was found (group statistics; p > 0.05; cluster-corrected). (G) Evoked responses for left and right A5 during speech onsets for control (gray) and jittered (red) conditions. No significant difference was observed between the two conditions (non-parametric cluster permutation test; p > 0.05). See also Table S1.
As expected, we found strong phase consistency in the delta band time-locked to chunk onsets after a pause, extending to the theta frequency range for both control and jittered conditions (group statistics; p < 0.05; cluster-corrected; Figure 3C). Moreover, we also found delta alignment for chunk onsets not distinguished by a preceding pause in the first principal component (group statistics; p < 0.05; cluster-corrected; Figure 3D). Similarly, delta alignment is also evident in the second and third principal components (Figure S1A). We further proceeded with investigating the event-related responses at chunk onsets (Figure S5A). We tested for a significant amplitude modulation by means of cluster-permutation testing with randomly distributed onsets. In contrast with the ITC analysis, there is no significant event-related response in the early time window (from 0 to 500 ms). Instead, we find a significant negative modulation for the left (control and jittered conditions) and the right (control condition) A5 (cluster statistics; p < 0.05; Figure S5B). The timing of the response does not match the significant ITC results; thus, we consider them as two separate processes. Thus, we report that delta alignment is evident at acoustically driven speech onsets and, independently, at contextually driven chunk onsets (not marked by a prosodic boundary).
Contextually driven encoding of multi-word chunks in bilateral auditory and frontal areas
Next, we wanted to assess whether including information about chunks in encoding models improves the predictive power of neural responses. This would serve as further evidence for contextually dependent processing of speech, independent from distributional properties of acoustics. To this end, we used a time-resolved temporal response function (TRF) approach 57,58 to model neurophysiological fluctuations of the source-estimated signal (n = 360 parcels; separately for the first three principal components) from a set of regressors. Specifically, we estimated regression weights between neurophysiological responses and the speech envelope, acoustic edges, and word and chunk onsets for different time-lags (−100 ms to 1 s; in steps of 10 ms). This analysis first aimed to pinpoint the predictive power of chunks above and beyond word-level processing and acoustics. To this end, source-reconstructed brain activity was modeled with ridge regression in a train dataset from a set of acoustic (speech envelope and acoustic edges) and linguistic features (discrete word and chunk onsets).
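The encoding model can be sketched as lagged ridge regression: each regressor is expanded into time-lagged copies, a ridge solution maps the design matrix onto the neural time series, and accuracy is the Pearson correlation between predicted and held-out data. This is a schematic reconstruction with an assumed sampling rate, regularization strength, and a simple train/test split, not the authors' implementation.

```python
import numpy as np

def lagged_design(features, lags):
    """Stack time-lagged copies of each feature column (zero-padded)."""
    n_times, n_feat = features.shape
    X = np.zeros((n_times, n_feat * len(lags)))
    for li, lag in enumerate(lags):
        shifted = np.roll(features, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0
        elif lag < 0:
            shifted[lag:] = 0
        X[:, li * n_feat:(li + 1) * n_feat] = shifted
    return X

def fit_trf(X, y, alpha=1.0):
    """Ridge regression weights: (X'X + alpha*I)^-1 X'y."""
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ y)

fs = 100
lags = np.arange(int(-0.1 * fs), int(1.0 * fs) + 1)   # -100 ms to 1 s
n_times = 60 * fs
rng = np.random.default_rng(1)
# Columns stand in for envelope, acoustic edges, word onsets, chunk onsets
features = rng.random((n_times, 4))
y = rng.standard_normal(n_times)                       # one cortical component

half = n_times // 2                                    # simple train/test split
X = lagged_design(features, lags)
w = fit_trf(X[:half], y[:half], alpha=10.0)
pred = X[half:] @ w
accuracy = np.corrcoef(pred, y[half:])[0, 1]           # encoding accuracy
```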
In Figure 4A, we plot the estimated regression weights within left and right A5 for the first principal component. We find a typical temporal response profile for the speech envelope and the acoustic edges (bilateral peaks at 60/200 ms and a trough at 130 ms), matching a damped wave in the theta frequency range and thus reflecting the rhythmicity of speech. 59 Note that this profile is also captured by the second and third principal components (Figure S2A). In contrast, the temporal response profiles of word and chunk onsets depict a response in a slower frequency range. Specifically, word onsets show a peak at 90 ms and a trough at 220 ms, which are observable only in the second principal component (Figure S2A). The regression weights for the chunk onsets show a partly similar but mostly distinct profile relative to word onsets across the three principal components. We see distinct peaks (at 140 and 290 ms) for the left and at 120 ms for the right, with bilateral troughs around 520 ms. This difference indicates that chunk onsets might have predictive power above and beyond word-level information.
Figure 3 (legend, panels C and D). (C) t values of ITC maps between trials of onsets with a pause vs. null trials, after non-parametric cluster permutation test for control (upper) and jittered (bottom) conditions. Significant clusters (group statistics; p < 0.05) are indicated with higher opacity (red). (D) t values of ITC maps between trials of onsets without a pause vs. null trials, after non-parametric cluster permutation test for control (upper) and jittered (bottom) conditions. Significant clusters (group statistics; p < 0.05) are indicated with higher opacity (red). See also Figures S1 and S4-S6.
To test this hypothesis, we compared the cross-validated performance between predicted and empirical time series by means of Pearson's correlation using two models: one model encompassing all predictors (envelope, acoustic edges, word onsets, and chunk onsets; hereafter the full model) and another model including the same predictors, except that the chunk onsets were jittered randomly (null onsets; hereafter the no-chunks model; Figure 4B). We computed the correlation between the predicted and empirical time series for each parcel and each principal component and averaged across components. Then, we compared the performance of the two models by means of cluster-based permutation testing. We found a significant difference between the two models for both conditions, located in bilateral auditory areas and expanding to frontal areas (group statistics; p < 0.05; cluster-corrected; Figure 4C), in which the model with the chunk onsets performed significantly better than the model without chunk onsets (Figure 4D). As expected from the lack of difference in regression weights between conditions, there was no significant interaction between the reconstruction accuracy of the two models (with and without chunk onsets) and condition (control and jittered; group statistics; p > 0.05; Figure S6).
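The model comparison can be illustrated at a single-parcel level with a paired sign-flip permutation test on the per-subject accuracy differences; this is a simplified stand-in for the cluster-based permutation statistics used in the study, and the subject values below are simulated.

```python
import numpy as np

def signflip_permutation(diff, n_perm=5000, seed=0):
    """Two-tailed p value for the mean of paired differences under sign flips."""
    rng = np.random.default_rng(seed)
    observed = diff.mean()
    flips = rng.choice([-1.0, 1.0], size=(n_perm, diff.size))
    null = (flips * diff).mean(axis=1)
    return observed, (np.abs(null) >= np.abs(observed)).mean()

# Simulated accuracies (Pearson r) for 30 subjects in one parcel
rng = np.random.default_rng(2)
acc_full = 0.10 + 0.02 * rng.standard_normal(30)        # model with chunk onsets
acc_nochunks = acc_full - 0.01 - 0.005 * rng.standard_normal(30)
obs, p = signflip_permutation(acc_full - acc_nochunks)
print(f"mean accuracy gain = {obs:.4f}, permutation p = {p:.4f}")
```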
While pauses constitute the main prosodic boundary between chunks of speech, multi-word chunks could potentially be cued by other acoustic events, such as pitch contour. To account for this confound, we included a control analysis in which the pitch information (the fundamental frequency [F0] of the speech signal) was added to the acoustic variables of the encoding models. We note that adding the pitch contour as an additional acoustic variable did not affect the central finding (Figure 3).
Next, we asked whether the observed increase in performance can be predicted by the magnitude of delta alignment at chunk onsets without pauses (see Figure 3D). For this, we employed a linear mixed effect model (LMEM) approach in which we expressed the difference in accuracy between the two models (with and without chunk onsets) as a linear combination of the delta phase concentration (as depicted in ITC values) of left and right A5 for the control and jittered conditions, along with their respective interactions (see Figure 4E for a graphical illustration of the model). This would further serve as evidence that the contextual processing of chunks can be traced down to the phase alignment of delta activity. We found a significant main effect of ITC (t(112) = 2.43, p = 0.016), an ITC × condition interaction (t(112) = −2.51, p = 0.013), and an ITC × hemisphere interaction (t(112) = −0.26, p = 0.03). To further test the reliability of the model, we additionally randomly permuted participants k = 5,000 times, recomputed the LMEM, and constructed a null distribution of LMEM weights. The empirical LMEM weight scored above the 99th percentile for the ITC (Figure 4F, top) and below the 1st percentile for the ITC × condition interaction (Figure 4F, bottom). This further corroborates the overall influence of delta alignment on the processing of multi-word chunks.
Figure 4 (legend, panels C-F). (C) Pearson's correlations of predicted responses per parcel (n = 360) and principal component (n = 3) were averaged across brain areas and subjected to a non-parametric permutation cluster test comparing the performance of the two models. Cortical maps of t values of the cluster t test for control and jittered conditions (cluster statistics; p < 0.05). (D) Violin plots depict the difference in accuracy (as expressed by Pearson's correlation) between the models with and without chunk onsets, extracted and summed across significant parcels. (E) Schematic illustration of the linear mixed effect model (LMEM) employed: the increase in accuracy between the two models (with and without) was expressed as a linear combination, along with their interactions, of the ITC increase at chunk onsets without pauses, from left and right A5, for control and jittered conditions. (F) Histograms show the empirical LMEM weight for the Δ(ITC) (top) and the Δ(ITC) × condition interaction (bottom) against the null distribution computed from 5,000 random iterations of subjects' accuracy vectors. See also Figures S2 and S3.
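As a schematic of the mixed-model step, the sketch below fits a linear mixed-effects model with statsmodels, predicting the per-subject gain in encoding accuracy from the ITC increase, condition, and hemisphere, with a random intercept per subject. The data frame, column names, and simulated values are hypothetical stand-ins for the actual per-subject estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
for sub in range(30):
    for cond in ("control", "jittered"):
        for hemi in ("left", "right"):
            d_itc = rng.normal(0.05, 0.02)              # ITC increase at chunk onsets
            d_acc = 0.002 + 0.05 * d_itc + rng.normal(0, 0.005)  # accuracy gain
            rows.append(dict(subject=sub, condition=cond,
                             hemisphere=hemi, d_itc=d_itc, d_acc=d_acc))
df = pd.DataFrame(rows)

# Accuracy gain ~ ITC gain x condition + ITC gain x hemisphere,
# with a random intercept per subject
model = smf.mixedlm("d_acc ~ d_itc * condition + d_itc * hemisphere",
                    data=df, groups=df["subject"])
result = model.fit()
print(result.summary())
```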
To summarize, we report an increase in the accuracy of encoding models when phrasal onset information is employed. This increase is located in bilateral auditory and frontal areas, suggesting contextual combinatorial processing of chunks. Importantly, we find that for the bilateral auditory area A5, this increase can be traced down to the observed delta alignment during chunk (or multi-word) onsets.
DISCUSSION
How context-invoked processes interact with bottom-up sensory evidence during speech perception remains a fundamental question for which data interpretation has been highly contentious. The main focus of this study was to disentangle contextual processing from prosodic features during listening to a story. We find compelling evidence that these processes coexist independently and can both be traced down interactively in the phase of slow rhythmic cortical activity in the delta frequency range. Our results complement previous findings on the participation of entrainment and evoked responses in speech processing 44,60 and have important implications for the interaction of online sensory evidence and long-term contextual knowledge during spoken language processing.
We confirmed that temporally manipulating prosodic cues compromises cortical alignment to speech, specifically in bilateral auditory areas in the delta frequency band (Figures 2A and 2B). This replicates previous EEG findings, 44 aligns with a proposed role of delta oscillations in temporal expectations, 61,62 and is consistent with previous speech-brain coupling research proposing delta activity specific to speech onsets. 26 In this line, we foster the notion that delta speech-brain coupling reflects top-down predictive processes 18 evoked by acoustical regularities, 63,64 in particular the temporal structure of pauses. 51 While processing temporally unpredictable speech onsets, we moreover report an increase in gamma synchrony, as expressed by higher ITC (Figures 2C and 2D). Auditory-evoked gamma-band responses (GBRs) have been previously reported and dissociated from the slow-frequency evoked component. 65 Broadly, gamma oscillations have been linked to sensory prediction errors realized, at the microcircuit level, by the collective firing of superficial pyramidal neurons, 66,67 but this remains controversial. 68 Within this framework, we speculate that these might serve as a speech-specific 69 feedforward index of temporal prediction violation 70 and consequently rhythmic gain modulation 71 across the auditory pathway hierarchy, as previously shown in the visual domain. 72,73 Alternatively, this might reflect transient aperiodic events of increased spiking activity. 74,75 Interestingly, we find that these two processes are anti-correlated: the larger the increase in gamma synchrony, the greater the decrease in delta speech-brain coupling observed across participants (Figure 2E). Although this correlation does not serve as causal evidence, it suggests that delta speech tracking and gamma synchrony at speech onsets are tightly connected, such that when delta speech tracking is thwarted, gamma synchrony counteracts during the processing of speech onsets.
As expected, we found strong delta alignment during speech onsets marked with a prosodic boundary 25 in auditory areas, irrespective of the temporal predictability in the speech signal (Figure 3C). This points in the direction that perception of prosodic boundaries constitutes a sensory-driven bottom-up process and leads to online segmentation of speech 76,77 through acoustically driven delta-band activity. 23 This alignment can reflect entrainment of a limited number of cycles for lower frequencies, previously described in physical 78 and biological systems, 79,80 or temporally superpositioned evoked responses, as has been previously reported in the visual domain 81 and recently proposed for speech perception. 60,82 Prosodic boundaries are also evident as a closure positive shift in the event-related component, 83 but whether these two phenomena are inter-related remains an open question. Nevertheless, our results support the notion that acoustically related processing of prosodic phrases occurs in a bottom-up fashion through a delta-band frequency channel. This points to a more complex pattern than previously hypothesized for top-down and bottom-up processing via low- and high-frequency channels, respectively, 73,84 as previously reported in the visual 74 and auditory domains. 85 Delta alignment was also evident in bilateral auditory areas at chunk onsets not marked by a prosodic boundary (Figure 3D). Naturally, no difference was observed between the two conditions as the chosen speech trials had not been manipulated. We consider this intrinsic auditory delta alignment as a manifestation of top-down or contextually driven cortical activity during listening to naturalistic speech. Previously, using an encoding model approach, it was shown that sentence context, extending beyond entropy and surprisal and sensory distributional facets, modulates word processing in the amplitude of the delta band. 86 Here, we show that this modulation can be established through the processing of computationally formalized speech chunks, derived from the optimal set of sub-trees within a dependency tree. Moreover, increased speech-brain coupling, as expressed by higher MI values, has been reported between the phase of the delta band and pulse-train-coded phrasal chunks when compared with jabberwocky and prosody-controlled speech chunks. 43,87 We extend these findings in naturalistic listening by leveraging a combination of the encoding model approach with linear mixed modeling: we show that the increase in model accuracy from chunk onset information can be predicted by the instantaneous delta activity in bilateral auditory areas. Again, this does not constitute causal proof of interaction, but it strongly suggests a functional connection between chunk processing and the phase of delta activity in bilateral auditory areas.
Chunk onset information drastically increased the accuracy of predictive encoding models in temporal and frontal areas for both the control and jittered experimental conditions. Previously, similar encoding models including only acoustic information of the signal (envelope and derivative) were superior (in terms of accuracy) to ones including more sophisticated linguistic-related information (articulatory features 88; but see Tezcan et al. 89).
Here, the profound increase in accuracy with chunk onsets might reflect long-term memory processes, which align (in a feedback manner) to contextual dependencies acquired through development, 90 facilitating predictions 91 accessed through long temporal integration windows. 92 Consequently, our findings echo the analysis-by-synthesis (A×S) framework of speech processing, 93,94 within which speech input is processed by concurrently matching internally generated brain states (synthesis; feedback) to the incoming acoustical signal (analysis; feedforward). In this line, lexical representations are subject to feedback contextual inferences prior to sensory input. 95 We propose that at the phrasal timescale, this might be operationalized at the neural level by delta-band alignment. As proof of principle, delta-band entrainment (1.6-1.8 Hz) was evident for several cycles beyond the sensory stimulation, 63 while persistent entrainment beyond stimulation was found to affect perceptual inference during listening. 96 We note that the algorithm used for parsing was based on dependency trees and consequently remained agnostic to hierarchical structures of language that can be captured with constituency-based trees. In this line, each word (or node) can be further annotated by a depth degree that has been shown to affect neural responses. 97,98 Here, we find a robust effect of neural responses to chunk onsets without accounting for such a hierarchy, which we assume to cancel out in the stimuli as some chunks would be higher and others lower in the hierarchy. For future studies, it will be interesting to investigate whether neural responses to chunk onsets pertain to such a hierarchical structure.
Furthermore, we found no difference in the increase of encoding accuracies between the two experimental conditions (control and jittered). This might reflect the fact that for both conditions, the speech envelope and acoustic edges were controlled in the analysis. We note that the reduction of speech-brain coupling in the delta frequency band was observed in the phase (not in the power) of bilateral auditory activity; encoding models operating on the power and the phase of the broadband signal might remain blind to this effect. Future studies could implement such an encoding approach, utilizing the phase of cortical activity as a dependent variable. Importantly, the interaction between condition and ITC in the delta frequency band (Figure 4F, bottom) indicates that delta alignment at chunk onsets serves as a better predictor of the model leveraging chunk onset information in the jittered compared with the control condition. While surprising, as for both experimental conditions chunk onsets without pauses were considered (and thus there was no prior manipulation), this probably stems from the slightly higher (but statistically insignificant) ITC values in the jittered condition (Figures 3C and 3D).
Behind the formalization of chunking applied here lies the assumption that the mesoscopic multi-word units of sentence processing are clusters of local syntactic dependencies. 49,99 This is grounded theoretically in dependency grammar, 47 current psycholinguistic models of sentence processing, 100,101 as well as in the cognitive psychology of working memory constraints on sentence processing. 48 Our algorithm generates chunks by considering dependency annotations and part-of-speech labels that follow the universal dependencies (UD) standard. 102 UD follows dependency grammar in representing the syntactic structure of any sentence as a set of binary relations that must be established in order for a sentence to be understood; dependency grammar is applicable cross-linguistically, has a long history in theoretical linguistics, and is standard in the field of natural language processing. 47 The notion that humans form dependencies akin to those specified by UD is pursued by the two leading current sentence processing models, the dependency locality theory (DLT 100) and cue-based retrieval, 101,103 which are supported by dozens of behavioral, 104 electrophysiological, 105 and imaging studies. 106 The definition of chunks as local dependency clusters also takes into account the proposal that humans form dependencies within short segments of speech only, 48 that is, among the words currently held within the working memory buffer. Unfortunately, it is left open whether the current MEG results reflect cognitive operations at chunk onsets or offsets. Our primary reason for focusing on onsets was our aim to dissociate chunking from acoustic speech onsets. Yet, any chunk onset is an immediate successor to an offset, where the cognitive operation of chunking the dependency cluster currently held in working memory might be assumed to occur. 46 Our work focused on dissociating the neural delineation of abstract multi-word chunks from the auditory processing of acoustic chunk boundaries. To pursue this objective, we modeled multi-word chunks as clusters of local syntactic dependencies, given by UD annotations. 102 It is beyond the scope of the current study to address how our findings relate to interesting prior work that has linked electrophysiological activity in the delta band to a particular linguistic theory, hierarchical phrase-structure grammar. 107 The mapping from the incremental processing of syntactic dependencies and the chunking of multiple words in working memory, in accordance with the motivations from psycholinguistics and cognitive psychology cited above, 48,99,101 to the theoretical descriptions assumed by linguistic theories is an important objective for future research.
Conclusions
In summary, we show neural evidence for parallel processing of prosodic and contextual sampling during listening to a story. Contextually driven processing is underpinned by the alignment of slow cortical activity (in the delta frequency band), which matches the phrasal timescale, and it differs from the speech-brain coupling in this frequency range: delta speech-brain coupling seems to be under predictive top-down control, while intrinsic delta alignment acts in a bottom-up fashion during processing of speech onsets. All in all, we find a temporally and functionally dissociated role of intrinsic and bilateral auditory alignment of dynamics within the delta frequency band, one for contextually driven speech and another for prosodic sampling of speech.

In one version of the stimuli the natural story preceded the altered one, whereas in the other version the altered story preceded the natural one. Both versions included the same story (the first 5 chapters of the Little Prince); only the order of the experimental manipulation changed. Thus, in the first version the first 3 chapters were natural and the next 2 manipulated, whereas in the second version the first 3 chapters were manipulated and the next 2 natural. The version presented to the participants was pseudo-randomized across subjects. To assess whether participants paid attention to the story, they answered 15 multiple-choice questions by means of a button press (3 after each chapter; 2 response options each).
Identification of multi-word chunks
Multi-word chunks were extracted from the text using a morphosyntactic and language-agnostic evolutionary algorithm which identified chunks from universal dependency (UD) treebanks. 45 We applied the chunking algorithm from Lo et al. 46 The text was parsed automatically to obtain UD annotations by using the English Web Treebank model from UDPipe v2.6, which has a high performance with a labeled attachment score of 87.43% on the raw text of the corresponding test file. 111,112 We note that chunks were defined as words and bound morpheme sequences with all possible local dependency clusters. 45,47 Initially, chunks were considered as base-level subtrees. Then, unitary chunks were minimized with the following procedure: for a given sentence and its associated tree structure, potential multi-word chunks are extracted if, for the node n at position x with the corresponding head h at position k (where k can be either greater or less than x), the nodes between x and k-1 (if k>x) or those between x and k+1 (if k<x) have the same head h. This process results in overlapping chunks, where some words of a chunk might be dependents of another chunk. To select optimal chunks for a given tree, we computed normalized pointwise mutual information (NPMI 113 ) between the Universal part-of-speech (UPOS) tag of a node (t) and the tuple formed by the UPOS of the head of that node (ht) and the relation between the node and its head (rel), as described in (1) and (2). Then we averaged the NPMIs associated with the nodes within a candidate chunk (see (3)). Chunks with higher average NPMI were selected. Subsequently, to minimize unitary chunks (i.e., with only one node), we removed punctuation (nodes with the punctuation UPOS tag) and then attached the remaining floating unitary nodes to the chunks containing an element to which they are syntactically linked.
PMI(t, ht, rel) = log [ p(t, ht, rel) / ( p(t) p(ht, rel) ) ] (Equation 1)

NPMI(t, ht, rel) = PMI(t, ht, rel) / ( - log p(t, ht, rel) ) (Equation 2)

with the average NPMI for a candidate chunk defined as

NPMI(C) = (1/N) Σ_{d ∈ C} NPMI(t_d, ht_d, rel_d) (Equation 3)

where N is the total number of nodes in a phrase and d is a dependent in a given phrase C.
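As a hedged illustration of how such chunk scoring could be implemented, the following minimal Python sketch computes NPMI from corpus-derived probability estimates and ranks candidate chunks by their average NPMI; the count dictionaries and the shape of the candidate-chunk structure are assumptions chosen for illustration and do not reproduce the authors' implementation.

import math

def npmi(p_joint, p_t, p_ht_rel):
    """Normalized pointwise mutual information for a (UPOS, (head-UPOS, relation)) pair."""
    pmi = math.log(p_joint / (p_t * p_ht_rel))
    return pmi / (-math.log(p_joint))

def average_npmi(chunk, joint_counts, t_counts, ht_rel_counts, n_tokens):
    """Average NPMI over the nodes of a candidate chunk.
    `chunk` is a list of (upos, head_upos, relation) triples, one per node."""
    scores = []
    for t, ht, rel in chunk:
        p_joint = joint_counts[(t, ht, rel)] / n_tokens
        p_t = t_counts[t] / n_tokens
        p_ht_rel = ht_rel_counts[(ht, rel)] / n_tokens
        scores.append(npmi(p_joint, p_t, p_ht_rel))
    return sum(scores) / len(scores)

def select_chunks(candidates, joint_counts, t_counts, ht_rel_counts, n_tokens):
    """Rank overlapping candidate chunks by average NPMI (higher is better)."""
    return sorted(candidates,
                  key=lambda c: average_npmi(c, joint_counts, t_counts,
                                             ht_rel_counts, n_tokens),
                  reverse=True)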
Data acquisition
Evoked magnetic fields were recorded in a magnetically shielded room via a 275 channel whole-head MEG system (OMEGA, CTF Systems Inc, Port Coquitlam, Canada).Data were continuously recorded with a sampling rate of 1200 Hz.Subjects were seated upright with a fixed head position, comfortably stabilized inside the MEG dewar via pads.The stimulus was delivered using PsychToolBox 114 with two 60 cm long silicon tubes.Questions after each chapter were projected onto the back of a semi-transparent screen positioned approximately 90 cm in front of the subjects' nasion using an PROPixx Lite Projector (VPixx Technologies Inc., Canada) with a refresh rate of 60 Hz.
Data preprocessing
Data were preprocessed with the MNE-python toolbox. 115 Initially, we epoched the data according to the onset and offset of each chapter. Then, we filtered the data with a Hamming-windowed finite impulse response (FIR) zero-phase high-pass filter with a 1 Hz cut-off. We extracted 40 independent components for the 2nd-order gradiometer data using the fastica algorithm. We manually identified artifactual components belonging to eye movements and heart activity (mean number = 4.75, SD = 1.58) and we reconstructed the original recording with the artifactual components removed. We filtered above 1 Hz prior to the ICA to avoid biases towards lower frequencies, which tend to have greater power. 116 Finally, we filtered the reconstructed data with a Hamming-windowed FIR zero-phase filter with a 0.2 Hz cut-off and we downsampled the data to a sampling rate of 600 Hz.
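A minimal MNE-Python sketch of such a pipeline is shown below; the file name, the channel picks, and the excluded component indices are placeholders chosen for illustration and are not values from this study.

import mne
from mne.preprocessing import ICA

# Load the continuous CTF recording (file name is a placeholder).
raw = mne.io.read_raw_ctf("subject01_story.ds", preload=True)

# 1 Hz zero-phase Hamming-windowed FIR high-pass used only for the ICA fit.
raw_hp = raw.copy().filter(l_freq=1.0, h_freq=None,
                           fir_window="hamming", phase="zero")

# 40 independent components extracted with FastICA.
ica = ICA(n_components=40, method="fastica", random_state=0)
ica.fit(raw_hp, picks="meg")

# Ocular/cardiac components would be chosen by visual inspection.
ica.exclude = [0, 3, 7]  # placeholder indices
ica.apply(raw)

# Final 0.2 Hz filter and downsampling to 600 Hz.
raw.filter(l_freq=0.2, h_freq=None, fir_window="hamming", phase="zero")
raw.resample(600)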
Source localization

T1-weighted Magnetic Resonance Images were obtained from each participant in a 3-T scanner (Gyroscan Intera T30, Philips, Amsterdam, Netherlands). 400 contiguous T1-weighted slices of 0.5 mm thickness in the sagittal plane (TR = 7.33.64 ms, TE = 3.31 ms) were collected by a Turbo Field Echo acquisition protocol. The field of view was set to 300 × 300 mm with an in-plane matrix of 512 × 512, defining the native voxel size at 0.58 × 0.58 × 0.58 mm³. The intensity bias of the images was then regularized using SPM8.

... + (S_5j + b_5) × condition × hemisphere + (S_6j + b_6) × Δ(ITC) × condition × hemisphere + e_j (Equation 8)

For participant j, the accuracy difference between encoding models with and without chunk onsets was expressed as a combination of an intercept (b_0), the fixed effects of dependent factors (b_1 ... b_6), random slopes (S_1j ... S_6j) and an error term (e_j ~ N(0, σ²)). Random slopes were included in the model to account for intersubject variability, meaning that the fixed effects would not modulate the encoding model's accuracy identically across participants. Furthermore, significance of the empirical LMEM weights was further tested by computing the LMEM n = 5000 times after shuffling participants' model accuracies and comparing its rank with percentiles of the null distribution (1st for negative and 99th for positive values).
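For readers who want to reproduce this kind of mixed-effects analysis, a self-contained Python sketch using statsmodels is given below; the simulated data frame, the column names, and the reduced random-effects structure (a single random slope) are assumptions for illustration only, not the model fitted in this study.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate one accuracy-difference value per subject/condition/hemisphere cell.
rng = np.random.default_rng(0)
rows = []
for subj in range(24):
    for condition in ("control", "jittered"):
        for hemisphere in ("left", "right"):
            d_itc = rng.normal(0.05, 0.02)          # ITC increase at chunk onsets
            acc_diff = 0.01 + 0.1 * d_itc + rng.normal(0, 0.01)
            rows.append(dict(subject=subj, condition=condition,
                             hemisphere=hemisphere, d_itc=d_itc,
                             acc_diff=acc_diff))
df = pd.DataFrame(rows)

# Fixed effects and their interactions, with a by-subject random slope for d_itc.
model = smf.mixedlm("acc_diff ~ d_itc * condition * hemisphere",
                    data=df, groups=df["subject"], re_formula="~ d_itc")
result = model.fit()
print(result.summary())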
Statistical analysis
Significance in the group level was determined with cluster-based permutation tests 127 using ft_freqstatistics in Fieldtrip.For this, a series of 2-tailed t tests of individual data points (MI in each frequency, Pearson's r, ITC or amplitude maps) were conducted for each parcel between the two experimental conditions (control, jittered) and resulting t-values were thresholded at p=0.05.Then, spatially and/or spectrally adjacent significant data points were defined as clusters with an assigned cluster-level statistic constituting the sum of the t-values within each cluster.Further, we used Monte Carlo approximation to test each cluster for significance.For that, single subject data points between the two conditions were randomly interchanged and the series of t tests, clustering and estimation of cluster-level statistics were recomputed.After repeating this procedure 5000 times the original cluster-level statistics were compared with the histogram of the randomized null statistics.When initial clusters yielded a larger cluster-level statistic than the 95% of the randomized data they were considered as significant.
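A simplified, one-dimensional (spectral-adjacency only) sketch of such a cluster-based permutation test is shown below; clustering along a single axis and sign-flipping of paired differences are simplifying assumptions relative to the full spatio-spectral Fieldtrip procedure described above.

import numpy as np
from scipy import stats

def cluster_perm_test(cond_a, cond_b, n_perm=5000, alpha=0.05, seed=0):
    """Paired cluster-based permutation test along one dimension.
    cond_a, cond_b: arrays of shape (n_subjects, n_points)."""
    rng = np.random.default_rng(seed)
    n_subj, n_pts = cond_a.shape

    def cluster_stats(diff):
        t, _ = stats.ttest_1samp(diff, 0.0, axis=0)
        thresh = stats.t.ppf(1 - alpha / 2, n_subj - 1)
        sig = np.abs(t) > thresh
        clusters, start, csum = [], None, 0.0
        for i in range(n_pts):
            if sig[i]:
                if start is None:
                    start, csum = i, 0.0
                csum += t[i]
            elif start is not None:
                clusters.append((start, i, csum))
                start = None
        if start is not None:
            clusters.append((start, n_pts, csum))
        return clusters

    diff = cond_a - cond_b
    observed = cluster_stats(diff)
    null_max = np.empty(n_perm)
    for p in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n_subj, 1))  # randomly swap conditions
        perm_clusters = cluster_stats(diff * signs)
        null_max[p] = max((abs(c[2]) for c in perm_clusters), default=0.0)
    # Return (start, end, cluster statistic, Monte Carlo p-value) per observed cluster.
    return [(s, e, c, float(np.mean(null_max >= abs(c)))) for s, e, c in observed]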
Figure 1. Experimental design and analysis pipeline
Figure 2. Delta entrainment and gamma coherence are anti-correlated during listening
Figure 3. Delta intertrial coherence elicited by speech and chunk onsets (A) Schematic illustration of the analysis: source-derived trials from left and right A5 for onsets with and without a pause. Length of the trials was 3 s. (B) Speech envelope trials for chunk onsets without a pause (left) and speech onsets sorted by the preceding pause length (right). (C) t values of ITC maps between trials of onsets with a pause vs. null trials, after non-parametric cluster permutation test for control (upper) and jittered (bottom) conditions. Significant clusters (group statistics; p < .05) are indicated with higher opacity (red). (D) t values of ITC maps between trials of onsets without a pause vs. null trials, after non-parametric cluster permutation test for control (upper) and jittered (bottom) conditions. Significant clusters (group statistics; p < .05) are indicated with higher opacity (red). See also Figures S1 and S4-S6.
Figure 4. Linguistic tracking of chunks in bilateral auditory and frontal areas (A) Temporal response functions across feature spaces (envelope, acoustic edges, word onsets, and chunk onsets) for control (gray) and jittered (red) conditions. (B) Feature spaces used for the two models: in one model the envelope, acoustic edges, word onsets, and chunk onsets were used (with) and in the other model the chunk onsets were replaced by null pulse responses (without). (C) Pearson's correlations of predicted responses per parcel (n = 360) and principal components (n = 3) were averaged across brain areas and subjected to a nonparametric permutation cluster test on the performance of the two models (with vs. without pauses). Cortical maps of t values of the cluster t test for control and jittered conditions (cluster statistics; p < .05). (D) Violin plots depict the difference in accuracy (as expressed by Pearson's correlation) between the models with and without chunk onsets, extracted and summed across significant parcels. (E) Schematic illustration of the linear mixed effect model (LMEM) employed: increase of accuracy between the two models (with and without) was expressed as a linear combination, along with their interactions, of ITC increase in chunk onsets without pauses, from left and right A5, for control and jittered conditions. (F) Histograms show the empirical LMEM weight for the Δ(ITC) (top) and Δ(ITC) × condition interaction against the null distribution computed from 5,000 random iterations of subjects' accuracy vectors. See also Figures S2 and S3. | 2024-07-24T13:30:32.171Z | 2024-07-01T00:00:00.000 | {
"year": 2024,
"sha1": "5ad7e37fe2d3e5f216409d13a40ee4ec2d206329",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1016/j.cub.2024.06.072",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "106d357e4567013248f1066208efd7056b20cb37",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
12831809 | pes2o/s2orc | v3-fos-license | Segmentation of Myocardial Boundaries in Tagged Cardiac MRI Using Active Contours: A Gradient-Based Approach Integrating Texture Analysis
The noninvasive assessment of cardiac function is of first importance for the diagnosis of cardiovascular diseases. Among all medical scanners, only a few enable radiologists to evaluate local cardiac motion. Tagged cardiac MRI is one of them. This protocol generates, on Short-Axis (SA) sequences, a dark grid which is deformed in accordance with the cardiac motion. Tracking the grid allows specialists to locally estimate cardiac geometrical parameters within the myocardium. The work described in this paper aims to automate the detection of myocardial contours in order to optimize the detection and tracking of the grid of tags within the myocardium. The method we have developed for endocardial and epicardial contour detection is based on the use of texture analysis and active contour models. Texture analysis allows us to define energy maps more efficient than those usually used in active contour methods, where the attractor is often based on the gradient; such gradient attractors were useless in our case of study because the quality of tagged cardiac MRI is very poor.
Introduction
Noninvasive assessment of the cardiac function is of major interest for the diagnosis and the treatment of cardiovascular pathologies. Whereas classical cardiac MRI only enables radiologists to measure anatomical and functional parameters of the myocardium (mass, volume, etc.), tagged cardiac MRI makes it possible to evaluate local intramyocardial displacements. For instance, this type of information can lead to a precise characterization of myocardium viability after an infarction. Moreover, data concerning myocardium viability make it possible to decide on the therapeutic option (medical treatment, angioplasty, or coronary surgery) and to follow the improvement of ventricular function after reperfusion.
The Space Modulation of Magnetization (SPAMM) acquisition protocol [1] we used for the tagging of MRI data, displays a deformable 45 degrees oriented dark grid which describes the contraction of myocardium (Figure 1) on the images of temporal Short-Axis (SA) sequences. Thus, the temporal tracking of the grid can enable radiologists to quantify cardiac geometrical parameters within myocardium.
A common step of all these approaches is the segmentation of myocardial boundaries for each instant of Left Ventricular (LV) contraction (diastole) (see Figure 2 for a manual segmentation of these boundaries) since LV contraction represents 80% of the whole heart contraction function.
This segmentation step is of primary importance since detection and tracking of the grid are made on this particular area in order to locally quantify LV displacements. Among all the previously cited papers, the only study integrating automatic detection of endocardial and epicardial boundaries within the grid-tracking process was developed by Guttman [26] and carried out on radially-tagged acquisitions (Figure 3).
This method based on a prior erasure of tags using nonlinear filtering, turned out to be inappropriate to our images which are not radially tagged as one can notice on Figure 1. Moreover, this particular type of tagging is no more used in medical practice.
All other methods dealing with this segmentation problem involve manual detection of the myocardial boundaries [16,27,28], or a detection previously made on classical cardiac MRI sequences [2] or on filtered ones [3], and as such do not entirely address the problem of use in routine clinical practice.
In this article we present an alternative method for the automatic detection of myocardial boundaries on tagged cardiac MRI which integrates active contours and texture analysis. Our method enables an automatic detection of myocardial boundaries of LV on SA sequences and then an optimized tracking of the grid of tags within myocardium is possible.
Concerning the layout of this paper, the next section is dedicated to the presentation of the global segmentation method for the myocardial boundaries. Sections 3 and 4 deal with the computation of what we call energy maps using texture analysis. The following section presents visual results of segmentation obtained on different patients and a statistical validation of the developed method. The last section is dedicated to the discussion.
Active Contours and Context
Originally proposed in [29], active contours for segmentation have attracted extensive research in the past two decades. The basic idea of the active contour is to iteratively evolve an initial curve towards the boundaries of the target objects driven by the combination of internal forces determined by the geometry of the evolving curve and the external forces induced from the image.
Image segmentation methods using active contours are usually based on minimising functionals which are defined such that curves close to the target boundaries have small values. For instance, in [29], the authors originally proposed a functional of the form given in (1), where C(q) is a parameterized planar curve, u_0 the initial image data and α, β, λ are positive constants. The first two parameters α, β control the regularity of the curve (E_intern) and λ controls the attraction of the curve to the targeted boundaries (∇u_0, with ∇ the classical gradient operator) of the studied image u_0 (E_extern). To solve these functional minimisation problems, a corresponding partial differential equation is constructed as the Gateaux derivative gradient flow, resulting in a curve evolution.
To obtain interesting results with the minimisation of (1), the initialization of the curve has to be made close to the boundary of the structure to be segmented. This drawback is directly linked to the computation of the external energy induced from the image, which is based on a classical gradient operator. As a consequence, if the initialization of the curve is made too far from the targeted structure, other local minima of E_extern can corrupt the final segmentation result. However, in many applications of medical image segmentation, initialization of the curve has to be simple and fast and is, as a result, performed far from the target. For instance, initialization is often made on the boundary of the image, or near the center of gravity of a particular Region of Interest.
Taking this into consideration, we propose an evolution of the functional described by (1), given in (2). As one can notice, two terms are added to the classical functional. The last one, κ ∫₀¹ n(C(q)) dq, is a classical balloon energy formerly introduced in [30]. To explain its role, let us consider a circle as the initializing curve, whose evolution law is driven only by this energy with κ > 0. For each step of the evolution process, the circle has no other solution than to spread along its local normals (n); the diameter of the circle grows. It is an extra expanding term usually found necessary for quicker convergence. This energy has been shown to be useful for fast growing of the curve when initialization is made far from the targeted boundary.
The term given by λ ∫₀¹ |∇u_map(C(q))| dq is derived from the former one of (1) and represents the induced boundary energy (E_extern) adapted to our particular domain of application, tagged cardiac MRI. Indeed, the grid of tags does not allow us to obtain a good gradient attractor (Figure 4). As a consequence, the boundary-based energy is computed from a preprocessed version u_map of the original image u_0, which is described in the next section.
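To make the roles of the balloon and boundary terms concrete, the schematic numpy sketch below evolves a discrete circular contour under an outward balloon force plus an attraction toward high values of |∇u_map|; the discretisation, the smoothing used as a stand-in for the internal energy, and all parameter values are illustrative assumptions, not the implementation used in this work.

import numpy as np
from scipy.ndimage import map_coordinates, sobel, gaussian_filter

def evolve_contour(u_map, center, radius=5.0, n_pts=100,
                   kappa=0.5, lam=1.0, n_iter=200):
    """Evolve a circular contour with a balloon term (kappa) and an
    attraction toward high |grad u_map| (lam). Internal regularity is
    crudely approximated by smoothing the contour at every iteration."""
    theta = np.linspace(0, 2 * np.pi, n_pts, endpoint=False)
    pts = np.stack([center[0] + radius * np.cos(theta),
                    center[1] + radius * np.sin(theta)], axis=1)

    # |grad u_map| and its spatial gradient act as the external attractor.
    g = gaussian_filter(np.hypot(sobel(u_map, axis=0), sobel(u_map, axis=1)), 2.0)
    gy, gx = np.gradient(g)

    for _ in range(n_iter):
        # Outward normals estimated from neighbouring contour points.
        tang = np.roll(pts, -1, axis=0) - np.roll(pts, 1, axis=0)
        normals = np.stack([tang[:, 1], -tang[:, 0]], axis=1)
        normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-9

        # External force: climb the gradient magnitude of the energy map.
        fy = map_coordinates(gy, [pts[:, 0], pts[:, 1]], order=1)
        fx = map_coordinates(gx, [pts[:, 0], pts[:, 1]], order=1)
        external = np.stack([fy, fx], axis=1)

        pts = pts + kappa * normals + lam * external
        # Crude internal (smoothness) term: average with neighbouring points.
        pts = 0.5 * pts + 0.25 * (np.roll(pts, 1, axis=0) + np.roll(pts, -1, axis=0))
    return pts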
Endocardial Boundary.
A major property observed on SA tagged MRI sequences is the fast erasure of tags in the cardiac cavity due to blood circulation (see Figure 1). To explain this phenomenon, one must understand that tagging process is obtained thanks to a saturation of hydrogen atoms in surfaces orthogonal to main orientations of the grid (see [1] for complete description of the process). As a consequence, muscles, fat tissues, and blood are tagged. But, considering cardiac motion, between two phases of contraction, blood is pumped out of the cardiac cavity into the main circulation. Therefore, tagged cells of blood are no more visible during acquisition process as soon as contraction has begun.
This property is of primary importance since the image can be roughly divided into two areas: a tagged area (1), where the tracked grid remains visible, and a homogeneous area (2), roughly the cardiac cavities of the left and right ventricles, where tags are no longer visible. As a consequence, areas (1) and (2) can be easily discriminated by simple texture parameters calculated on a local kernel, such as the mean and the standard deviation. Area (1) is characterized by a standard deviation with larger values than area (2), which is more homogeneous (absence of tagging). For both areas, the means remain nearly the same.
Considering this, we propose the calculation of a mean (M)-standard deviation (σ) image to build a precise gradient-based energy to detect the endocardial boundary (Figure 5(c)). This map is obtained by computing (3) on the original sampled image u_0(i, j), where (i, j) denotes the indices of a given pixel: w_m and w_σ are, respectively, the weight given to the mean computed on a kernel of size N × N centered on the processed pixel and the weight given to the standard deviation computed with the same kernel. w_m and w_σ verify w_m + w_σ = 1.
As one can notice on Figure 5(b), the computation of the mean-standard deviation image makes enhancement of the cardiac cavities of LV possible (pixels of high intensity). This image leads, on the one hand, to a possible automatic detection of the center of the LV cardiac cavity (that can be used for active contour initialization), and, on the other hand, to a gradient-based energy ( Figure 5(c)) totally adapted to our purpose.
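A minimal Python sketch of such a local mean-standard deviation map is given below; because equation (3) is not reproduced here, the plain weighted sum of the two local statistics, the kernel size, and the weight values are assumptions chosen for illustration only.

import numpy as np
from scipy.ndimage import uniform_filter

def mean_std_map(u0, N=7, w_m=0.2, w_sigma=0.8):
    """Weighted combination of the local mean and local standard deviation,
    both computed on an N x N kernel (w_m + w_sigma = 1). The exact
    combination used in the paper is not reproduced here."""
    u0 = u0.astype(float)
    local_mean = uniform_filter(u0, size=N)
    local_sq_mean = uniform_filter(u0 ** 2, size=N)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
    return w_m * local_mean + w_sigma * local_std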
Epicardial Boundary.
The gradient-based energy for the segmentation of the epicardial contour was more complex to compute since, as one can see in Figure 1, the boundary is hard to detect visually even for experts. Moreover, as for the endocardial contour, active contour segmentation cannot simply use the tagged MRI gradient as an attractor. As a consequence, to compute a useful u_epi map, it appeared interesting to analyze the particular texture of the lung (the dark area situated on the right of Figure 1). This area, compared with the rest of the tagged MRI, is described by a rough texture. Thus, the use of second-order texture parameters, and more particularly the calculation of the co-occurrence matrix entropy on an N × N block, can make the enhancement of the lung area possible by characterizing it with high entropy coefficients (Figure 6(b)). The map thus obtained allows us to compute an interesting gradient-based energy (Figure 6(c)).
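A naive (and deliberately unoptimized) Python sketch of a block-wise co-occurrence entropy map is shown below; the block size, the number of grey levels, and the single GLCM offset are illustrative assumptions, and a recent scikit-image is assumed for the graycomatrix import.

import numpy as np
from skimage.feature import graycomatrix

def entropy_map(u0, block=15, levels=16):
    """Block-wise co-occurrence (GLCM) entropy; high values are expected in
    rough-textured regions such as the lung area."""
    # Quantize to a small number of grey levels to keep the GLCM tractable.
    img = np.floor(u0.astype(float) / (u0.max() + 1e-9) * (levels - 1)).astype(np.uint8)
    h, w = img.shape
    out = np.zeros((h, w))
    half = block // 2
    for i in range(half, h - half):
        for j in range(half, w - half):
            patch = img[i - half:i + half + 1, j - half:j + half + 1]
            glcm = graycomatrix(patch, distances=[1], angles=[0],
                                levels=levels, symmetric=True, normed=True)
            p = glcm[:, :, 0, 0]
            p = p[p > 0]
            out[i, j] = -np.sum(p * np.log2(p))
    return out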
As a consequence, to compute a useful u epi map, it appeared interesting to analyze the particular texture of the lung (dark area situated on the right of Figure 1). This area, compared with the rest of tagged MRI, is described by a rough texture. Thus, the use of second-order texture parameters and more particularly, the calculation of the cooccurrence matrix entropy on an N * N block, can make the enhancement of the lung area possible by characterizing it with high entropy coefficients (Figure 6(b)). The map u then obtained allows us to compute an interesting gradientbased energy (Figure 6(c)). sequence) and tracking of the myocardial boundaries (on the other images describing diastole) are made separately; the detection method is divided into five steps: (1) u endo map is first computed and barycenter of the LV cavity is used for automatic active contour initialization (a circle) (Figures 7(a) and 7(b)); (2) considering the given automatic initialization of step 1, a first fast growing (only taking into account the balloon force) of the active contour is used to obtain a rough segmentation of the endocardial contour; (3) the resulting curve from step 2 is then used as initialization for boundary-based evolution considering corresponding term of (2) with u map = u endo map (Figure 7(b)); (4) since epicardial contour is situated close to the endocardial one (see Figure 2), an automatic radial spreading of the curve detected step 3 (no more than 4 pixels) is performed to obtain a rough segmentation of it; (5) the curve of step 4 is then used as initialization for boundary-based evolution considering corresponding term of (2) with u map = u epi map (Figure 7(c)). Considering now tracking, in order to have a quick implementation, each detected myocardial boundaries (endocardial and epicardial) at instant t of the diastole is used as initialization for a boundary-based evolution at instant t + 1.
For epicardial contour detection and tracking, concerning the part of the contour where the gradient-based energy fails to bring good attraction (the left part of the boundary), the coherence of the detection is obtained by setting the λ parameter of (2) to zero. To do so, a test of the (Euclidean) distance between the newly calculated coordinates of the active contour point and the center of the LV cavity is computed. Indeed, compared to the endocardial contour, the global displacement of the epicardial one is smaller. As a consequence, if the distance to the center of the new control point coordinates appears to be incoherent (too far from the previous one), the calculation is made again with λ = 0. Geometrical constraints are privileged to ensure the coherence of the result. Figure 8 shows results of detection for 5 different patients (extracted from a global set of 8).
Results.
Considering first the intrinsic performance of the proposed method, as one can notice, the implemented method is robust with regard to the initialization, which is always the same in each case. Moreover, the developed method is reproducible and does not need new tuning of the different parameters for new patients, as Figure 8 shows, where all segmentations have been made with the same setting of the different parameters presented in the previous section.
Considering now performances of the proposed approach in terms of precision, we propose a first statistical analysis made on a set of 8 patients as a basis for a future more complete analysis made on a larger scale. More precisely, for each of the 8 patients, 6 images extracted from a synchronized systole acquisition made at a median slice level of the LV are considered. For each image of a particular sequence, the semiautomatic segmentation of endocardial and epicardial boundaries of the LV and the corresponding myocardial surface is generated (see Figure 9(a)). The same study, starting from a manual expert segmentation of the myocardial boundaries, is also performed (see Figure 9(b)). For each image of a sequence, automatic and manual surfaces are compared (see Figure 9(c)) thanks to a calculation of the ratio corresponding to the matching and nonmatching pixels.
More precisely, each pixel of the automatic generated mask is identified as being a True Positive (TP) pixel or a False Positive (FP) pixel or a True Negative (TN) pixel or at last a False Negative (FN) pixel. Table 1 shows the average number of each type of pixels (expressed in percentage of the average total number of pixels of each pixel class within manual generated myocardium surface) for each of the 6 considered time steps of the systole.
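The pixel-wise comparison of the automatic and expert masks can be written compactly; the sketch below is a generic illustration of the TP/FP/TN/FN bookkeeping described here, with array names chosen arbitrarily.

import numpy as np

def overlap_counts(auto_mask, manual_mask):
    """Classify each pixel of the automatic myocardium mask against the
    expert mask (both boolean arrays of the same shape)."""
    tp = np.logical_and(auto_mask, manual_mask).sum()
    fp = np.logical_and(auto_mask, ~manual_mask).sum()
    fn = np.logical_and(~auto_mask, manual_mask).sum()
    tn = np.logical_and(~auto_mask, ~manual_mask).sum()
    return dict(TP=int(tp), FP=int(fp), FN=int(fn), TN=int(tn),
                matching_ratio=tp / max(int(manual_mask.sum()), 1))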
Calculation of the ratio of matching pixels between both surfaces (TP) shows that, for the 48 processed images (6 per patient), more than 80% of the different surfaces are matching. Moreover, only about 3% of the automatically detected pixels are considered as FP, that is to say, they do not match the expert surface at all. Those pixels are often situated near the endocardial boundaries. This is above all due to the papillary muscles of the LV, which are also tagged and as a consequence influence the expert segmentation (experts tend to integrate them within the cardiac cavity). One can notice that in this case the expert's segmentation of the endocardial boundaries can lead to an overestimation of the surface area of the LV cavity, since the papillary muscles must not be taken into account (which is the case with the proposed automatic method).
No comparison with other methods is proposed since, as mentioned before, the automatic methods proposed in the literature are not performed directly on tagged cardiac MRI but always on classical cardiac MRI (double acquisition).
Conclusion and Outlooks
In this article, we propose an automatic approach for the segmentation of the epicardial and endocardial boundaries of the LV directly on tagged cardiac MRI. The detection of the endocardial and epicardial boundaries on SA sequences is fully automatic and satisfactory, whereas the literature always involves manual detection during the analysis of tagged MR images. Regarding the detection of the epicardial contours, the method allows us to obtain satisfying results which are in agreement with medical specialists' opinion. The method is less robust for endocardial segmentation, but still performs well. The way the method is initialized allows it not to be too dependent on this important step. The fact that detection is very difficult for radiologists even visually is particularly important for the consideration of our results. As far as robustness is concerned, progress still has to be made, as the segmentation depends on the values of the weights selected for the different energies and on the size N of the neighborhood on which the texture maps are computed.
Regarding the precision of the method, we have presented a first statistical study which shows very promising results. While this statistical study still has to be improved in order to characterize intra- and interoperator variabilities, the obtained estimation allows us to go a step further in the use of the proposed method. It is now possible to use these results (together with those given by the tracking of the grids on SA sequences) to develop a 2D + T analysis of the myocardium. The aim of this study will be the calculation of local cardiac quantitative parameters correlated with gold-standard ones (like ejection fraction) in order to reveal early eventual pathologies such as ischemia, for example, but also to characterize myocardial viability after reperfusion.
First results have already been obtained. Classical cardiac parameters, such as radial, circumferential and longitudinal displacements, torsion, or deformations, have been calculated on 10 SA sequences. We present in Table 2 a comparison between our results for the quantification of the radial displacements and two studies from the medical literature.
As one can notice, our results are comparable to those of the medical literature. Moreover, it is also possible to realize a two-dimensional temporal map (according to the recommendations of the American Heart Association) characterizing the local displacements and local deformations of the myocardium ( Figure 10).
The presented results could be of interest for radiologists to evaluate torsion, shearing, and longitudinal and radial displacements of the LV and then to draw early diagnoses of particular cardiopathies. These first results now need to be confirmed through a more complete clinical validation. | 2014-10-01T00:00:00.000Z | 2009-06-11T00:00:00.000 | {
"year": 2009,
"sha1": "697f6290366a48a030b7eb29b9f24a38986924a4",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/ijbi/2009/983794.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "30750af38a1c397124b1d4556132acb71b9ee436",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
1735105 | pes2o/s2orc | v3-fos-license | A novel inhibitor of fatty acid synthase shows activity against HER2+ breast cancer xenografts and is active in anti-HER2 drug-resistant cell lines
Introduction Inhibiting the enzyme Fatty Acid Synthase (FASN) leads to apoptosis of breast carcinoma cells, and this is linked to human epidermal growth factor receptor 2 (HER2) signaling pathways in models of simultaneous expression of FASN and HER2. Methods In a xenograft model of breast carcinoma cells that are FASN+ and HER2+, we have characterised the anticancer activity and the toxicity profile of G28UCM, the lead compound of a novel family of synthetic FASN inhibitors. In vitro, we analysed the cellular and molecular interactions of combining G28UCM with anti-HER drugs. Finally, we tested the cytotoxic ability of G28UCM on breast cancer cells resistant to trastuzumab or lapatinib, that we developed in our laboratory. Results In vivo, G28UCM reduced the size of 5 out of 14 established xenografts. In the responding tumours, we observed inhibition of FASN activity, cleavage of poly-ADPribose polymerase (PARP) and a decrease of p-HER2, p- protein kinase B (AKT) and p-ERK1/2, which were not observed in the nonresponding tumours. In the G28UCM-treated animals, no significant toxicities occurred, and weight loss was not observed. In vitro, G28UCM showed marked synergistic interactions with trastuzumab, lapatinib, erlotinib or gefitinib (but not with cetuximab), which correlated with increases in apoptosis and with decreases in the activation of HER2, extracellular signal-regulated kinase (ERK)1/2 and AKT. In trastuzumab-resistant and in lapatinib-resistant breast cancer cells, in which trastuzumab and lapatinib were not effective, G28UCM retained the anticancer activity observed in the parental cells. Conclusions G28UCM inhibits fatty acid synthase (FASN) activity and the growth of breast carcinoma xenografts in vivo, and is active in cells with acquired resistance to anti-HER2 drugs, which make it a candidate for further pre-clinical development.
We have recently described a novel family of anti-FASN compounds that exhibit in vitro anticancer activity, which do not exhibit cross-activation of β-oxidation, and do not induce weight loss in animals [13]. In the current study, we have characterised molecularly the in vivo anticancer activity of G28UCM in a model of FASN+/HER2+ breast carcinoma. In addition, we have evaluated the pharmacological interaction of G28UCM with anti-HER drugs, such as trastuzumab, lapatinib, erlotinib, gefitinib or cetuximab, at the cellular and molecular levels. Finally, we report the effect of G28UCM on breast cancer cells resistant to trastuzumab or lapatinib. Our data support the study of G28UCM as a potential therapeutic agent, either alone or in combination, against in vivo HER2+ tumours that have progressed on trastuzumab and lapatinib.
Cell culture and cell lines BT474 and AU565 breast carcinoma cells were obtained from the American Type Culture Collection (ATCC, Rockville, MD, USA). BT474 cells were cultured in DMEM-F12 (Gibco, Berlin, Germany) supplemented with 10% heat-inactivated fetal bovine serum (FBS, HyClone Laboratories, Logan, Utah, USA), 1% L-glutamine, 1% sodium pyruvate, 50 U/mL penicillin, and 50 μg/mL streptomycin (Gibco). AU565 cells were routinely grown in Dulbecco's Modified Eagle's Medium (DMEM, Gibco) supplemented as above. Trastuzumabresistant cells (AU565TR) were developed [39,40] by exposing AU565 cells continuously to trastuzumab (0.4 μM for pool 0.4 and 2 μM for pool 2) for six months. Cells per plate were then pooled together and sensitivity to trastuzumab was determined by treating AU565 parental (AU565WT) and resistant (AU565TR) cells with 2 μM trastuzumab and performing trypan blue exclusion assay periodically during 10 days. Thus, cell pools which were resistant to trastuzumab were maintained in 2 μM trastuzumab, a concentration at which parental cells were not viable. To develop lapatinib-resistant cells (AU565LR), AU565 cells were treated for one month with an initial dose of 3.5 μM of lapatinib (IC 40 of lapatinib in AU565WT cells), at which time the dose of lapatinib was increased up to 7 μM for five months. AU565LR cells were maintained in 7 μM lapatinib, a concentration at which AU565 parental cells were not viable.
Growth inhibition and dose-response studies
Dose-response studies were done using standard colorimetric MTT reduction assay. Parental AU565 and trastuzumab-and lapatinib-resistant AU565 cells were plated out at a density of 7 × 10 3 cells/100 μL/well in 96-well microtitre plates. Following overnight cell adherence, the medium was removed and fresh medium along with the corresponding concentrations of FASN inhibitors (EGCG and G28UCM) or anti-HER agents (trastuzumab, cetuximab, erlotinib, gefitinib and lapatinib) were added to the cultures. For the drug-combination experiments a dose concentration of G28UCM (5 to 40 μM) and EGCG (20 to 150 μM) plus different fixed concentrations of trastuzumab, cetuximab, erlotinib, gefitinib and lapatinib, were added to the microtitre culture plates. The concentrations of the anti-HER2 agents were determined from dose-response experiments in AU565 cells (data not shown). Agents were not renewed during the entire period of cell exposure (48 h for erlotinib, gefitinib or lapatinib and 72 h for trastuzumab or cetuximab), and control cells without agents were cultured under the same conditions with comparable media changes. Following treatment, the media was replaced by drug-free medium (100 μL/well) containing MTT solution (10 μL, 5 mg/ml in PBS), and incubation was prolonged for 3 h at 37°C. After carefully removing the supernatants, the formazan crystals formed by metabolically viable cells were dissolved in DMSO (100 μL/ well) and the absorbance was determined at 570 nm in a multi-well plate reader (Model Rosyf Anthos 2010, Anthos Labtec B.V., Heerhugowaard, Nederland). Using control optical density (OD) values (C), test OD values (T), and time zero OD values (T 0 ), the compound concentration that caused 50% growth inhibition (IC 50 value) was calculated from the equation, 100 × ((T -T 0 )/(C -T 0 )) = 50. The data presented are from three separate wells per assay and the assay was performed at least three times.
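As a purely illustrative aid, the following Python sketch computes the net-growth quantity 100 × (T − T0)/(C − T0) from the optical densities and interpolates its 50% crossing on a log-dose scale; the interpolation scheme (rather than a sigmoid fit) and the assumption of monotonically decreasing growth with dose are simplifications, not the procedure used in the study, and the example ODs are hypothetical.

import numpy as np

def net_growth(od_treated, od_control, od_time_zero):
    """Percent net growth relative to control: 100 * (T - T0) / (C - T0);
    the IC50 is the concentration at which this equals 50."""
    return 100.0 * (od_treated - od_time_zero) / (od_control - od_time_zero)

def ic50(concentrations, od_treated, od_control, od_time_zero):
    """Estimate IC50 by linear interpolation of net growth on a log-dose scale.
    Assumes net growth decreases monotonically with concentration."""
    conc = np.asarray(concentrations, dtype=float)
    growth = np.array([net_growth(t, od_control, od_time_zero) for t in od_treated])
    order = np.argsort(conc)
    logc, growth = np.log10(conc[order]), growth[order]
    return 10 ** np.interp(50.0, growth[::-1], logc[::-1])

# Hypothetical example: ODs measured for 5, 10, 20 and 40 uM of compound.
print(ic50([5, 10, 20, 40], [0.9, 0.7, 0.45, 0.3], od_control=1.0, od_time_zero=0.2))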
Isobologram analysis of drug interactions
The interactions of G28UCM and EGCG with anti-HER drugs (trastuzumab, lapatinib, gefitinib, erlotinib and cetuximab) were evaluated by the isobologram method as we have previously published [41,42]. Briefly, the concentration of one agent producing a 30% inhibitory effect is plotted on the horizontal axis, and the concentration of another agent producing the same degree of effect is plotted on the vertical axis; a straight line joining these two points represents zero interaction (addition) between two agents. The experimental isoeffect points were the concentrations (expressed relative to the IC 30 concentrations) of the two agents that when combined kill 30% of the cells. When the experimental isoeffect points fell below that line, the combination effect of the two drugs was considered to be supra-additive or synergistic, whereas antagonism occurs if the experimental isoeffect points lie above it. Within the designed assay range, a set of isoeffect points was generated because there were multiple FASN inhibitors and antitarget agent concentrations that achieved the same isoeffect. A quantitative index of these interactions was provided by the equation I x = (A/a) + (B/b), where, for this study, a and b represent the respective concentrations of FASN inhibitors (EGCG or G28UCM) and anti-HER agents (trastuzumab, cetuximab, erlotinib, gefitinib and lapatinib) required to produce a fixed level of inhibition (IC 30 ) when administered alone, and A and B represent the concentrations required for the same effect when the drugs were administered in combination, and I x represents an index of drug interaction (interaction index). I x values of < 1 indicate synergy, a value of 1 represents addition, and values of > 1 indicate antagonism. For all estimations of I x , we used only isobolos where intercept data for both axes were available.
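The interaction index itself reduces to a one-line computation; the sketch below, with hypothetical concentrations in the usage example, is only meant to make the definition concrete.

def interaction_index(a_alone, b_alone, a_combo, b_combo):
    """I_x = A/a + B/b at a fixed effect level (e.g. IC30):
    I_x < 1 synergy, I_x = 1 additivity, I_x > 1 antagonism."""
    return a_combo / a_alone + b_combo / b_alone

# Hypothetical example: FASN inhibitor IC30 of 20 uM alone and 8 uM in combination,
# anti-HER drug IC30 of 2 uM alone and 0.5 uM in combination.
print(interaction_index(20.0, 2.0, 8.0, 0.5))  # 0.65 -> synergistic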
Western blot analysis of tumour and cell lysates
Cells and animal tumour tissues were collected and lysed in ice-cold lysis buffer containing 1 mM EDTA, 150 mM NaCl, 100 μg/mL PMSF, 50 mM Tris-HCl (pH 7.5), protease and phosphatase inhibitor cocktails (Sigma). A sample was taken for measurement of protein content by Lowry-based BioRad assay (BioRad Laboratories, Hercules, CA, USA) and either used immediately or stored at -80°C. Total protein extracts were immunoblotted using 3% to 8% SDS-PAGE (FASN, p185 HER2/neu , phospho-p185 HER2/neu , mTOR and phospho-mTOR) or 4% to 12% SDS-PAGE (AKT, phospho-AKT, ERK1/2 and phospo-ERK1/2 and PARP), transferred to nitrocellulose membranes and blocked for 1 h in blocking buffer at room temperature (2.5% powdered-skim milk in PBS-T (10 mM Tris-HCL pH 8.0, 150 mM NaCl and 0.05% Tween-20)) to prevent nonspecific antibody binding. Blots were incubated overnight at 4°C with the corresponding primary antibody diluted in blocking buffer. After washes in PBS-T (3 × 5 minutes), blots were incubated for 1 h with the corresponding secondary antibody and revealed, employing a commercial kit (West Pico chemiluminescent substrate). Blots were re-probed with an antibody for β-actin to control for protein loading and transfer.
In vivo studies: human breast tumour xenograft experiments
Experiments were conducted in accordance with guidelines on animal care and use established by Biomedical Research Institute of Bellvitge (IDIBELL) Institutional Animal Care and Scientific Committee. The BT474 cell line was selected for the in vivo studies due to its high constitutive FASN and HER2 expression and its in vivo behavior, as we have previously reported [13]. A dose of G28UCM of 40 mg/Kg was chosen for efficacy experiments. Ten female mice were included in the control group and 14 in the G28UCM-treated group. Tumour xenografts were established by subcutaneous injection of 10 × 10 6 BT474 cells mixed in Matrigel (BD Bioscience, Bedford, MA, USA) into the flank. Tumours were allowed to increase up to a size of 150 to 250 mm 3 . Mice were treated by intraperitoneal injection daily with 40 mg/Kg of G28UCM or vehicle for 45 days. Mice were weighed once per week, tumours were measured daily with electronic calipers, and tumour volumes were calculated by the formula: (π/6 × (v1 × v2 × v2)), where v1 represents the largest tumour diameter, and v2 the smallest one. At the end of the experiment, animals were weighed and all mice were euthanized, and tumours, brain, lung, heart, liver, spleen, intestine and kidney tissues and serum were stored at -80°C.
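A small helper for the caliper-based volume formula is sketched below; the example diameters are hypothetical.

from math import pi

def tumour_volume(v1, v2):
    """Ellipsoid approximation (pi/6) * v1 * v2 * v2, with v1 the largest
    and v2 the smallest tumour diameter (same length units)."""
    return pi / 6.0 * v1 * v2 * v2

# Hypothetical example: diameters of 8 mm and 6 mm give roughly 151 mm^3.
print(round(tumour_volume(8.0, 6.0), 1))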
In vivo studies: animal toxicity experiments
Experiments were conducted in accordance with guidelines on animal care and use established by the Biomedical Research Institute of Bellvitge (IDIBELL) Institutional Animal Care and Scientific Committee (AAALAC unit 1155). The study protocol received ethical approval. Female athymic nude BALB/c mice (four to five weeks old, 23 to 25 g) were purchased from Harlan Laboratories (France), fed ad libitum with a standard rodent chow and housed in a light/dark 12 h/12 h cycle at 22°C in a pathogen-free facility for one week. Animals were randomized into four groups of six animals each: control, 5, 40 and 75 mg/Kg G28UCM-treated animals. Each group received daily a single intraperitoneal (i.p.) injection (0.5 mL) of G28UCM (5, 40 and 75 mg/Kg) or vehicle alone (DMSO), dissolved in RPMI 1640 medium. The body weight was registered daily for 45 days. On day 45 animals were sacrificed and renal (urea and creatinine) and hepatic (aspartate transaminase, alanine transaminase and alkaline phosphatase) function markers, and hematological parameters (% neutrophils, % lymphocytes, % monocytes, % platelets, hemoglobin and % hematocrit), were determined in serum of control and G28UCM-treated animals.
Ex vivo immunohistochemistry of FASN
Immunohistochemical staining for FASN was performed using a rabbit monoclonal antibody anti-FASN (Assay Designs, Ann Arbor, MI, USA). Briefly, paraffinembedded tissue sections of control and G28UCM-treated xenografts were deparaffinized, rehydrated, and blocked with 2% hydrogen peroxide for endogenous peroxidase. Slides were washed with phosphate-buffered saline (PBS) and blocked with 20% horse serum (JRH Bioscience, Lexena, KS, USA). Slides were then incubated with anti-FASN antibody overnight at 4°C. After additional PBS washes, sections were sequentially incubated at room temperature for 45 minutes with biotinlabeled antirabbit IgG (Envision + R System Labelled Polymer-HRP anti-rabbit, Dako, Aachen, Germany). Slides were washed with PBS and incubated with diaminobenzidine (DAB, Sigma Chemical, St. Louis, MO). Finally, slides were counterstained with Hematoxylineosin, dehydrated, cleared and cover-slipped. FASN expression was categorized as negative (no or weak expression) or positive (strong expression). Appropriate positive and negative controls were included in each run of immunohistochemistry. All immunohistochemically stained slides were interpreted by a pathologist blinded to other data.
Fluorescent in situ hibridation (FISH)
Cytospin slides of AU565 parental and resistant cells to trastuzumab or lapatinib were prepared. The HER2 FISH pharmDX™ Kit (Dako, Aachen, Germany) was used as directed by the manufacturer. Slides were heated in Pre-Treatment Solution for 10 minutes, and digested with ready-to-use pepsin at room temperature for 5 to 10 minutes. A ready-to-use FISH probe mix was hybridised onto slides. This probe mix consists of a mixture of Texas Red-labelled DNA probes covering a 218 kb region including the HER2 gene on chromosome 17 (CEN17), and a mixture of fluorescein-labelled peptide nucleic acid (PNA) probes targeted at the centromeric region of CEN17. The specific hybridisation to the two targets results in formation of a distinct red fluorescent signal at each HER2 gene locus and a distinct green fluorescent signal at each chromosome 17 centromere. After a stringent wash with the buffer the slides were mounted with fluorescent mounting medium containing DAPI and coverslipped. Twenty nuclei were assessed for HER2 and CEN17. The ratio of average HER2 to average CEN17 copy number was calculated. Gene amplification was defined when the FISH ratio HER2 signal/ CEN17 signal was > 2.
Statistical analysis
Results were analysed by Student's t-test or by one-way ANOVA using a Tukey test as a post-test. Statistical significant levels were P < 0.05 (denoted as *) and P < 0.005 (denoted as**). All data are means ± standard deviation (SD) or ± standard error (SE). All observations were confirmed by at least three independent experiments.
To analyse the activation of HER2 and its downstream related phosphoinositide-3 kinase/protein kinase B (PI3K/AKT) and mitogen-activated protein kinase/ extracellular signal-regulated kinase (MAPK/ERK1/2) signalling cascades or to the mammalian target of rapamycin protein (mTOR) signalling pathway, we performed Western blotting and immunohistochemical analysis of each individual animal tumour. Apoptosis and induction of caspase activity were checked with cleavage of poly-ADP-ribose polymerase (PARP) in Western blotting analysis. Apoptosis was not detected in the tumours of control (C) and treated (T) animals with non-responding tumours. In contrast, in the tumours of G28UCM-responding animals, there was an increase in the levels of 89 kDa PARP product. Figure 1B shows the results of some representative tumours of each experimental group. We next examined the effects of G28UCM on HER2 and its related downstream proteins AKT, ERK1/2 and mTOR. Tumours that showed a response to G28UCM had a marked decrease in phosphorylated HER2, ERK1/2 and mTOR proteins and, to a lesser extent in phosphorylated AKT, without detectable changes in the total levels of the corresponding proteins. Figure 1B shows a representative result of each experimental group.
We also analysed FASN protein expression levels of each individual animal tumour. Results in Figure 1B depict FASN levels from one representative animal of the control group (4C, Figure 1A) and two G28UCM-treated animals (4T and 12T, Figure 1A). No significant changes in FASN protein levels were observed in any of the samples, as assessed both by Western blotting ( Figure 1B) and either by immunohistochemical staining ( Figure 1C). With respect to ex vivo FASN enzymatic activity (see Additional file 1), however, the experimental tumours that had a response to G28UCM (11T and 12T, Figure 1A) showed a decrease of 30.5 ± 15% compared with the control 4C tumour (see Additional file 2).
Toxicity studies
Previous first-generation FASN inhibitors have been limited by the induction of severe body weight loss, which is thought to be related to a parallel stimulation of fatty acid oxidation by these inhibitors [4,6,[24][25][26][27][28]. To address this problem, G28UCM was designed to inhibit FASN activity without parallel stimulation of in vitro fatty acid oxidation [13]. In this study, animals treated for 45 days with G28UCM were weighed daily to evaluate the in vivo effect of the novel FASN inhibitor on body weight. With respect to control animals, we identified no significant changes in food and fluid intake or body weight after daily treatment with 40 mg/Kg of G28UCM for 45 days. The average weight of the animals at the beginning of the study was 19.8 ± 1.7 g. At the conclusion of the study, control animals had increased their weight by 7.15 ± 0.8% of pre-treatment weight, compared with 8.04 ± 1.6% for the G28UCM-treated animals (Figure 1D), which was not statistically significant.
Hepatic and renal function serum markers (aspartate transaminase, alanine transaminase, alkaline phosphatase, creatinin and urea) showed no significant alteration between control and experimental animals treated with G28UCM at daily doses of 5, 25 or 40 mg/Kg. Animals treated at doses of 75 mg/Kg, however, showed differences compared with control in their blood counts, in particular, increased neutrophils and platelet cells and decreased monocytes and lymphocytes (see Additional file 3 for hepatic, renal and hematological function serum markers of G28UCM-treated animals). Histological studies (Hematoxylin-Eosin, Masson's Trichrome, Sudan Black B and Picrosirius Red stains) of liver, heart, kidney, lung and brain showed no tissue structural abnormalities in G28UCM-treated animals when compared with control animals (data not shown).
In vitro cell growth interactions between G28UCM and anti-HER drugs
To determine how best to use G28UCM either as a single agent or in combination with anti-HER drugs, we conducted a series of in vitro studies to evaluate the inhibitory effects of G28UCM in combination with trastuzumab, cetuximab, erlotinib, gefitinib and lapatinib in a pre-clinical model of HER2-overexpressing breast cancer cells. The combined effect was analysed by the isobole method, using a series of isobologram transformations of multiple dose-response curves at an effect level of 30% (IC 30 ), a type of analysis that we have used previously [41,42]. Results in Table 1 show the median interaction index (Ix) of combinations of G28UCM with trastuzumab, cetuximab, erlotinib, gefitinib and lapatinib. The interaction index (I x ) for the two-drug effect in AU565 cells was calculated using isobologram analysis; the I x parameter indicates whether the doses of the two drugs required to produce a given degree of cytotoxicity are greater than (I x > 1, antagonism), equal to (I x = 1, additivism) or less than (I x < 1, synergism) the doses that would be required if the effect of the two agents were strictly additive. I x values for the two-drug treatment were obtained from triplicate studies; * (P < 0.05) and ** (P < 0.005) indicate the level of statistical significance of the I x compared with an I x of 1.0. Simultaneous treatment of AU565 cells with G28UCM and either trastuzumab, lapatinib, gefitinib or erlotinib resulted in a strong synergistic interaction (I x = 0.519 ± 0.178, I x = 0.796 ± 0.144, 0.882, I x = 0.832 ± 0.161 and I x = 0.735 ± 0.092, respectively). The combination of G28UCM plus cetuximab indicated a marked antagonistic interaction (I x = 1.913 ± 0.243). Under the same schedule, EGCG showed an additive interaction with trastuzumab (I x = 1.123 ± 0.458) and antagonistic interactions with lapatinib, gefitinib, erlotinib and cetuximab (I x = 1.875 ± 0.691, I x = 1.829 ± 0.672, I x = 1.393 ± 0.229 and I x = 2.156 ± 0.215, respectively). Together, these data show that co-exposure of the FASN inhibitor G28UCM with drugs that exhibit anti-HER2 activity (but not with the specific anti-HER1 compound, cetuximab) is more active than either of the drugs used alone.
Molecular interactions between G28UCM and anti-HER drugs
To determine whether the molecular causes of the synergistic interactions between G28UCM and trastuzumab, lapatinib, cetuximab and erlotinib were triggered by changes in the phosphorylated forms of HER2 and its downstream signaling proteins, we analysed changes in apoptosis and HER2, AKT and ERK1/2 protein phosphorylated forms. First, we studied the cell death mechanism. Apoptosis and induction of caspase activity were checked by Western blotting analysis showing cleavage of PARP. The experiments were done at a concentration equal to the cytotoxicity IC 50 value of G28UCM and anti-HER drugs (trastuzumab, lapatinib, cetuximab and erlotinib) in AU565 cells. Co-treatment of AU565 cells with G28UCM (30 μM) plus trastuzumab (1 μM) during 24 h induced a marked increase in the levels of the PARP cleavage product (89 kDa band) compared to 24 h single agent (G28UCM or trastuzumab) treatment ( Figure 2). The apoptotic effect of the combined regimes was validated by flow cytometry using the Annexin V-Alexa Fluor 488 staining (data not shown). Similar results in PARP cleavage were obtained when AU565 cells were co-treated with G28UCM (30 μM) plus lapatinib (5 μM) during 12 hours or plus erlotinib (8 μM) during 24 hours ( Figure 2). Therefore, we sought to compare the effects of combined treatments versus single drug treatments on HER2, AKT, and ERK1/2 activation. The phosphorylated form of HER2 (p-HER2) was noticeably decreased after 24 h exposure to G28UCM plus trastuzumab, and p-AKT protein decreased after 48 h of co-treatment with G28UCM and trastuzumab ( Figure 3). Co-incubation of cells with G28UCM and lapatinib was significantly correlated with a decreased level of the phosphorylated form of HER2 (pHER2) and p-ERK1/2, which occurred as soon as 12 h after treatment compared to 12 h cell treatment with either G28UCM or lapatinib alone ( Figure 3). Co-exposure of G28UCM plus erlotinib induced a decrease of p-HER2 and p-AKT after 24 hours (Figure 3). During all time-course co-treatment experiments no significant change either in the total level of the corresponding proteins (HER2, ERK1/2 and AKT) or in FASN levels was detected ( Figure 3). As we expected, under the same culture conditions, co-treatment of AU565 cells with G28UCM plus cetuximab (15 μg/mL) did not induce apoptosis ( Figure 2) and did not block HER2 phosphorylation or its downstream related signal transduction pathways ERK1/ 2 and PI3K/AKT (Figure 3).
Effect of G28UCM on cells resistant to trastuzumab or lapatinib
The vast majority of HER2 positive advanced breast cancer patients develop resistance to trastuzumab based therapies within the first year of treatment. Consequently, identification of novel agents that inhibit the growth of trastuzumab-resistant cells/tumours is critical to improving the survival of metastatic HER2+ breast cancer. For this purpose, we extended our study to examine the anti-cancer effect of G28UCM on HER2+ breast cancer cells (AU565) that were continuously exposed in culture medium supplemented with trastuzumab (AU565TR) or lapatinib (AU565LR) over a period of at least six months. Trastuzumab resistant (AU565TR) or lapatinib resistant (AU565LR) cells were developed in our laboratory as described in the Materials and methods section. Sensitivity to trastuzumab was determined by treating AU565 parental and resistant cells to 2 μM trastuzumab and performing trypan blue exclusion assay periodically during 10 days ( Figure 4A, left). A dose of 2 μM trastuzumab caused a significant cell death in AU565 cells (70.2 ± 5%), but the majority of AU565TR cells remained viable (94.6 ± 7%). Lapatinib resistance was confirmed by an MTT colorimetric assay ( Figure 4A, right).
To eliminate the possibility that we have selected a population of resistant cells that do not possess HER2 gene amplification, we examined HER2 gene amplification by fluorescence in situ hybridisation using a method that determines oncogene copy number corrected to the number of copies of chromosome 17 (CEP17). The ratio of the average HER2 gene copy number to the average CEP17 gene copy number in AU565TR was 3.9, 4.9 in AU565WT, and 4.4 in AU565LR respectively, demonstrating that both trastuzumab and lapatinib resistant cells possess HER2 amplification similar as parental cells (Table 2).
Additionally, we performed immunoblotting experiments to determine HER2, pospho-HER2 (pHER2) and FASN protein levels in AU565TR and AU565LR cells. HER2 and pHER2 were down-regulated in AU565TR cells ( Figure 4B). In AU565LR cells, protein levels of HER2 and pHER2 did not change compared with AU565WT cells and FASN levels were similar in the three cell lines ( Figure 4B). To analyse the sensitivity of the resistant cells to G28UCM, we determined the growth inhibition effect of this compound by an MTT colorimetric assay, using trastuzumab and lapatinib as reference compounds. As expected, trastuzumab and lapatinib had either no effect or a weak effect on growth inhibition of trastuzumab-and lapatinib-resistant cells, respectively ( Figure 4C). For instance, while the IC 30 value of trastuzumab in AU565WT was 2 μM, AU565TR cells were insensitive to trastuzumab at the concentrations analysed (up to 50 μM of trastuzumab). The IC 30 value of lapatinib was increased from 1.6 μM in AU565WT to 14 μM in AU565LR ( Figure 4C). Trastuzumab concentration necessary to achieve IC 30 value had to be increased about 16-fold in AU565LR (IC 30 = 31.5 ± 4.9 μM) compared to AU565WT (IC 30 = 2 ± 0.7 μM), and lapatinib had no cytotoxic activity in AU565TR cells using doses up to 50 μM ( Figure 4C). Interestingly, G28UCM showed similar cytotoxic activity in parental (IC 30 = 22 ± 7 μM), trastuzumab-(IC 30 = 24 ± 8 μM) and lapatinib-resistant cells (IC 30 = 17 ± 2 μM). Taken together, these data suggest that inhibiting FASN activity may be a new therapeutic strategy in breast carcinomas with acquired resistance to anti-HER2 therapies.
Discussion
Treatment with G28UCM was associated with xenograft volume reductions of 20% to 90% in 5 of 14 animals. The responding tumour tissues showed changes in apoptosis and in HER2-related signalling pathways: they showed an increase in the levels of the 89 kDa PARP product, and the phosphorylated forms of HER2 (pHER2), ERK1/2 (pERK1/2) and mTOR (pmTOR) were almost abolished. These samples also showed a decline in FASN enzymatic activity, but not in total FASN levels. It is not clear why a substantial number of xenografts did not respond to G28UCM. The degree of interindividual variability in the response to G28UCM might be related to bioavailability, clonal variation or experimental design. Concerning bioavailability, G28UCM reached the target tissue in the responding xenografts, since the in vivo FASN inhibition was 30% (see SD), which is similar to the reported intra-tumour 40% inhibition of FASN activity 12 hours after intraperitoneal injection of other FASN inhibitors [43]. Non-responding tumours, in contrast, had no detectable changes in apoptosis or in pHER2, pERK or pmTOR expression after treatment with G28UCM. The observed inhibition was nonetheless able to elicit clear molecular responses in at least one-third of the treated animals. Clonal variability of BT474 cells cannot be excluded; in fact, Sheridan et al. described that 80% of BT474 cells in culture expressed CD24, while 20% did not [44]. The relevance of CD24, a cell adhesion molecule, in our system is not clear. Furthermore, for the sake of therapeutic significance, our experimental design consisted of administering G28UCM only after the xenografts had reached a size of 100 to 150 mm3. It is possible that treating smaller tumours, or administering G28UCM at the same time as the human cells were implanted, might translate into a less variable result. Future experiments will need to explore in detail the pharmacokinetics and pharmacodynamics of the compound in this model, and to develop alternative animal and xenograft models as well as alternative routes of administration of the compound. These in vivo data seem to confirm that the oncogenic properties of FASN could be associated with an increased phosphorylation of HER2 and its related PI3K/AKT, MAPK/ERK1/2 and mTOR signaling cascades [4,5,8,13-20]. In this report we did not address the extent to which the effects of G28UCM are mediated by inhibition of FASN alone or by off-target effects, since we have reported previously on this relationship [13]. Future experiments, however, will address the specificity of G28UCM against FASN. This is particularly important since the parent molecule of G28UCM has been reported to have an array of biological activities, including the inhibition of gelatinase-B (MMP-2), NO synthase or aromatase enzymatic activities [45-47]. An important part of our in vivo results concerns the toxicity of G28UCM. We performed a long-term weight evaluation, and no significant effect on food and fluid intake or body weight was identified after daily treatment with 40 mg/kg of G28UCM for 45 days. In addition, serum markers of hepatic and renal function and histological studies of liver, heart, kidney, lung and brain showed no significant alterations between control animals and animals treated daily with G28UCM for 45 days.
We suggest that the chemical structure of G28UCM may be more specific to the lipogenic pathway than cerulenin or its derivatives, which stimulate CPT-1 and accelerate fatty acid β-oxidation, an effect that has been related to the severe decrease of food intake and induction of weight loss in rodents [24-28].
We found that the simultaneous treatment of FASN+/HER2+ breast cancer cells with G28UCM plus trastuzumab or lapatinib (which target predominantly HER2) resulted in a strong synergistic interaction, and that this was also observed with gefitinib or erlotinib (inhibitors of HER1 but also of HER2 tyrosine kinase activities) [48,49]. In contrast, the combination of G28UCM with the monoclonal antibody cetuximab (which is HER1-specific) resulted in an antagonistic effect. Taken together, these results support the notion that the interactions between FASN and HER proteins are restricted to HER2 and do not involve the HER1 receptor. On the other hand, EGCG showed only an additive interaction with trastuzumab and an antagonistic interaction with lapatinib, gefitinib, erlotinib and cetuximab, which may be in part related to the lower cytotoxic activity of EGCG by itself. We also addressed the molecular interactions of G28UCM, analysing FASN protein levels, apoptosis, and the phosphorylated forms of HER2, AKT and ERK1/2 proteins after G28UCM was combined with trastuzumab, erlotinib, gefitinib or lapatinib treatment. Trastuzumab and the HER tyrosine kinase inhibitors (lapatinib, gefitinib and erlotinib) displayed a molecular synergistic interaction with G28UCM. This synergistic effect was accompanied by increased apoptosis and seemed to be mediated by abrogation of the activation of HER2, AKT and ERK1/2 when the drugs were combined. Importantly, the synergistic molecular effects observed with G28UCM in combination with trastuzumab, erlotinib, gefitinib or lapatinib followed the same pattern as the cellular effects. These in vitro cellular and molecular synergistic results support the in vivo evaluation of these agents in a combination regimen.
Finally, we used stable cell lines derived from the AU565 cells that were resistant to either trastuzumab (AU565TR) or lapatinib (AU565LR) to test the anticancer properties of G28UCM. In these cells, in which the cytotoxicity of trastuzumab and lapatinib was almost lost, we observed that the cytotoxic activity of G28UCM was similar in the resistant and parental cells. The activity of G28UCM in this model of resistance to anti-HER2 treatments is consistent with a previous report showing that trastuzumab-resistant breast cancer cells were sensitive to EGCG [50]. Furthermore, our results also show that, even after long-term exposure to trastuzumab and lapatinib, resistant cells continued to overexpress FASN.
Conclusions
In summary, our findings provide a rationale for the pre-clinical development of G28UCM, either alone or in combination with anti-HER agents (trastuzumab, lapatinib, erlotinib, gefitinib or cetuximab), in HER2-overexpressing breast cancer. In addition, we report the effect of G28UCM on breast cancer cells resistant to trastuzumab or lapatinib. Our data support the study of G28UCM as a potential therapeutic agent, either alone or in combination, in vivo against HER2+ tumours that have progressed on trastuzumab and lapatinib. Future studies will focus on testing the in vivo activity of G28UCM in mice bearing trastuzumab- and lapatinib-resistant xenografts.
Additional material
Additional file 1: Additional Material and methods on ex vivo FASN enzymatic activity assay.
Additional file 2: Figure. FASN activity decrease in G28UCM-treated responsive animal. Twelve hours after the last i.p. G28UCM injection, tumour tissues from a representative animal of control (4C) and G28UCM-treated responding group (12T) were minced and homogenized in ice-cold lysis buffer and FASN activity was assayed in particle-free supernatants by recording spectrophotometrically at 37°C the decrease of A340 nm due to oxidation of NADPH after the addition of malonyl-CoA as described in the Materials and methods section. Data are mean ± SD from two separate experiments.
Additional file 3: Table. Hepatic, renal and hematological function serum markers of G28UCM-treated animals. | 2016-05-12T22:15:10.714Z | 2011-12-16T00:00:00.000 | {
"year": 2011,
"sha1": "debe2f793ebca718957f830cd085e8bcd185904b",
"oa_license": "CCBY",
"oa_url": "https://breast-cancer-research.biomedcentral.com/track/pdf/10.1186/bcr3077",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "debe2f793ebca718957f830cd085e8bcd185904b",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
12195041 | pes2o/s2orc | v3-fos-license | The N = 2 and N = 4 Supersymmetric Extensions of the Lorentz- and CPT-Violating Term in Abelian Gauge Theories
In this work, we propose the N=2 and N=4 supersymmetric extensions of the Lorentz-breaking Abelian Chern-Simons term. We formulate the question of the Lorentz violation in 6 and 10 dimensions to obtain the bosonic sectors of N=2, and N=4, supersymmetries, respectively. From this, we carry out an analysis in N=1, D=4 superspace and, in terms of N=1, superfields, we are able to write down the N=2 and N=4 supersymmetric versions of the Lorentz-violating action term.
Introduction
The formulation of physical models for the fundamental interactions in the framework of quantum field theories for point-like objects is based on a number of principles, among which are Lorentz covariance and invariance under suitable gauge symmetries. However, mechanisms for the breakdown of these symmetries have been proposed and discussed in view of a number of phenomenological and experimental indications [1,2,3,4,5]. Astrophysical observations indicate that Lorentz symmetry may be slightly violated, which could account for observed anisotropies. One may then consider a gauge theory in which Lorentz symmetry breaking is realized by means of a term in the action. A Chern-Simons-type term may be considered that exhibits a constant background four-vector, which maintains gauge invariance but breaks the Lorentz space-time symmetry [1].
In the context of supersymmetry (SUSY), the issue of Lorentz violation has been considered in the literature in different formulations: in ref. [6], supersymmetry is presented by introducing a suitable modification of its algebra; in refs. [7,8], one achieves the N = 1 SUSY version of the Chern-Simons term by means of the conventional superspace-superfield formalism; in ref. [9], the authors adopt the idea of Lorentz-breaking operators. More particularly, considering the importance of extended supersymmetries in connection with gauge theories, we propose in this work an N = 2 and an N = 4 extended supersymmetric generalization of the Lorentz-breaking Chern-Simons term in a 4-dimensional Minkowski background. We start off with the Chern-Simons term in (1 + 5) and (1 + 9) space-time dimensions and adopt a particular dimensional reduction method, see [10], to obtain the bosonic sectors in D = (1 + 3) of the N = 2 and N = 4 supersymmetric models, respectively. This is possible because, in N = 1, D = 6 and N = 1, D = 10 supersymmetries, the bosonic sector has the same number of degrees of freedom as the bosonic sectors of the N = 2, D = 4 and N = 4, D = 4 theories, respectively [11]. Once the bosonic sectors are identified, we adopt an N = 1, D = 4 superfield formulation to write down the gauge potential and the Lorentz-violating background supermultiplets, and to finally set up their coupling in terms of N = 2 and N = 4 actions realized in N = 1 superspace. The result is projected out in component fields and we end up with the complete actions that realize the extended supersymmetric version of the Abelian Chern-Simons Lorentz-violating term.
The general organization of our work is as follows: in Section 2, we set some preliminaries for the presentation of the N = 2 Abelian gauge model in terms of N = 1 superspace and superfields. In Section 3, we focus on the task of carrying out the N = 2 extension of the Lorentz-violating Chern-Simons term. Next, we go one step further and reassess the discussion of Section 3 for the case of a (maximally) N = 4-extended gauge theory. This is the content of Section 4. Finally, in Section 5, we present our Concluding Remarks and Comments. An Appendix follows, where we collect the relevant conventions to perform the N = 1-superfield manipulations.
N = 2-SUSY Abelian gauge model: basic ideas
The N = 2 supersymmetric generalization of the Abelian gauge model can be built up by using the superfield formalism in an N = 1 superspace parametrized by the coordinates $(x^\mu, \theta^a, \bar\theta^{\dot a})$ [10]. The bosonic sector of the gauge action can be obtained by means of a dimensional reduction from D = 6 to D = 4 [12,13]. We start from the Maxwell Lagrangian in 6 dimensions, with $\hat\mu = 0, 1, 2, 3, 4, 5$. The connection $A_{\hat\mu}$ can be parametrized as $A_{\hat\mu} = (A_\mu, \varphi_1, \varphi_2)$, where $\mu = 0, 1, 2, 3$. Notice that we keep the 6 components in 4 dimensions. By adopting as an ansatz the fact that the fields have no dependence on the coordinates $x^4$ and $x^5$, we obtain the D = 4 Lagrangian, which is the bosonic sector of the N = 2 extended supersymmetric action. The supersymmetrization of the theory above is achieved by combining superfields of the N = 1 superspace into supermultiplets that accommodate the ordinary fields and their superpartners. The superfields that accomplish the task of accommodating the usual fields and their respective superpartners are a scalar superfield, Φ, and a vector superfield, V, of N = 1, D = 4 superspace, which together play the role of the vector multiplet (Φ, V) of N = 2, D = 4.
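The explicit expressions referred to above are not reproduced here; a plausible reconstruction, assuming only the standard Maxwell form, the parametrization $A_{\hat\mu}=(A_\mu,\varphi_1,\varphi_2)$, and a mostly-minus metric (normalizations may differ from the original), is

$$
\mathcal{L}_{6} \;=\; -\frac{1}{4}\,F_{\hat\mu\hat\nu}F^{\hat\mu\hat\nu},
\qquad
F_{\hat\mu\hat\nu} \;=\; \partial_{\hat\mu}A_{\hat\nu}-\partial_{\hat\nu}A_{\hat\mu},
\qquad \hat\mu,\hat\nu = 0,\dots,5,
$$

which, for fields independent of $x^4$ and $x^5$, reduces to

$$
\mathcal{L}_{4} \;=\; -\frac{1}{4}\,F_{\mu\nu}F^{\mu\nu}
\;+\;\frac{1}{2}\,\partial_\mu\varphi_1\,\partial^\mu\varphi_1
\;+\;\frac{1}{2}\,\partial_\mu\varphi_2\,\partial^\mu\varphi_2 .
$$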
The vector superfield V, taken in the WZ gauge, fulfills the reality constraint $V = V^\dagger$. The scalar superfield Φ obeys the chirality condition $\bar D\Phi = D\bar\Phi = 0$.
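The component expansions of these superfields are not written out here; for orientation, the forms commonly quoted (in Wess-Bagger-type conventions, whose normalizations may differ from those adopted in the paper's Appendix) are

$$
V_{\mathrm{WZ}} \;=\; \theta\sigma^\mu\bar\theta\,A_\mu
\;+\;\theta\theta\,\bar\theta\bar\lambda
\;+\;\bar\theta\bar\theta\,\theta\lambda
\;+\;\tfrac{1}{2}\,\theta\theta\,\bar\theta\bar\theta\,D ,
$$

$$
\Phi(y,\theta) \;=\; \varphi(y)\;+\;\sqrt{2}\,\theta\psi(y)\;+\;\theta\theta\,F(y),
\qquad y^\mu = x^\mu + i\,\theta\sigma^\mu\bar\theta .
$$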
It is clear that the Lagrangian (2.6) is invariant under N = 1 supersymmetry transformations and that it also exhibits N = 2 invariance.
3 The Lorentz-violating term in the N = 2 gauge model
Now, we shall look for the N = 2 supersymmetric version of the Chern-Simons Lorentz-breaking term. Using the fact that the bosonic sector for N = 2 in D = 4 is the same as the one for N = 1 in D = 6, we write the Chern-Simons term for D = 6 and perform the dimensional reduction to D = 4. The starting point is the D = 4 Chern-Simons term originally proposed in [1], which we adopt in D = 6 in a form built with a background tensor $T_{\hat\lambda\hat\rho\hat\sigma}$. This background tensor has 20 components, and we may rewrite it in terms of two antisymmetric tensor fields and two gradient vectors, where $\hat\mu = \mu, 4, 5$ and we consider that there is no dependence of the fields on the $x^4$, $x^5$ coordinates. The fields $R_{\rho\sigma}$ and $S_{\rho\sigma}$ have 6 components each, and the other 8 components are redefined as 2 vectors that we write as gradients of the scalar fields v and u. The number of components is thus reduced to 14.
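Neither Chern-Simons expression is reproduced above. As a hedged sketch, the D = 4 term of ref. [1] is usually written in the Carroll-Field-Jackiw form, and a natural guess for the D = 6 expression used here (the overall normalizations are conventions and may differ from the original) is

$$
\mathcal{L}^{(4)}_{\mathrm{CS}} \;=\; \frac{1}{2}\,\epsilon^{\mu\nu\kappa\lambda}\,v_\mu\,A_\nu\,\partial_\kappa A_\lambda ,
\qquad
\mathcal{L}^{(6)}_{\mathrm{CS}} \;\propto\; \epsilon^{\hat\mu\hat\nu\hat\kappa\hat\lambda\hat\rho\hat\sigma}\,
T_{\hat\lambda\hat\rho\hat\sigma}\,A_{\hat\mu}\,\partial_{\hat\nu}A_{\hat\kappa} .
$$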
As shown in the previous section, we also redefine the gauge field $A_{\hat\mu} \equiv (A_\mu; \varphi_1; \varphi_2)$. It is clear that $\epsilon^{\hat\mu\hat\nu\hat\kappa\hat\lambda\hat\rho\hat\sigma} A_{\hat\mu} A_{\hat\kappa}\,\partial_{\hat\nu} T_{\hat\lambda\hat\rho\hat\sigma} = 0$, so we obtain, upon integration by parts, the Lagrangian (3.3). In order to carry out the supersymmetrization of the Lagrangian (3.3) by using a superspace formalism, it is advisable to define some complex field combinations that are found in the superfields we deal with, as listed in (3.4). Notice that we have introduced the new real scalar fields t and w, which are bosonic fields but do not appear in the bosonic Lagrangian (3.3). These fields will be necessary in the supersymmetric version to maintain the balance between the bosonic and fermionic degrees of freedom present in the scalar superfields defined with complex scalar fields. Each of the tensor fields $R_{\mu\nu}$ and $S_{\mu\nu}$ appears as the real part of a complex tensor field whose imaginary part is given in terms of its dual field, as we see in (3.4) and as can be found in [14].
The superfields for the gauge sector have been defined above. Thus, we take superfields which contain the fundamental fields of the background sector plus their supersymmetric partners. These superfields are N = 1 multiplets that form an N = 2 hypermultiplet, $(S, R, \Sigma_a, \Omega_a)$. The scalar superfields that accommodate s, s*, r and r* are, respectively, S, $\bar S$, R and $\bar R$, which satisfy the chiral conditions $\bar D S = D\bar S = \bar D R = D\bar R = 0$.
The spinor superfields that contain $R_{\mu\nu}$, $S_{\mu\nu}$ and their dual fields are $\Sigma_a$ and $\Omega_a$, which are also chiral: $\bar D_{\dot b}\Sigma_a = D_b\bar\Sigma_{\dot a} = \bar D_{\dot b}\Omega_a = D_b\bar\Omega_{\dot a} = 0$. Notice that we have to introduce two extra background complex scalar fields, ρ and φ, to match the bosonic and fermionic degrees of freedom. Now, we are interested in building up the supersymmetric action; for that, it is useful to quote the mass dimensions of the superfields previously given. Based on these dimensionalities, and by analysing the bosonic Lagrangian (3.3), we propose a supersymmetric action, $S_{br}$, which is invariant under the Abelian gauge transformations. In terms of component fields, we have two sectors: the bosonic background sector, $\{s, s^*, R_{\mu\nu}, S_{\mu\nu}, \rho, \rho^*, \phi, \phi^*, r, r^*\}$, and the fermionic background sector, $\{\xi, \bar\xi, \tau, \bar\tau, F, \bar F, \chi, \bar\chi, G, \bar G, \zeta, \bar\zeta\}$.
We therefore observe that the component-field version of this action contains, among its terms, the pieces corresponding to the bosonic action (3.3): this Lagrangian describes the bosonic sector (3.3) and its superpartners. We find here the N = 1 supersymmetrization of the Chern-Simons term presented in [7], where the first term is the same as the one proposed in [1], considering the constant vector as the gradient of a scalar. Since this gradient vector is a constant, we have that $s = \alpha + \beta_\mu x^\mu$. We notice in our Lagrangian the presence of the bosonic real scalar fields $t = s + s^*$ and $u = r + r^*$, and of the complex scalar fields ρ and φ, which do not appear in the bosonic Lagrangian (3.3). These scalar fields appear in the supersymmetric generalization in order to keep the bosonic and fermionic degrees of freedom in equal number. We can see that the bosonic fields D, D*, f, f*, h, h*, g and g* all play the role of auxiliary fields. The bosonic fields s, s*, $R_{\mu\nu}$, $S_{\mu\nu}$, ρ, ρ*, φ, φ*, r, r* and the fermionic fields $\xi, \bar\xi, \tau, \bar\tau, F, \bar F, \chi, \bar\chi, G, \bar G, \zeta, \bar\zeta$ work as background fields breaking the Lorentz invariance.
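A one-line check of the parametrization used above, with the constant Jackiw-type background vector identified with the gradient of s (this identification is implicit in the text):

$$
v_\mu \;\equiv\; \partial_\mu s \;=\; \partial_\mu\!\left(\alpha + \beta_\nu x^\nu\right) \;=\; \beta_\mu \;=\; \text{constant}.
$$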
N = 4-Supersymmetric Extension of the Lorentz-Violating Action
Starting from the Maxwell Lagrangian in 10 dimensions and adopting the fact that the fields have no dependence on the coordinates $x^4, x^5, x^6, x^7, x^8, x^9$, we obtain the D = 4 Lagrangian, which is the bosonic sector of the N = 4-extended supersymmetric action. The supersymmetrization of the theory above is accomplished by defining superfields in the N = 1 superspace as multiplets containing the fields and their superpartners. The superfields that contain these bosonic fields and their superpartners are 6 chiral scalars, $\Phi^I$, and a vector superfield, V, of N = 1 superspace; put together, $(\Phi^I, V)$, they form the gauge multiplet of N = 4 supersymmetry.
The vector superfield V as defined in (2.3) fulfills the reality constraint $V = V^\dagger$. The scalar superfields $\Phi^I$ obey the chirality condition $\bar D\Phi^I = D\bar\Phi^I = 0$.
The N = 4-supersymmetric extension of the gauge Lagrangian (4.2) is written in terms of the Abelian field-strength superfield defined in (2.7).
It is clear that the Lagrangian (4.5) is invariant under N = 1 supersymmetry transformations and that it also has N = 4 invariance. The background tensor T has 120 components, but we can redefine it as $T \equiv (R^I_{\rho\sigma};\, \partial_\mu v;\, \partial_\mu u_{IJ})$, where $\hat\mu = \mu, 4, 5, 6, 7, 8, 9$ is the space-time index and I, J = 1, 2, 3, 4, 5, 6 is an internal index. We consider that there is no dependence of the fields on the $x^4, \dots, x^9$ coordinates. Then, we have 6 anti-symmetric tensor fields $R^I_{\rho\sigma}$, with 6 components each, and 15 vectors written as gradients of 15 scalars represented by the anti-symmetric index pair I, J. The number of components is thus reduced to 52.
Next, we need to redefine the gauge field as $A_{\hat\mu} \equiv (A_\mu; \varphi^I)$, with I = 1, 2, 3, 4, 5, 6, where the $\varphi^I$ are real scalar fields. Observing that $\epsilon^{\hat\mu\hat\nu\hat\kappa\hat\lambda\hat\rho\hat\sigma\hat\delta\hat\tau\hat\beta\hat\gamma} A_{\hat\mu} A_{\hat\kappa}\,\partial_{\hat\nu} T_{\hat\lambda\hat\rho\hat\sigma\hat\delta\hat\tau\hat\beta\hat\gamma} = 0$, we obtain, integrating by parts, the Lagrangian (4.7). As in the case of N = 2, we have to define some complex field combinations that can be found in the superfields. Notice that, in doing so, we introduce the new real scalar fields $\beta^I$, $t^{IJ}$ and w, which are not present in the bosonic Lagrangian (4.7). As already pointed out, this has to be done in the supersymmetric version to maintain the matching between the bosonic and fermionic degrees of freedom of the scalar superfields defined in terms of complex scalar fields. Each tensor field $R^I_{\mu\nu}$ appears as the real part of a complex tensor field whose imaginary part is given in terms of its dual field.
We now take the superfields which contain the fundamental fields of the background sector and accommodate their supersymmetric partners. These superfields are N = 1 multiplets that combine to form an N = 4 hypermultiplet, $(S, R^{IJ}, \Sigma^I_a)$. The scalar superfields that accommodate s, s*, $r^*_{IJ}$ and $r_{IJ}$ are, respectively, S and $\bar S$ (as in eqs. (3.5) and (3.6)) and $R^{IJ}$ and $\bar R^{IJ}$, which satisfy the chiral conditions $\bar D S = D\bar S = \bar D R^{IJ} = D\bar R^{IJ} = 0$.
The spinor superfields that contain the $R^I_{\mu\nu}$ and their respective dual fields are the $\Sigma^I_a$, which are also chiral: $\bar D_{\dot b}\Sigma^I_a = D_b\bar\Sigma^I_{\dot a} = 0$. Notice that, in the spinor superfields, we have to introduce six extra background complex scalar fields, $\rho^I$, to match the bosonic and fermionic degrees of freedom.
We can observe that the action (4.13) is invariant under N = 1 supersymmetry and that it exhibits the larger N = 4 supersymmetry as well.
The component-field version of this N = 4 Lagrangian, eq. (4.16), contains the terms that reproduce the bosonic sector (4.7); we can thus ascertain that it properly accommodates the N = 4 bosonic sector (4.7). We re-obtain here the N = 1 and N = 2 supersymmetrisations of the Chern-Simons term presented in ref. [7] and in (3.16), respectively. We notice that the N = 4 Lagrangian is similar to the N = 2 one, but now with an internal index on some of the fields. The fields $\beta^I$, t, $u_{IJ}$ and $\rho_{IJ}$, which do not appear in the bosonic Lagrangian (4.7), were introduced in order to keep the bosonic and fermionic degrees of freedom in equal number. We can see that the bosonic fields D, D*, $f^I$, $f^*_I$, h, h*, $g_{IJ}$ and $g^*_{IJ}$ work as auxiliary fields. The bosonic fields s, s*, $R^I_{\mu\nu}$, $\rho^I$, $\rho^*_I$, $r_{IJ}$, $r^*_{IJ}$ and the fermionic fields $\xi, \bar\xi, \tau^I, \bar\tau^I, F^I, \bar F^I, \zeta^{IJ}, \bar\zeta^{IJ}$ work as background fields breaking the Lorentz invariance.
Concluding Remarks and Comments
In the important context of studying the gauge-invariant Lorentz-violating term formulated as a Chern-Simons action term, we propose here its N = 2 and N = 4 supersymmetric versions. This programme could be carried out in a simple way with the help of a dimensional reduction method; here, we have chosen the method à la Scherk, but it would also be interesting to contemplate other possibilities, such as the procedures à la Legendre or à la Kaluza-Klein. With our reduction scheme, we could treat the extended supersymmetric versions in terms of simple N = 1 superspace so as to supersymmetrize the Chern-Simons-like term, as proposed by Jackiw, written in terms of a constant background vector, here parametrized as the gradient of the scalar function $\alpha + \beta_\mu x^\mu$, where α and $\beta_\mu$ are constants.
Another interesting point we should consider is the possibility, now that we have the full set of SUSY partners of the Lorentz-breaking vector, of expressing the central charges of the extended models whenever topologically non-trivial configurations are taken into account. This would allow us to impose bounds on the central charges in terms of the phenomenological constraints imposed on the vector responsible for the Lorentz covariance breakdown. | 2014-10-01T00:00:00.000Z | 2004-10-29T00:00:00.000 | {
"year": 2004,
"sha1": "d8f2bd8b1c96bafc0131fef312248e7942ab5fb4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1b95a28cfdc61fe80b25c4bfe1f5aecb975c6e9d",
"s2fieldsofstudy": [
"Physics",
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
253223796 | pes2o/s2orc | v3-fos-license | Dysregulation of the PRUNE2/PCA3 genetic axis in human prostate cancer: from experimental discovery to validation in two independent patient cohorts
Background: We have previously shown that the long non-coding (lnc)RNA prostate cancer associated 3 (PCA3; formerly prostate cancer antigen 3) functions as a trans-dominant negative oncogene by targeting the previously unrecognized prostate cancer suppressor gene PRUNE2 (a homolog of the Drosophila prune gene), thereby forming a functional unit within a unique allelic locus in human cells. Here, we investigated the PCA3/PRUNE2 regulatory axis from early (tumorigenic) to late (biochemical recurrence) genetic events during human prostate cancer progression. Methods: The reciprocal PCA3 and PRUNE2 gene expression relationship in paired prostate cancer and adjacent normal prostate was analyzed in two independent retrospective cohorts of clinically annotated cases post-radical prostatectomy: a single-institutional discovery cohort (n=107) and a multi-institutional validation cohort (n=497). We compared the tumor gene expression of PCA3 and PRUNE2 to their corresponding expression in the normal prostate. We also serially examined clinical/pathological variables including time to disease recurrence. Results: We consistently observed increased expression of PCA3 and decreased expression of PRUNE2 in prostate cancer compared with the adjacent normal prostate across all tumor grades and stages. However, there was no association between the relative gene expression levels of PCA3 or PRUNE2 and time to disease recurrence, independent of tumor grades and stages. Conclusions: We concluded that upregulation of the lncRNA PCA3 and targeted downregulation of the protein-coding PRUNE2 gene in prostate cancer could be early (rather than late) molecular events in the progression of human prostate tumorigenesis but are not associated with biochemical recurrence. Further studies of PCA3/PRUNE2 dysregulation are warranted. Funding: We received support from the Human Tissue Repository and Tissue Analysis Shared Resource from the Department of Pathology of the University of New Mexico School of Medicine and a pilot award from the University of New Mexico Comprehensive Cancer Center. RP and WA were supported by awards from the Levy-Longenbaugh Donor-Advised Fund and the Prostate Cancer Foundation. EDN reports research fellowship support from the Brazilian National Council for Scientific and Technological Development (CNPq), Brazil, and the Associação Beneficente Alzira Denise Hertzog Silva (ABADHS), Brazil. This work has been funded in part by the NCI Cancer Center Support Grants (CCSG; P30) to the University of New Mexico Comprehensive Cancer Center (CA118100) and the Rutgers Cancer Institute of New Jersey (CA072720).
Introduction
Prostate cancer is the most common cancer and the second most common cause of cancer death in men (Siegel et al., 2021), and there continues to be a pressing need for new diagnostic and therapeutic approaches for this disease, as well as better prognostic biomarkers to guide treatment. Long non-coding RNA (lncRNA) species are increasingly recognized as having regulatory functions in tumorigenesis, and nucleic acid-based therapeutics are being developed as a promising means of targeting pathogenic lncRNAs (Arun et al., 2018). Several lncRNAs have recently been found to associate with prostate cancer, and the best known of these, prostate cancer associated 3 (PCA3; formerly prostate cancer antigen 3) has been used clinically for many years as the most specific diagnostic biomarker for prostate cancer (Bussemakers et al., 1999;de Kok et al., 2002); however, its prognostic significance remains uncertain. Strikingly, PCA3 emerged first only in mammals, with further evolution in primates (Clarke et al., 2009), and, given aspects of the sequence and genomic organization, we have hypothesized that it might have been introduced into the genome by an ancient oncogenic virus (Teixeira et al., 2017). In humans, PCA3 has an unusual genomic organization, being present in an antisense direction within an intron of the protein-coding gene PRUNE2. Somewhat surprisingly for a molecule that is well established as a Food and Drug Administration (FDA)-and European Medical Agency (EMA)-approved biomarker, relatively little was known about the biological function of PCA3 until recently. Ferreira et al., 2012, showed that PCA3 is androgen-regulated and that it promotes prostate cancer cell survival. Subsequently, we have established that PCA3 downregulates the expression of PRUNE2 in a rather unusual way: at the RNA level by RNA editing mediated via adenosine deaminase RNA-specific family members (Salameh et al., 2015). We have shown that expressing ectopic PCA3 or, alternatively, silencing PRUNE2 induced cell transformation and cell proliferation in vitro, increased adhesion and migration of prostate cancer cells, and yielded larger tumors in xenograft tumor models. The opposite biological effects were seen with PCA3 silencing or ectopic PRUNE2 expression (Salameh et al., 2015). Preliminary studies of human prostate cancer samples compared to normal prostate showed increased PCA3 expression, decreased PRUNE2 expression, and evidence for RNA editing of these genes. Based on these experimental findings, we proposed that there is a functional molecular axis in human prostate cancer in which PCA3 acts as a transdominant-negative oncogene to downregulate a previously unrecognized tumor suppressor gene, PRUNE2 (Salameh et al., 2015).
Here, we propose that this molecular interplay may serve as a translational target for diagnostic and/or therapeutic intervention in human prostate cancer. First, we present additional correlative evidence from two retrospective post-surgical primary prostate cancer cohorts in support of our experimental model of PCA3 as a dominant-negative oncogene and PRUNE2 as a tumor suppressor gene and for their co-regulation in human prostate cancer. Moreover, we examine the dysregulation of the PCA3/PRUNE2 regulatory axis across tumors of different grades (patterns), stages, and groups (Gordetsky and Epstein, 2016;van Leenders et al., 2020). Finally, we assess whether tumor expression levels of PCA3 and/or PRUNE2 are prognostic of biochemical disease recurrence after surgery.
Discovery patient cohort
Based on a power analysis using gene expression data from our prior work (Salameh et al., 2015), for the UNMCCC single-institutional discovery cohort, we searched the archives of the Department of Pathology at the UNM School of Medicine for at least 100 consecutive patients (final cohort size: n=107) who had a radical prostatectomy as the primary treatment for organ-confined prostate cancer between the years 2001 and 2013 and who had the following clinical and pathological attributes: final post-prostatectomy Gleason Score 7 (either Gleason Grade Group 2 (3+4) or Gleason Grade Group 3 (4+3)), pathological stage pT2 or pT3a, negative surgical margins, negative for seminal vesicle invasion, no evidence of local or distant metastasis, and no prior treatment for prostate cancer. The following additional data were retrospectively abstracted from the individual medical records: age at surgery, race, presence of recurrence, type of recurrence (i.e., biochemical, local, metastatic), and disease-free survival time. Biochemical disease recurrence was defined as a detectable serum prostate-specific antigen concentration of at least 0.2 ng/ml post-operatively. Lost to follow up was defined as not having been followed up at the UNMCCC after their urological surgery. All included cases had an independent pathological re-review by a Board-certified pathologist with expertise in urological pathology (MB), with confirmation of diagnosis, Gleason-based analysis (grading, scoring, and grouping), standard TNM staging, and margin status post-resection. A small number of identified cases (<5%) had to be excluded due to the very limited amount of tumor present.
Microdissection of tumor and normal prostate (nonneoplastic prostatic glandular tissue) for the discovery cohort
To obtain tumor for RNA analysis, a representative carcinoma-containing formalin-fixed paraffin embedded (FFPE) block was chosen from each case. Contiguous foci of tumor were marked on the glass slide such that the density of tumor cells was at least 75%. The boundary of the corresponding areas on the tumor block was scored with a blade tip, effectively allowing microdissection of tumor in the process of microtome sectioning. Multiple 10 µm sections were cut, depending on the area of the tumor focus/foci. In 24 (22.4%) of the cases, we also microdissected areas of nonneoplastic prostatic glandular tissue away from tumor in a similar manner, again also aiming for at least 75% epithelial density.
Measurement of PRUNE2 and PCA3 gene expression in the discovery cohort by quantitative RT-PCR
Briefly, gene expression of PCA3 and PRUNE2 was determined by quantitative reverse transcription polymerase chain reaction (qRT-PCR) using TaqMan gene expression assays (Thermo Fisher Scientific) with amplicon detection via a LightCycler 96 (Roche Diagnostics). Gene expression was quantified by the relative logarithmic RT-PCR threshold cycles (∆Ct) between the target genes and housekeeping control genes (Livak and Schmittgen, 2001). Specifically, total RNA was extracted from the microdissected FFPE sections using the PureLink FFPE Total RNA Isolation Kit (Thermo Fisher Scientific, Cat. No. K1560-02). RNA was quantified on a NanoDrop ND-1000 Spectrophotometer (Thermo Fisher Scientific), and the average A260/A280 ratio was 1.94 (range 1.88-2.07), indicating optimal quality of the RNA extracted for gene expression assays. RNA was then further quantified with the Qubit RNA HS Assay Kit (Thermo Fisher Scientific, Cat. No. Q32852) on a Qubit 2.0 (Thermo Fisher Scientific) for accurate RNA concentration. RNA integrity was evaluated with the Agilent RNA 6000 Nano kit (Agilent Technologies, Cat. No. 5067-1511) on an Agilent 2100 Bioanalyzer (Agilent Technologies). To remove genomic DNA contamination, RNA samples were treated with 2 U of DNase I (Thermo Fisher Scientific, Cat. No. 18068-015) per 2 µg of total RNA. All procedures were performed according to the manufacturer's standard protocols.
Reverse transcription was performed in triplicate in order to create enough cDNA for the entire project. Five-hundred ng RNA in each of three tubes was reverse transcribed with the High-Capacity RNA-to-cDNA Kit (Thermo Fisher Scientific,Cat. No. 4387406) in a final volume of 20 µl, according to the manufacturer's instructions. Reverse transcription was carried out in a Gene Amp PCR System 9700 (Applied Biosystems) at 37°C for 60 min and terminated by 95°C for 5 min. Then, three aliquots were combined for the following experiments.
For the Thermo Fisher Scientific TaqMan gene expression assay experiments, three assays (Hs00322421_m1, Hs00999960_m1, and Hs01060890_m1) and two assays (Hs01371939_g1 and Hs03462121_m1) were chosen for the target genes PRUNE2 and PCA3, respectively (designated PR1, PR2, and PR3, and PC1 and PC2). Three endogenous controls, GAPDH (Hs02758991_g1), HPRT1 (Hs02800695_m1), and UBC (Hs01871556_s1), were selected (designated C1, C2, and C3) (Vandesompele et al., 2002). Each PRUNE2 assay and PCA3 assay was labeled with FAM and paired with a VIC-labeled endogenous control in a duplex reaction, with separate reactions to include all three of the endogenous controls. Therefore, a total of fifteen duplex gene expression mixes, nine for PRUNE2 and six for PCA3, was required for all specimens. Each duplex gene expression assay was then performed in triplicate for all specimens following the manufacturer's standard protocols, for a total of 45 expression measures for each case. qRT-PCR was performed with the TaqMan Gene Expression Master Mix (Thermo Fisher Scientific, Cat. No. 4369514) using 1 µl of each TaqMan target gene assay (20× FAM) and endogenous control assay (20× VIC), 1 µl of cDNA template (equivalent to 25 ng RNA input), and 7 µl of RNase-free water for a 20 μl final reaction mixture. A non-template control was included in every master mix in every 96-format tray. In addition, in order to evaluate inter-plate variation, we also included one RNA sample, in triplicate, in all of the 96-format trays. Analysis of these controls indicated that there were no significant batch effects (data not shown). qRT-PCR product detection was achieved on a LightCycler 96 (Roche Diagnostics). The cycling program was 95°C for 10 min, followed by 40 cycles of 95°C for 15 s and 60°C for 1 min. Quantification of target and control genes (Cq) in each sample was performed with LightCycler 96 SW 1.1 (Roche Diagnostics).
Validation patient cohort
For The Cancer Genome Atlas (TCGA) patient validation cohort (n=497 patients), we first downloaded clinical data along with the expression of the lncRNA PCA3 and the PRUNE2 gene (http:// cancergenome.nih.gov) with the UCSC Xena browser (Cancer Genome Atlas Research Network, 2015;University of North Carolina TCGA Genome Characterization Center, 2017), together with paired nonneoplastic samples in 52 of the cases (10.5%). The following clinical and pathological characteristics were included in the study: age at diagnosis, vital status, tumor Gleason-based analysis (grading, scoring, grouping), pathological stage, status of biochemical recurrence, and time to recurrence. Gene expression was calculated with log 2 RNA-Seq by Expectation-Maximization (RSEM) (Li and Dewey, 2011;Goldman et al., 2020). By using the available dataset, we evaluated PCA3 and PRUNE2 gene expression values in terms of tumor versus nonneoplastic prostate, biochemical recurrence, pathological T stage, Gleason analysis (grade, score, and group), and age at pathology-proven diagnosis. Because the regulation of PRUNE2 by PCA3 occurs at the RNA level by the formation of an RNA hetero-duplex, we also evaluated the ratio of the expression of the two genes in terms of the clinical and pathological variables for each patient of the cohort.
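A minimal sketch of how the ratio can be handled on this scale, assuming a data frame named tcga with per-sample columns PCA3 and PRUNE2 already holding the log2(RSEM) values downloaded from Xena (object and column names here are illustrative, not taken from the original analysis code):

```r
# On a log2 scale, the PRUNE2:PCA3 expression "ratio" is simply the difference of the two values
# (a positive value indicates PRUNE2 expressed above PCA3 in that sample)
tcga$log2_ratio_PRUNE2_PCA3 <- tcga$PRUNE2 - tcga$PCA3

# Example: compare the ratio between two clinical groups (illustrative grouping column)
wilcox.test(log2_ratio_PRUNE2_PCA3 ~ recurrence_status, data = tcga)
```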
Statistics
Demographic and clinical variables were summarized with descriptive statistics. For the discovery cohort, the mean and median of gene expressions across multiple control genes and assays were summarized, and these were used as measures for gene expression of PRUNE2 and PCA3 relative to endogenous housekeeping controls for each case. More detailed methods are described in Appendix 1.
Testing for differences of PCA3 and PRUNE2 expression between paired tumor and nonneoplastic prostate expression was by the Wilcoxon signed rank test. The Kruskal-Wallis test was used when comparing three or more groups. Assessment for significant differences of gene expression by recurrence status was by Wilcoxon rank sum test. The Kaplan-Meier product limit method with log-rank test was used to explore the relationship between gene expression levels or the ratio and the time to recurrence. Multivariable Cox proportional hazard modeling was used to fit for the association between time to recurrence and expression levels of PRUNE2 or PCA3 or their ratio, while controlling for multiple clinical covariates. All statistical analyses were carried out by using the SAS (9.4) or R software package (R 3.4.5), unless otherwise indicated (R and SAS codes are available in the Source code 1). The online version of this article includes the following source data for table 1: Source data 1. Discovery cohort.
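A condensed sketch of these analyses in R, assuming a per-patient data frame named d with illustrative columns expr_tumor, expr_normal, recurred (0/1), time_to_recurrence, gleason_group, stage, and age (the actual analysis code is provided in Source code 1; these names and this layout are assumptions for illustration only):

```r
library(survival)

# Paired tumour vs. normal comparison (Wilcoxon signed rank test)
wilcox.test(d$expr_tumor, d$expr_normal, paired = TRUE)

# Expression by recurrence status (Wilcoxon rank sum) and across >= 3 groups (Kruskal-Wallis)
wilcox.test(expr_tumor ~ recurred, data = d)
kruskal.test(expr_tumor ~ gleason_group, data = d)

# Kaplan-Meier curves for high vs. low expression (split at the mean) with log-rank test
d$expr_group <- ifelse(d$expr_tumor > mean(d$expr_tumor), "high", "low")
fit <- survfit(Surv(time_to_recurrence, recurred) ~ expr_group, data = d)
survdiff(Surv(time_to_recurrence, recurred) ~ expr_group, data = d)

# Multivariable Cox proportional hazards model adjusting for clinical covariates
coxph(Surv(time_to_recurrence, recurred) ~ expr_tumor + gleason_group + stage + age, data = d)
```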
Study approval
For the discovery cohort, there was University of New Mexico Health Sciences Institutional Review Board (IRB) approval (HRRC15-138), and the study was carried out in accordance with the United States Common Rule.
Discovery single-institutional cohort
In the initial single-institutional discovery cohort from the University of New Mexico Comprehensive Cancer Center (UNMCCC), patients with intermediate-risk (Gleason Score 7; corresponding to Gleason Groups 2 and 3) organ-confined prostate cancer (n=107) met the criteria for inclusion in this study (Table 1). Briefly, the mean age of the cohort was 63 years (ranging from 45 to 84 years); most patients (85%) were non-Hispanic white, but Hispanic (7.5%), American Indian/Native American (2.8%), and African American (2.8%) men were also represented. All patients had final Gleason Score 7 adenocarcinoma after radical prostatectomy, with 86.9% being 3+4 = 7 (Gleason Grade Group 2) and 13.1% being 4+3 = 7 (Gleason Grade Group 3). The pathological stage distribution was as follows: 74.8% were pT2 and 25.2% were pT3a. Nineteen of the patients (17.8%) had biochemical recurrence discovered during follow-up, including one with documented local recurrence and one with documented metastases. Five patients (4.7%) were lost to follow up. RNA extraction and qRT-PCR were successful in all microdissected tumor samples (n=107). In 24 of these cases (22.4%), we extracted RNA from benign prostatic glandular tissue away from tumor (hereafter termed 'normal prostate'; qRT-PCR was successful in all of these cases for PRUNE2 [n=24, 100%] and in most cases for PCA3 [n=21, 87.5%]). Comparing PRUNE2 and PCA3 expression in prostatic adenocarcinoma with expression in normal prostate (all relative to endogenous housekeeping controls), we found consistent trends for both genes in multiple assays, with lower expression of PRUNE2 in tumor as compared with normal prostate and higher expression of PCA3 in tumor as compared with normal prostate (Figure 1-source data 1). These results are summarized in Figure 1A and as follows. Relative to controls, PCA3 expression was significantly higher in prostatic adenocarcinoma. We next explored the association between biochemical recurrence and tumor expression levels of PRUNE2, PCA3, and the ratio of PRUNE2 to PCA3 expression by using several approaches. First, we compared the gene expression values and their ratio by recurrence status. In patients who recurred compared to those who did not, we found no significant difference in mean expression values of PRUNE2 (−1.6 versus −1.58; p-value = 0.68), PCA3 (2.98 versus 2.43; p-value = 0.16), or their ratio (−1.61 versus −1.21; p-value = 0.48). The different expression levels by recurrence were not significant (Figure 1-figure supplement 1). Next, for PRUNE2 expression, PCA3 expression, and their ratio, we regrouped the cancer cases according to whether the values were greater than (deemed 'high') or less than/equal to (deemed 'low') their respective mean values. By using the Kaplan-Meier product limit methodology and the log-rank test, we found no significant associations between high or low levels and time to recurrence for PRUNE2 expression (p-value = 0.24), PCA3 expression (p-value = 0.22) (Figure 2 and Tables 2-3), or their ratio (p-value = 0.84). As a further assessment of association between gene expression and time to biochemical recurrence, we used Cox proportional hazards modeling and found no significant associations of time to biochemical recurrence with expression of PRUNE2 (
The online version of this article includes the following source data and figure supplement(s) for figure 1:
Source data 1. Analyses of discovery prostate cancer cohort.
Source data 2. Analyses of discovery prostate cancer cohort.
Source data 3. Analyses of discovery prostate cancer cohort.
As shown for the discovery cohort, we also evaluated the relationship between PCA3 and PRUNE2 expression levels and recurrence status. We found that patients who had biochemical recurrence after prostatectomy had significantly lower tumor expression levels of PCA3 (median, 11.58; IQR, 8.28-13.14) than those who did not recur (12.51; 10.64-13.71 [p-value <0.01]; Figure 3D). However, we did not see an association between tumor PCA3 expression and biochemical recurrence on multivariable Cox proportional hazards modeling when adjusting for tumor grade, stage, and age at diagnosis (HR, 0.96; 95% CI, 0.87-1.04 [p-value = 0.36]), as presented in Appendix 1 and Appendix 1-table 2. We also did not see a significant difference in PRUNE2 expression between patients who had biochemical recurrence and those who did not recur (Figure 3-figure supplement 1).
Discussion
Here, we assessed the tumor and control adjacent normal prostatic glandular tissue expression of the lncRNA PCA3 and the protein-coding PRUNE2 gene in two independent retrospective cohorts of patients with primary organ-confined prostate cancer after treatment by radical prostatectomy (Figure 4). As compared with normal prostate, we found that prostate cancer showed consistent increased expression of PCA3 and consistent decreased expression of PRUNE2 in tumors across a broad range of pathological attributes (i.e., Gleason grades, scores, groups, and stages) in both patient cohorts. Although the magnitude of the change of expression between normal and tumor appears greater for PCA3 than for PRUNE2 in both cohorts ( Figure 1A and Figure 3A), we attribute this to the reciprocal nature of the comparison, in conjunction with the very low level of normal prostatic PCA3 expression as compared with the higher expression of PRUNE2 in normal prostate. Overall, the findings support the mechanistic role of a tumor-specific molecular axis in which PCA3 acts as dominant-negative oncogene and PRUNE2 as a tumor suppressor gene in human prostate cancer and indicate that the interplay between these genes is dysregulated early in prostate cancer.
Specifically, when we compared PCA3 expression in the validation cohort from TCGA, although average expression in all grades, stages, and groups was higher than in normal prostate, we found that among tumors there was significantly decreased PCA3 expression in tumors with higher grades (Gleason Score >7) and in higher stages (>pT2), as compared with lower grades, stages, or groups, respectively. These paradoxical findings are consistent with several early studies (Salagierski et al., 2010;Balcerczak et al., 2003) and in particular with a recent tissue-based study of PCA3 expression in prostate cancer (Alshalalfa et al., 2017).
In that large cohort study, lower levels of tumor PCA3 in both biopsy and radical prostatectomy specimens were associated with high-grade tumors, and in radical prostatectomy specimens decreased PCA3 expression was associated with features of higher stages. Based on these results, it has been proposed that PCA3 might actually represent a differentiation marker in human prostate cancer (Alshalalfa et al., 2017). The finding of decreasing PCA3 expression with increasing tumor grades and stages in both our study and others is broadly consistent with another previous study (Reis et al., 2004), which found that the class of antisense intronic RNAs was markedly over-represented among the top transcripts associated with tumor differentiation in human prostate cancer. The finding of an inverse association between PCA3 expression and increasing grades and stages may also relate to links between PCA3 expression and androgen receptor (AR) signaling, and to the likelihood that PCA3 has an important role in the early steps of prostate cancer carcinogenesis, with a reduced role when the disease is more advanced. Indeed, previous work by our own group and by others indicates that PCA3 is upregulated by AR signaling (Teixeira et al., 2017; Ferreira et al., 2012; Salameh et al., 2015), and that PCA3 is also involved in modulating AR signaling (Ferreira et al., 2012; Lemos et al., 2016). Interestingly, it has also been shown in vitro that PCA3 silencing sensitizes prostate cancer cells to enzalutamide-induced growth inhibition (Özgür et al., 2017). Alshalalfa et al., 2017, suggest that, because low pretreatment serum testosterone levels are associated with disease of higher grades and stages, and because of the relationship between AR signaling and PCA3 expression, lower PCA3 expression may reflect the lower serum testosterone in these patients. However, we do not have any data on the pretreatment serum concentration of testosterone and other androgens, and we are not able to test that hypothesis in this study.
Because prostate cancers, especially Gleason Score 7 (Grade Groups 2 and 3) tumors, are quite frequent (about half of the total cases) and show divergent clinical behavior, there is great interest in developing prognostic biomarkers for risk stratification. Studies on the association of PCA3 expression levels with outcome and prognosis show conflicting results (Loeb and Partin, 2011), and unlike this present study, most prior reports are based on urinary PCA3 expression (Loeb et al., 2015;Lemos et al., 2019;Fenstermaker et al., 2017). Our exploration of the validation cohort from TCGA, which comprised a wide spectrum of tumor grades and stages, revealed an association between lower levels of tumor PCA3 expression and biochemical recurrence; however, this association was not found after taking grade and stage into account. This finding makes sense, as increasing grade and stage are both variables that are associated with lower PCA3 expression. In their tissue-based cohort, Alshalalfa et al., 2017, also found an association between low PCA3 levels and adverse outcomes, including biochemical recurrence, metastasis, and prostate cancer-specific mortality; however, it is not clear whether such findings are independent of clinical and pathological variables (such as Gleason grade, stage, and group), as a multivariable analysis was not reported. Nevertheless, the demonstration of an (unadjusted) association between PCA3 levels and outcome may have potential relevance in the liquid biopsy setting. For the discovery cohort of patients, we selected organ-confined, intermediate-risk tumors (Gleason Grade Groups 2 and 3, with tumor stages pT2 and pT3) where prognostic information might be expected to be most helpful clinically, to test for an association with outcome. We did not see any association between tumor PCA3 expression and biochemical recurrence in this particular grade and stage setting.
PRUNE2, a human homolog of the Drosophila prune gene, encodes for a protein with BCH, DHHA2, and PPX1 functional domains (Ferreira et al., 2012). The BCH domain can inhibit the Rho family of proteins, small GTPases with roles in cell transformation, migration and metastasis, and cell cycle progression (Clarke et al., 2009;Iwama et al., 2011). Evidence is accumulating that PRUNE2 might act as a tumor suppressor gene. Loss-of-function mutations have been described in several tumor types, including germline and somatic mutations in parathyroid cancer (Yu et al., 2015) and somatic mutations in solid papillary carcinoma (Alsadoun et al., 2018), while high expression of PRUNE2 protein correlates with favorable prognosis in neuroblastoma (Machida et al., 2006). Others have shown evidence of inactivating PRUNE2 mutations in Merkel cell carcinoma (Harms et al., 2015) and that the restoration of downregulated PRUNE2 in oral cancer suppresses tumor cell migration (Su et al., 2021), further supporting the role of PRUNE2 as a tumor suppressor. In prostate cancer, the evidence is limited and controversial: an early report found that PRUNE2 expression was upregulated in prostate cancer and metastases in a small number of samples, and was androgen-inducible in prostate cancer cells (Clarke et al., 2009). However, a subsequent study on a larger number of samples found that PRUNE2 expression either decreased or did not increase in aggressive prostate cancer, and that PRUNE2 expression was not androgen-inducible (Salagierski et al., 2010). While this work was under external peer-review, Cardoso et al. have shown that PRUNE2 is a prostate cancer predisposition gene, which is consistent with our results and interpretations (Cardoso et al., 2022).
Altogether, the findings in the current study provide additional support for our previous findings (Salameh et al., 2015) that PRUNE2 acts as a functional tumor suppressor gene in human prostate cancer. Here, we described consistently lower expression of PRUNE2 in prostate cancers of all grades and stages as compared to normal prostate. The findings in our present study are also consistent with the negative regulation of PRUNE2 by PCA3 in prostate cancer. We found no significant differences in PRUNE2 expression across tumor stage, and only a small decrease in expression with increasing tumor grade, suggesting that loss of PRUNE2 tumor suppressor activity is an early molecular event in prostate cancer. We are not aware of any prior reports of the prognostic significance of tumor PRUNE2 expression in prostate cancer but, at least in this retrospective study of two independent prostate cancer patient cohorts, we did not find any association between PRUNE2 expression and biochemical outcomes.
Strengths of this study include that broadly consistent findings were described in the two independent, well-characterized, clinically annotated primary prostate cancer cohorts used for analysis, and that the findings were robust across multiple assays in the discovery patient cohort and between the different methods of measurement of gene expression used in the two cohorts. The assessment of PCA3 expression directly and specifically in tissue (as opposed to urine) is a novelty and a strength, as our primary goal was the study of the PRUNE2/PCA3 regulatory axis in human prostate cancer. We reasoned that the study of tissue expression is likely more informative of tumor biology than traditional urinalysis, not least of all because urinary expression, though very well characterized, could be subject to potential confounding issues such as RNA stability in urine or the contribution of differential urinary shedding. However, from the standpoint of assessment of prognostic information, a drawback of analyzing tissue PCA3 expression is that the results are not directly comparable to the multiple previous studies that measured urinary PCA3 scores and ultimately led to FDA and EMA approval for clinical applications in the US and EU. Moreover, while we did find results consistent with a large tissue cohort study relating PCA3 expression and biochemical recurrence (Alshalalfa et al., 2017), the analysis presented here was limited in its ability to unequivocally determine the prognostic value of PCA3 and PRUNE2 expression, as the overall proportion of patients with biochemical recurrences was relatively low. Finally, we were not able to fully address the relationship of the reciprocal gene expression of PCA3 and PRUNE2 to the outcomes of metastases and prostate cancer-specific deaths, again due to the relative paucity of these events.
In conclusion, we found consistent upregulation of PCA3 and downregulation of PRUNE2 in prostate cancer as compared with normal prostate in two retrospective and independent patient cohorts (summarized in Figure 4, Figure 4-figure supplement 1), supporting that PCA3 and PRUNE2 function as an oncogene and a tumor suppressor gene, respectively, in human prostate cancer. The inverse correlation of PCA3 and PRUNE2 expression is consistent with our prior findings of a functional interplay between the two genes as part of a unique regulatory unit functioning at a single genetic locus in prostate cancer cells with PCA3 negatively downregulating PRUNE2 expression (Salameh et al., 2015). The mechanistic dysregulation of PCA3 and PRUNE2 is observed across the spectrum of tumor grades and stages, suggesting that this is an early and stable molecular event in prostate cancer. On the other hand, we have not detected any regulatory effects of PRUNE2/PCA3 in late genetic events such as prostate cancer progressing to biochemical recurrence, which includes the development of local tumor recurrence and/or the development of metastatic disease. The findings presented here represent additional evidence for the functional reciprocal co-regulation of PCA3 and PRUNE2 in the setting of early tumorigenesis but not in late events in human prostate cancer. Taken together along with the well-documented specificity of PCA3 overexpression, our findings establish the PCA3/PRUNE2 regulatory axis as an attractive early molecular target candidate for intervention in the therapy of human prostate cancer.
Additional information
Competing interests Diana N Nunes: The University of New Mexico filed patent applications on PRUNE2-related technology, for which Diana Nunes was an inventor (inventors: DNN, EDN, RP, and WA). Those applications were briefly optioned by MBrace Therapeutics, but the applications have since been abandoned and the agreements terminated. No payments were made to Diana Nunes, and the author has no other competing interests to declare. Emmanuel Dias-Neto: The University of New Mexico filed patent applications on PRUNE2-related technology, for which Emmanuel Dias-Neto was an inventor (inventors: DNN, EDN, RP, and WA). Those applications were briefly optioned by MBrace Therapeutics, but the applications have since been abandoned and the agreement terminated. No payments were made to Emmanuel Dias-Neto, and the author has no other competing interests to declare. Isan Chen: serves as the Chief Executive Officer of MBrace Therapeutics. Mbrace did not provide financial support for the present work. Webster K Cavenee: is a founder and shareholder of Interleukin Combinatorial Therapies, Inc, InVaMet, Inc, and io9, LLC; none of these companies provided funds or participated in the present work. These arrangements are managed in accordance with the established institutional conflict of interest policies for the respective institution. The author received support for attending the Aspen Cancer Conference, and participated in a Leadership or fiduciary role. The author holds a Leadership or fiduciary role at Genetron Health for which they receive board fees, and are on the Board of Directors for the GBM AGILE Clinical Trial. The author has no other competing interests to declare. Renata Pasqualini, Wadih Arap: Reviewing editor, eLife. The other authors declare that no competing interests exist.
Ethics
Human subjects: For the discovery cohort, there was University of New Mexico Health Sciences Institutional Review Board (IRB) approval (HRRC15-138), and the study was carried out in accordance with the United States Common Rule. As the discovery cohort involved secondary use of archival biospecimens, the IRB waived the requirement for informed consent.
• Source code 1. R code and SAS code of descriptive statistics.
Data availability
For the discovery cohort, all data generated or analyzed are included in the manuscript and source data files, except for patient-level ethnicity data. Patient-level ethnicity data are not included due to the potential for identifiability. However, detailed summary ethnicity data are presented in the manuscript and in Table 1. Requests to access the patient-level ethnicity data should be directed to the corresponding author with a project proposal. Source codes are also available in the supplemental source code file. For the Validation Cohort, clinicopathological patient characteristics and gene level transcription data from The Cancer Genome Atlas (TCGA) were accessed from the UCSC Xena Resource.
The following previously published dataset was used:
Statistical analysis for quantifying the expression of PCA3 and PRUNE2
There were several combinations of assays and control genes used for quantifying the expression of PCA3 and PRUNE2 in this study. Explicitly, there were nine duplex mixes for PRUNE2: PR1C1, PR1C2, PR1C3, PR2C1, PR2C2, PR2C3, PR3C1, PR3C2, PR3C3; and six duplex mixes for PCA3: PC1C1, PC1C2, PC1C3, PC2C1, PC2C2, PC3C3, where the first three characters denote the assay and the last two characters denote the control gene used in a particular run. For example, PC2C2 denotes that the second assay for PCA3 (Hs03462121_m1, detailed in Methods) and the second endogenous control gene (Hs02800695_m1, detailed in Methods) were used for that specific experiment. C_T denotes the PCR cycle number at which the fluorescent signal passes a threshold value (effectively a logarithmic measure of template abundance). Let ∆C_T = C_T(study gene) − C_T(control gene); we used −∆C_T to quantify gene expression relative to a control gene, so that a positive value indicates upregulated expression of the study gene.
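The summarization described here can be expressed as a short script. The sketch below is a minimal illustration of that bookkeeping only; the column names, layout and toy values are assumptions of ours and are not taken from the study's actual R/SAS source code. It computes −∆C_T per run, takes the median of the triplicate runs for each duplex mix, and then the mean and median across mixes per sample.

```python
# Illustrative sketch of the -dCT summarization; column names and data are hypothetical.
import pandas as pd

def summarize_neg_delta_ct(df: pd.DataFrame) -> pd.DataFrame:
    """df has one row per PCR run: sample_id, duplex_mix (e.g. 'PC2C2'),
    ct_study_gene, ct_control_gene."""
    df = df.copy()
    # -dCT = -(CT_study - CT_control); positive values indicate upregulation
    df["neg_delta_ct"] = -(df["ct_study_gene"] - df["ct_control_gene"])
    # median of the three replicate runs for each sample / duplex mix
    per_mix = (df.groupby(["sample_id", "duplex_mix"])["neg_delta_ct"]
                 .median()
                 .reset_index())
    # summarize across the duplex mixes (nine for PRUNE2, six for PCA3)
    return (per_mix.groupby("sample_id")["neg_delta_ct"]
                   .agg(["mean", "median"])
                   .reset_index())

# Example usage with toy values for one tumor sample and two duplex mixes
toy = pd.DataFrame({
    "sample_id": ["T1"] * 6,
    "duplex_mix": ["PC2C2"] * 3 + ["PC1C1"] * 3,
    "ct_study_gene": [24.1, 24.3, 24.0, 25.2, 25.0, 25.1],
    "ct_control_gene": [22.0, 22.1, 21.9, 22.0, 22.2, 22.1],
})
print(summarize_neg_delta_ct(toy))
```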
The experiment was completed three times for each gene duplex mix; for example, we have three data points of the PC2C2 measure for a tumor sample. The median of the three −∆C_T values is used to estimate the gene expression of a particular gene duplex mix. We then looked at both the mean and the median of the nine estimates for PRUNE2 and the six estimates for PCA3, separately (data not shown). We did not see any significant difference utilizing mean or median in this or subsequent analyses. | 2022-10-31T01:04:55.252Z | 2022-10-30T00:00:00.000 | {
"year": 2023,
"sha1": "9db5209df097211f4ff669d475c3f3679172aa09",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1101/2022.10.25.22281522",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "0ce675655075c6e45fc0008e32d51428c642f116",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
20052072 | pes2o/s2orc | v3-fos-license | Remobilization of silicic intrusion by mafic magmas during the 2010 Eyjafjallajökull eruption
Abstract. Injection of basaltic magmas into silicic crustal holding chambers and subsequent magma mingling or mixing is a process that has been recognised since the late seventies as resulting in explosive eruptions. Detailed reconstruction and assessment of the mixing process caused by such intrusion is now possible because of the exceptional time-sequence sample suite available from the tephra fallout of the 2010 summit eruption at Eyjafjallajökull volcano in South Iceland. Fallout from 14 to 19 April contains three glass types of basaltic, intermediate, and silicic compositions recording rapid magma mingling without homogenisation, involving evolved FeTi-basalt and silicic melt with composition identical to that produced by the 1821-1823 AD Eyjafjallajökull summit eruption. The time-dependent change in the magma composition suggests a binary mixing process with changing end-member compositions and proportions. At the beginning of May, a new injection of primitive basalt was recorded by deep seismicity, the appearance of Mg-rich olivine phenocrysts, high sulphur dioxide output and the presence of sulphide crystals. Thus, the composition of the basaltic injection became more magnesian and hotter with time, provoking changes in the silicic mixing end-member from pre-existing melt to the solid carapace of the magma chamber. Finally, decreasing proportions of the mafic end-member with time in the erupted mixed magma demonstrate that the injection of Mg-rich basalt was the motor of the 2010 Eyjafjallajökull explosive eruption, and that its decreasing inflow terminated the eruption. A significant quantity of silicic magma is thus still present in the interior of the volcano. Our results show that detailed sampling during the entire eruption was essential for deciphering the complex magmatic processes at play, i.e. the dynamics of the magma mingling and mixing. Finally, the rapid compositional changes in the eruptive products suggest that magma mingling occurs on a timescale of a few hours to days, whereas the interval between the first detected magma injection and eruption was several months.
Introduction
Improved understanding of volcanic plumbing-systems is needed for better interpretations of precursors to volcanic eruptions. While deformation and seismic studies yield real-time information of physical changes beneath a volcano, geochemical investigation of the eruptive products allows identification of magma sources and quantification of magmatic processes leading to an eruption. Here, we use petrological and geochemical evidence, obtained on a precisely dated sample-suite of lava and tephra from the 2010 Eyjafjallajökull eruption in south Iceland (Fig. 1), to evaluate the triggering mechanism for the 2010 summit eruption and to quantify the magma differentiation processes. We show that basaltic injection remobilized older silicic magma, causing an explosive eruption of an inhomogeneous mixture of mingled magma. Such mechanical magma mixing triggering an eruption has been frequently inferred from mingled magma deposits (e.g. Sparks et al., 1977; Eichelberger, 1980; Nakamura, 1995; Clynne, 1999; Suzuki and Nakada, 2007; Pallister et al., 2008; Tonarini et al., 2009). Furthermore, we demonstrate how fast the composition and proportions of the mixing end-members changed, which is relevant for understanding the dynamics of magma mixing in general.
Figure 1. Shaded relief map showing Eyjafjallajökull and Mýrdalsjökull ice-caps and the volcanic systems Vestmannaeyjar, Eyjafjallajökull and Katla. The pass between the two ice-caps, Fimmvörðuháls, was the location of the basaltic flank eruption preceding the summit eruption of Eyjafjallajökull in 2010.
The Eyjafjallajökull 2010 eruption
Over the last fifteen years, episodic seismic swarms and inflation-induced deformation have been taken to indicate sill injections at mid-crustal depth beneath Eyjafjallajökull volcano (Guðmundsson et al., 2010; Sigmundsson et al., 2010; Hjaltadóttir et al., 2011). A deep-sourced inflation started in late December 2009, accompanied by an increase in seismicity. Deformation and earthquake activity continued until late on 20 March 2010, when a flank eruption broke out on a radial fissure at the Fimmvörðuháls Pass between the Eyjafjallajökull and Mýrdalsjökull ice-caps (Fig. 1). The eruption produced a lava field and two scoria cones comprised of olivine- and plagioclase-phyric and relatively primitive, mildly alkaline basalt until 12 April (Fig. 2). This was followed by a seismic swarm that migrated rapidly from a depth of more than 7 km towards the summit of the volcano (Hjaltadóttir et al., 2011), culminating in an explosive eruption in the early morning of 14 April. Magma-water interaction was intensive during the first two days but gradually declined, and the activity became purely magmatic by 21 April. During the first five days, magma discharge was on the order of 10⁶ kg s⁻¹, dropped to 10⁴-10⁵ kg s⁻¹ until early May when activity picked up and reached a discharge of 10⁶ kg s⁻¹ again on 5-6 May, followed by an irregular decline in discharge until the end of the eruption in late May. The magma produced is of benmoreitic to trachytic composition (Fig. 3a), with very fine to fine ash that disturbed air-traffic over Europe for extended periods in April and May.
Samples
Our sample suite comprises basaltic lava (FH-1) and tephra (FH-2) from the initial phase of the flank eruption at Fimmvörðuháls, a tephra (FH-3) collected directly from the fallout from the plume on 1 April, and lava (FH-4) from the last stage of the fissure eruption. The benmoreitic sample suite includes tephra collected 15 April (EJ-1), a composite sample of tephra produced 17-19 April (EJ-2), and tephra from 22 April (EJ-3), 27 April (EJ-4), and 5 May (EJ-5), in addition to two bread-crust bombs of trachyte composition (EJ-6, 7) from the final days of the eruption, collected on 3 June 2010 from the surface of a 45 m thick tephra pile on the eastern rim of the new crater. Tephra from the 1821-1823 penultimate eruption of Eyjafjallajökull was sampled from a soil section on the western flank of the volcano for comparison. The freshly collected samples were reduced to a powder in an agate ring-mill in preparation for whole-rock analysis. Tephra samples were mounted in epoxy, whereas the 1821 tephra was washed and sieved several times until all soil remains were eliminated. Two size fractions were mounted in epoxy and polished before in-situ analysis. No alteration of the glass fragments was observed. The exact timing of our samples is fundamental for precisely deciphering the magma dynamics prior to and during the eruption.
Electron probe microanalyses
Major element compositions of crystals and glasses were determined on a Cameca SX100 electron microprobe at the Laboratoire Magmas et Volcans of Clermont-Ferrand. Operating conditions were 15 kV accelerating voltage and a 15 nA focused beam for minerals. Synthetic and natural mineral standards were used for calibration, with the counting time set at 10 s for all elements. During glass analyses, analytical conditions were adjusted to minimise sodium mobility; basaltic and andesitic/dacitic tephra were analysed at 4 nA and 2 nA beam current, respectively, with a 10 µm defocused beam. An optimized mixture of mineral standards (synthetic and natural) and glasses (A-THO and VG2) was used for calibration. The counting time was 10 s for Na, Si, Ca, Ti and P; 20 s for Al and Mg; 30 s for Mn; and 40 s for K and Fe. The secondary international glass standard USGS VG-A99 (Jarosewich et al., 1979; Thornber et al., 2002) was analysed during each session to monitor for possible instrumental drift. Analyses carried out over three consecutive days yield relative standard deviations
Laser ablation inductively coupled plasma mass spectrometry
Trace element analyses in glasses were performed at the Laboratoire Magmas et Volcans (Clermont-Ferrand) using a Resonetics M50 EXCIMER laser (193 nm) coupled to an Agilent 7500cs ICP-MS. The laser was operated at 6 mJ energy, 2 Hz repetition rate and an 11 µm spot diameter. The ablation gas was pure helium; nitrogen (7 ml min⁻¹) and argon were mixed with the carrier gas via Y-connectors between the ablation cell and the ICP-MS. The analysis duration was split into two distinct parts: 40 s of background acquisition followed by 50 s of data acquisition from the sample. Stability of the signal intensity during ablation proved a good indicator of the analytical spot homogeneity.
The raw analyses were reduced with the Glitter software (van Achterberg et al., 2001), using CaO concentrations (measured earlier by electron microprobe) as the internal standard. NIST 612 glass was used as the primary standard; NIST 610, BCR2-G and A-THO, periodically analyzed during the laser sessions, were used as reference materials for run quality control. The latter two reference glasses have compositions similar to the analysed samples and are therefore well suited to estimate precision and accuracy. Despite the small spot size, precision and accuracy were always better than 10% for all elements at the 95% confidence level.
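The internal-standard quantification carried out by the Glitter software follows a standard relation that can be sketched as below. This is a hedged illustration only: the function, variable names and numerical values are ours, and the actual reduction was done with Glitter, not with this code.

```python
# Minimal sketch of internal-standard quantification as commonly applied in
# LA-ICP-MS data reduction. Names and numbers are illustrative, not measured values.

def concentration_from_counts(cps_elem_sample, cps_is_sample,
                              cps_elem_std, cps_is_std,
                              conc_elem_std, conc_is_std, conc_is_sample):
    """Return the element concentration in the sample (units of conc_elem_std).

    cps_*          : background-corrected count rates for the element and the
                     internal standard (IS) in the sample and the reference glass
    conc_elem_std  : known element concentration in the reference glass (e.g. NIST 612)
    conc_is_std    : internal-standard concentration in the reference glass
    conc_is_sample : internal-standard concentration in the sample (here from EPMA CaO)
    """
    # sensitivity of the element relative to the internal standard
    sensitivity = (cps_elem_std / cps_is_std) / (conc_elem_std / conc_is_std)
    return (cps_elem_sample / cps_is_sample) / sensitivity * conc_is_sample

# Example with hypothetical count rates and concentrations
print(concentration_from_counts(
    cps_elem_sample=1.2e4, cps_is_sample=5.0e5,
    cps_elem_std=2.0e4, cps_is_std=4.0e5,
    conc_elem_std=38.0, conc_is_std=85000.0, conc_is_sample=60000.0))
```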
Major-and trace element concentrations
About 100 mg of powder sample were fluxed with lithium metaborate (proportions 1:3) in a carbon crucible using an induction furnace. The melt pearl was immediately dissolved in diluted nitric acid and diluted 2000 times before ICP-AES analysis. Another 100 mg powder aliquot was dissolved in concentrated HF-HNO3, evaporated to near dryness and re-dissolved in 7 M HNO3. The aliquot was evaporated to near dryness and subsequently diluted in 0.4 M HNO3 to reach a total dilution factor of 5000 for determination of trace element abundances by quadrupole ICP-MS (Agilent 7500, Laboratoire Magmas et Volcans). The reaction cell (He mode) was used to reduce interferences on masses ranging from 45 (Sc) to 75 (As). The signal was calibrated externally with a reference basaltic standard (BHVO-2, batch 759), dissolved in the same way as the samples, using the GeoReM preferred values (http://georem.mpch-mainz.gwdg.de/). Both standards and pure 0.4 M HNO3 were measured every 4 samples. The external reproducibility of the method, as estimated by repeatedly running different standards (BCR-2, BIR, BEN), is < 5% (2σ) for most lithophile elements and < 15% for chalcophile elements.
Oxygen isotopes
Laser fluorination oxygen isotope analyses were performed at the University of Oregon stable isotope laboratory using a 35 W CO2 laser. Bulk grain samples ranging in weight from 1.1 to 2 mg were reacted with purified BrF5 reagent to liberate oxygen. The gases generated in the laser chamber were purified through a series of cryogenic traps held at liquid nitrogen temperature, with a mercury diffusion pump to eliminate traces of fluorine gas. Oxygen was converted to CO2 gas using a small platinum-graphite converter, and then the CO2 gas was analyzed on a MAT 253 mass spectrometer integrated with the laser line. Five aliquots of standards were analyzed together with the unknown samples; Gore Mountain Garnet (δ18O = 5.75‰) was used in the standard set. Day-to-day δ18O variability on the standards ranged from −0.1 to +0.25‰, and these values were added to the unknown samples to correct for day-to-day variability and to place absolute values on the SMOW scale. The obtained precision on the standards is better than 0.13‰ and 0.01‰ in the two sessions at 1 standard deviation.
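The day-to-day standard correction described above amounts to shifting the unknowns by the offset between the accepted and measured values of the session standards. The sketch below shows that arithmetic only; the session values are invented for illustration and are not the measured data.

```python
# Simple sketch of the session (day-to-day) correction: the offset between the
# accepted and measured garnet standard is applied to the unknowns of that session.
ACCEPTED_GORE_MT_GARNET = 5.75  # per mil, VSMOW

def correct_session(measured_standards, measured_unknowns):
    """Shift unknown d18O values by the mean offset of the session standards."""
    mean_std = sum(measured_standards) / len(measured_standards)
    offset = ACCEPTED_GORE_MT_GARNET - mean_std
    return [d18o + offset for d18o in measured_unknowns]

# Example session: standards read slightly low, so unknowns are shifted up
standards = [5.60, 5.65, 5.62, 5.66, 5.63]
unknowns = [5.45, 5.78, 5.30]
print(correct_session(standards, unknowns))
```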
Isotope ratios of Sr and Nd
About 100-150 mg of rock powder (chips for FH-3) were weighed into Teflon beakers for the samples and rock standards and leached for an hour in warm 6 M HCl. After leaching, the samples were washed in Milli-Q water and dissolved in a 2:1 mixture of concentrated HNO 3 and HF on a hotplate for 3 days. After drying down, the sample residues were redissolved in 6 M HCl, dried down and redissolved again in 6 M HCl to obtain clear sample solutions. The samples for Sr and Nd analysis were dried down and redissolved in 1 M HNO3 and passed through TRU.Spec column chemistry; the Sr and Nd fractions were further purified through Sr.spec and LN.spec column chemistries, respectively (Pin et al., 1994;Pin et al. 1997).
The Sr samples were analysed at the Imperial College London MAGIC laboratories and at the Laboratoire Magmas et Volcans in Clermont-Ferrand on Triton TIMS in static mode. Rubidium interferences were monitored and corrected for, but were always lower than 40 ppm. Data were corrected for instrumental mass fractionation using the exponential law and 88Sr/86Sr = 8.375209. Six analyses of NIST SRM 987 interspersed with the samples gave an average 87Sr/86Sr value of 0.710251 ± 8 (2σ).
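For readers unfamiliar with the exponential-law correction referred to above, the sketch below shows one common way it is computed: the fractionation exponent is derived from the measured and canonical 88Sr/86Sr and then applied to 87Sr/86Sr. The function name and the example ratios are ours; this is an illustration of the general technique, not the laboratory's own reduction code.

```python
# Sketch of an exponential-law mass fractionation correction for Sr isotope ratios.
import math

M86, M87, M88 = 85.9092607, 86.9088775, 87.9056123  # atomic masses of Sr isotopes

def correct_87sr_86sr(measured_87_86, measured_88_86, true_88_86=8.375209):
    # exponential law: R_true = R_measured * (m_num / m_den) ** beta,
    # with beta fixed by the normalizing 88Sr/86Sr ratio
    beta = math.log(true_88_86 / measured_88_86) / math.log(M88 / M86)
    return measured_87_86 * (M87 / M86) ** beta

# Example with a hypothetical measured pair of ratios
print(correct_87sr_86sr(measured_87_86=0.71041, measured_88_86=8.3520))
```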
Neodymium was analyzed in static mode and data were corrected for instrumental mass fractionation using the exponential law and 146Nd/144Nd = 0.7219. Samples were analyzed in two analytical sessions: during the first, the average value of the JNdi standard was 143Nd/144Nd = 0.512099 ± 20; during the second, 143Nd/144Nd = 0.512059 ± 20. Sample data were normalized to a 143Nd/144Nd value of JNdi of 0.512113. Samarium interferences were monitored and corrected for during the run; they were always lower than 100 ppm.

Phenocrysts in the Fimmvörðuháls flank-eruption basalt show variable composition and oscillatory zonation (Fo57-68, An79-68, Cpx Mg-number: 79-63). The magma is highly vesiculated and the groundmass is largely crystallized (Fig. 2). The interstitial glass has an evolved FeTi-basaltic composition similar to segregation veins at Surtsey and Holocene lavas from the Katla volcano (Sigmarsson et al., 2009; Óladóttir et al., 2008). Less evolved basaltic glass compositions are preserved in melt inclusions of olivine and plagioclase phenocrysts (Moune et al., 2011). In contrast, bulk samples of tephra from the explosive phase of the 2010 Eyjafjallajökull eruption are of a benmoreitic composition (Table 1). During the summit eruption, phenocryst compositions vary greatly, with olivines ranging from Fo80 (Fig. 2) to Fo46, feldspars varying from An69 to An9, and the Mg-number of clinopyroxene ranging from 72 down to 19. Magnetite is abundant and traces of apatite, pyrite and orthopyroxene are also present. Both the plagioclases and the clinopyroxenes display an inverse chemical zonation (e.g. Fig. 2e-f), with cores having, respectively, lower An content and Mg-number. Such compositional zonation is readily explained by magma mixing. In contrast, zoned olivines have a core with higher Fo (Fig. 2c). Noteworthy are microgranitic fragments composed of anorthoclase (An1.2Or32), tridymite, ferrohedenbergite (Mg-number = 19; En11Fs48Wo41) and fluorite, emitted during the first days of the summit eruption (Fig. 2d). The major-element concentration variations for the whole-rock and glass samples are shown in Fig. 3, where CaO vs. MgO is plotted (a) and the molar ratio of CaO over Al2O3 is displayed as a function of the Mg-number (b). The whole-rock CaO/MgO decreased from 2.41 in the initial phase (sample EJ-1) to 1.48 in bread-crust bombs from the final stage of the eruption. The whole-rock sample of the first tephra (15 April) plots on a binary mixing line defined by the interstitial glass of the Fimmvörðuháls basalt and the glass composition of the 1821-1823 AD rhyolitic tephra. Three glass compositions are detected in the composite tephra from 17-19 April (sample EJ-2): basalt with SiO2 of 49-51%, benmoreite (SiO2 = 60-61%), and trachyte (SiO2 = 69-70%; Fig. 3 and Table 2). These compositions plot on the same binary mixing line, indicating mechanical mixing, or mingling, of the evolved basalts with older silicic melt. In-situ glass analyses of tephra produced between 22 April and 5 May (samples EJ-3, 4, 5) are all of intermediate composition.
These tephra show increasingly lower whole-rock CaO/MgO values with time, indicating changes in composition of mixing end-members during the eruption. Other major elements display the same behaviour, namely a single mixing line on oxide vs. oxide diagrams for the early samples and changing ratios between the different elements for the later samples.
In-situ trace element measurements (see Table 3) in the three glass types of sample EJ-2, in three glass inclusions in phenocrysts of the Fimmvörðuháls flank basalt, and in tephra glass from the 1821-1823 eruption confirm the role of mingling in forming the 2010 Eyjafjallajökull benmoreite magma. Strong linear correlations are observed not only between incompatible element concentrations such as Rb and Th (Fig. 4), but also between those of compatible and incompatible elements (e.g. Sr versus Th). This suggests that crystal-liquid separation probably had too little time to occur. The apparent absence of fractional crystallization, despite the tenfold variation in Th concentrations, is best explained by rapid magma mingling with minimal melt homogenisation prior to eruption. The whole-rock tephra trace element compositions (Table 1) display the same Rb-Th and Sr-Th ratios. Uniform O, Sr and Nd isotope ratios in the whole-rock lava and tephra samples (Table 1) support this conclusion. δ18O of 5.96 ± 0.18‰ in EJ-1 is consistent with the silicic mixing end-member being formed by fractional crystallisation of mantle-derived basalts similar to those erupted laterally at Fimmvörðuháls (δ18O = 5.4-5.8 ± 0.2‰). During this process the global partition coefficient of Sr (D_Sr) between the fractionating mineral assemblage and the residual melt must have been close to unity (Fig. 4b).
The rapid magma mingling/mixing is also reflected in the highly heterogeneous and zoned mineral compositions in the benmoreitic tephra. For instance, tephra that fell during the second peak in magma discharge (i.e. 5 and 6 May) contains 50 µm zoned olivines (Fig. 2c) with a 10 µm thick rim of Fo48-50 but a core of Fo80, indistinguishable from olivines in the Fimmvörðuháls basalts. This suggests the arrival of deep-derived primitive basalts, which concurs with a deep seismic swarm (originating from a depth close to the mantle-crust boundary; Hjaltadóttir et al., 2011; Bjarnason, 2008), as well as increased magma and sulphur output as indicated, respectively, by a higher eruption column on 5 May (Arason et al., 2011) and satellite observations (Carn et al., 2010), together with the appearance of sulphide crystals in the benmoreitic tephra. The new influx of more primitive basalt magma coincides with the silicic mixing end-member changing to a less evolved composition, as indicated by the mixing curves and lines in Fig. 3. The composition of the final mixing end-member is present in the EJ-5 tephra and identified as a low-temperature melting component of Na-rich plagioclase (Fig. 2e-f), i.e. partial melts of the magma chamber's carapace (Fig. 1d) formed due to interaction with newly injected and hotter mantle-derived basalts. Taken together, the explosive Eyjafjallajökull eruption is best explained by mingling and mixing involving an older silicic intrusion that was heated up and remobilized by the injection of hot basalt magma that became more primitive with time. The high-resolution sample suite from the 2010 Eyjafjallajökull eruption allows us to estimate (a) the proportions of the basalt component in the mingled/mixed magma and its variations with time, and (b) the time-dependent changes in the composition of the deep-derived basalt magma (Fig. 4). These estimates are obtained from the calculated binary mixing curves shown in Fig. 3b, and from the intercept of the mixing lines with the fractional crystallisation vector of the basalts, applying the lever rule (see legend to Fig. 3). The results indicate that the proportions of the basalt decreased from approximately 50% in late April to less than 30% a month later, whereas the evolved FeTi-basalt composition early in the eruption was progressively replaced by a more primitive basalt composition at the end.
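As a worked illustration of the lever-rule estimate referred to above, the sketch below computes the mass fraction of the basaltic end-member in a mixed sample from binary mixing of a single oxide. The end-member and sample concentrations are hypothetical and only meant to show the arithmetic; they do not reproduce the published proportions.

```python
# Hedged sketch: binary (lever-rule) mixing fraction for one conservative element.
# C_mix = f * C_basalt + (1 - f) * C_silicic  =>  f = (C_mix - C_silicic) / (C_basalt - C_silicic)
# Concentrations below are illustrative, not measured values.

def basalt_fraction(c_mix, c_basalt, c_silicic):
    if c_basalt == c_silicic:
        raise ValueError("End-members must differ for the element used")
    return (c_mix - c_silicic) / (c_basalt - c_silicic)

# Example: MgO (wt%) in a hypothetical benmoreitic tephra lying between an evolved
# FeTi-basalt end-member and a silicic (rhyolitic) end-member
f = basalt_fraction(c_mix=2.6, c_basalt=4.5, c_silicic=0.1)
print(f"basalt fraction ~ {f:.2f}")  # ~0.57 with these toy numbers
```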
The petrological and geochemical results obtained so far suggest the following scenario. The real-time deformation results measured during the first three months of 2010 (Sigmundsson et al., 2010) were caused by the ascent and degassing of relatively primitive and slightly alkaline basalt magma that produced, via fractional crystallisation, evolved FeTi-basalts similar to those of Katla volcano (Sigmarsson et al., 2009). This evolved basalt appears to have accumulated at depth over the three months, and only shortly before the explosive eruption (13 April) encountered the partially molten 1821-1823 residual silicic magma body beneath the summit of the volcano (Keiding and Sigmarsson, 2011). The silicic magma intrusion/chamber appears to have hindered the rise of the basalt, whereas a portion of the primitive basalt emerged further east during the Fimmvörðuháls flank eruption. Three weeks later the flank eruption stopped when the FeTi-basalt was injected into the now heated and remobilised alkaline rhyolite body directly beneath the summit crater, provoking the explosive eruption of mingled benmoreitic magma. The magnitude 2.3 earthquake that occurred [...] silicic intrusion. During the first two weeks of the explosive summit eruption, evolved basalt was involved in the magma mingling/mixing process, and thereafter the basalt became less evolved due to the inflow of deeper-derived and more primitive magma. The ascent of deeper-derived basalts most likely caused the seismicity at 18-24 km depth observed in early May (Hjaltadóttir et al., 2011). Decreasing mafic end-member proportions with time in the erupted mixture strongly suggest that the basaltic injection remobilized the half-solidified residual silicic magma beneath Eyjafjallajökull and that the 2010 eruption was shut off by declining basaltic intrusion rather than by the emptying of a silicic magma reservoir. Therefore, the next eruption at this volcano is likely to produce silicic magma with corresponding tephra production. The strong evidence for magma mingling/mixing at the origin of recent explosive eruptions elsewhere, such as at Mt. St. Helens (USA; Pallister et al., 2008) and Mt. Unzen (Japan; Nakamura, 1995), and the time-related increasing proportions of mafic enclaves in volcanics from the on-going eruption at Soufrière Hills (Montserrat, Lesser Antilles; Barclay et al., 2010), clearly demonstrates that magma mingling and mixing is not only important as a triggering mechanism at hazardous volcanoes but is also a very dynamic process. The results of the present study clearly underline how fast magma mixing components can change.
(Table note: the first two melt inclusions in olivine with Fo73 and the two A-THO analyses are duplicate analyses of the same glass patch. The standard glass NIST 610 was run as an unknown during different runs and yields the overall reproducibility.)
Conclusions
The explosive summit eruption of Eyjafjallajökull in 2010 was triggered by an injection of Mg-rich basaltic magma several months earlier. This basalt stagnated below a silicic magma body (presumably residues from the penultimate alkali rhyolite eruption in 1821-1823), degassed, partially crystallized and evolved to a FeTi-basalt. The heat and gas liberated rose up into the half-frozen silicic magma, opening a pathway for the evolved basalt that triggered the explosive eruption on 14 April through magma mingling within the silicic reservoir. In the meantime, the Mg-rich magma by-passed the central magma chamber and produced a flank eruption until the passage through the central conduit was opened by the magma mingling/mixing, and deeper Mg-rich basalt rose from a depth in excess of 20 km into the silicic reservoir and caused increased magma output and a corresponding higher eruption column. The additional heat brought in by the fresh intrusion caused partial melting of the microgranitic carapace, causing changes in the composition of the mixing end-members. Finally, the basalt injection declined and the eruption came to a halt.
Figure 3 caption: Mixing lines between these evolved basalt compositions (MgO ~4.5; CaO ~9.5 wt%; Óladóttir et al., 2008) and the two silicic end-members, the 1821-1823 rhyolite and the oligoclase melt of 5 May (open crosses and dash-dot line; see text for further discussion), are also shown. The intercepts of these lines with the basaltic fractionation vector are used to estimate the degree of basaltic evolution (expressed as mixing proportions between evolved and primitive basalts in Fig. 5) in the mafic mixing pole. Dates of samples corresponding to the different intercepts are shown, in addition to the interval of active silicic mixing poles. (b) Molar ratios of CaO over Al2O3 against the Mg-number (#Mg = molar MgO over the sum of MgO plus FeO, calculated from total iron by assuming Fe2O3/FeO of 0.2). In the lower right corner, melt composition vectors of pure mineral fractionation from basalt are shown schematically; these are consistent with olivine dominating the fractionation vector from the FH-basalt, whereas the trend of the EJ-2 basalt glasses indicates clinopyroxene fractionation. Binary mixing curves are calculated using the same end-member compositions as in (a), and mingling and mixing proportions of basalt to silicic melt are derived from the calculated curves. Duplicate analyses of three Eyjafjallajökull bulk-tephra samples (EJ-5, 6, 7) indicate the overall analytical precision. Abbreviations: WR, whole-rock compositions; EJ, Eyjafjallajökull; FH, Fimmvörðuháls flank basalts; MI, melt inclusions.
Figure 4 caption (partial): ... Eyjafjallajökull eruption. Four spot analyses of three olivine-hosted melt inclusions are shown for comparison. Larger error bars for the in-situ analyses, compared with those of the whole-rock (WR) data, are caused by the small ablation crater size (11 µm diameter) imposed by the small area of crystal-free glass patches and the consequent lower count rates of each element. Multiple analyses of single grains are encircled in both panels. In (b), two curves of melt evolution during hypothetical crystal fractionation are shown for different D_Sr (assuming D_Th close to 0), with the remaining melt fractions indicated. The origin of the 1821-1823 alkaline rhyolite will be addressed elsewhere, but it is consistent with over 90% fractional crystallisation from primitive basalt; this differentiation mechanism appears dominant at Icelandic volcanoes close to the periphery of the island (Martin and Sigmarsson, 2007). Apatite fractionation, where D_Sr can be as high as 5 (Prowatke and Klemme, 2006), together with that of plagioclase, having D_Sr in the range 1-3 (Fabbrizio et al.),
Figure 5 caption (partial): ... and that of basalt melt in the benmoreitic tephra of Eyjafjallajökull. An arbitrary 5% error is assigned to the estimated magma mixing proportions that are derived from Fig. 3. The decrease of the basalt component suggests that the supply of silicic magma at depth is abundant (see text for further details). | 2018-01-14T05:38:13.218Z | 2011-12-02T00:00:00.000 | {
"year": 2011,
"sha1": "5ed7913d872f1137bd07e5b5a05fa81e750d29da",
"oa_license": "CCBY",
"oa_url": "https://www.solid-earth.net/2/271/2011/se-2-271-2011.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3397ee8e5c30f89d0413ddcefad9a71fde1f34d8",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Geology"
]
} |
134034048 | pes2o/s2orc | v3-fos-license | Urban Flood Simulation Using MODCEL—An Alternative Quasi-2D Conceptual Model
Abstract: Urban flood modelling has been evolving in recent years, due to computational facilities as well as to the possibility of obtaining detailed terrain data. Flood control techniques have also been evolving to integrate both urban flood and urban planning issues. Land use control and flow generation concerns, as well as a set of possible distributed measures favouring storage and infiltration over the watershed, have also gained importance in flood control projects, reinforcing the need to model the entire basin space. However, the use of 2D equations with highly detailed digital elevation models does not guarantee good results by itself. Urban geometry, including building shapes, walls, earth fills, and other structures, may cause significant interference in flood paths. In this context, this paper presents an alternative urban flood model, focusing on the system behaviour and its conceptual interpretation. The Urban Flood Cell Model-MODCEL is a hydrological-hydrodynamic model proposed to represent a complex flow network, using a set of relatively simple information and average values to represent the urban landscape through the flow-cell concept. In this work, to illustrate model capabilities, MODCEL is benchmarked in a test proposed by the UK Environment Agency. Then, its capability to represent storm drains is verified using measured data and a comparison with the Storm Water Management Model (SWMM). Finally, it is applied in a lowland area of the Venetian continental plains, representing floods in a complex setup at the city of Noale and in its surroundings.
Introduction
Urban floods are complex phenomena that usually require a mathematical model to support diagnosis and design procedures. The first mathematical flood flow models that were developed tended to focus mainly on the main flow paths or on the drainage network and its hydraulic structures. In this early period, computational limitations imposed the use of a one-dimensional (1D) approach. Even nowadays, it may be a possible (and useful) approach, mainly when designing storm drains and channels, since overflows are not expected to occur in the design process. However, if a drainage system is already failing and flooding a city portion, it is very difficult to work with a 1D branched network to make it representative of the phenomenon. In this case, the use of more detailed 2D models may still introduce imprecisions, especially if one relies exclusively on the mathematical model complexity as a guarantee of quality results.
Leandro et al. [1] also highlighted this problem, recognising that special key features found in urban areas, such as buildings and roads for example, can obstruct the expected natural flow paths and can cause sudden vertical drops, changes in the flow directions and local head losses. Thus, despite the fact that highly detailed Digital Elevation Models (DEM) may be available, containing detailed information about the morphology of such key features, they cannot be simply inserted into the 2D grid, without any further overland flow interpretations.
Considering this initial picture, this paper aims to present the use of the Urban Flood Cell Model-MODCEL, a mathematical model developed to cope with the urban diversity discussed above, as an alternative for urban flood modelling. The early development of MODCEL started in research on urban flood modelling carried out at the Federal University of Rio de Janeiro, in Brazil. This research pointed out the importance of representing hydrologic surface processes and surface flows jointly with storm drains, drainage channels and rivers. The main challenge was to put these demands together, giving primary importance to the physical interpretation, while keeping data needs as simple as possible. MODCEL was developed as an interpretative conceptual model that uses a quasi-2D approach to represent physical reality, fulfilling the proposed objectives of its construction. This model was presented in a first version in Mascarenhas and Miguez [8], and its evolution is shown here as an alternative tool for aiding in the process of urban water management and flood control design, especially when urban diversity and hydraulic structures play an important role in defining flood patterns.
MODCEL concept was based on the original work from Zanobetti and Lorgerè [9]. The flow cell concept was initially developed to overcome computation limitations when simulating large flood plains. However, due to a modular construction, this concept can be very useful, even nowadays, to simulate urban systems, where natural topography patterns, urban landscape, and hydraulic structures merge in a complex setting.
The modelling process partitions the watershed into a number of cells, which work as ponds (with storage capacity) interconnected by a vast array of hydraulic links, where the dynamic De Saint Venant equation occupies a prominent place.
In this work, MODCEL will be presented and applied, first in a benchmark test, then in a comparison with a storm drain model and, at last, in a complex urbanised flood plain, where the Italian city of Noale lies.
Urban Floods and Mathematical Modelling Importance
Urban floods are usually associated with a complex terrain set-up, where several structures take part. Urban environments may be responsible for a multitude of flow possibilities when storm drains fail. It is quite usual that water spilling out of the drainage system may cause inundation over vast areas and city structures may interact with hydraulic structures, composing an unplanned flow network that includes urban surfaces, mainly through streets acting as channels. At the same time, several undesired reservoirs are addicted to the system, with parks, public squares and buildings temporally retaining waters, without having been designed for this function (and in an undesirable way).
This situation usually demands the aid of a mathematical model to help the design team understand how the resulting system behaviour makes the drainage network interact with the urban landscape.
Another important and current aspect regarding urban drainage discussions refers to the changing paradigm associated with designing flood control alternatives. Burian and Edwards [10] reviewed the evolution of the drainage systems from 3000 BC to present days and pointed out interesting aspects:
• The urban drainage systems often evolved through trial-and-error modifications after the systems were initially constructed;
• Changes in perspective of urban drainage in a city were most often caused by disease outbreaks, scientific discoveries, or technical advances in planning, design, and construction.
The traditional drainage design approach works to improve channel conveyance, which often relies on the canalisation or rectification of watercourses (focusing on adapting the drainage network to the generated discharges). This approach arose during the development of the industrial city, when the drainage systems were designed to face sanitation problems, conveying storm waters and wastewaters jointly. The urban growth of the industrial city occurred with very few controls [11], leading to several urban infrastructure gaps. Although the hygienist concept related to the traditional urban drainage design was important to address public health problems at that time, this approach proved to be unsustainable, especially when the urbanisation process became more intense. Urbanisation itself limits river canalisation enlargements. Since city growth increases flow generation, this approach tends to transfer problems to downstream reaches of the basin.
On the other hand, Andoh [12] reviewed urban drainage and wastewater practices, stating the need for a shift from the reactive framework of the traditional approach, which acted on the consequences of the increasing flows, to a proactive-preventative approach, centred on sustainable principles and involving the social dimension. To this end, the author discusses the adoption of distributed facilities to attenuate and/or store and manage urban waters, dealing with the problem at its early stages, near the sources of flow generation.
In this context, in the last decades, several approaches were developed to better adjust flow patterns in space and time. Among these approaches, it is possible to mention (not being exhaustive): low impact development [13,14]; sustainable urban drainage systems [15][16][17][18]; and water sensitive cities [19][20][21]; among others, with similar objectives.
Thus, the new trends in urban drainage point to distributed measures over the watershed (not only in the drainage network), intended to manage flow generation and to minimise the impacts of urbanisation on natural flow patterns. Once again, mathematical models arise as important tools to help in the design process of flood control alternatives, allowing action in a distributed way over the watershed and the combination of the effects of the proposed measures in space and time.
Urban Drainage Modelling
Urban floods, as previously mentioned, involve a great variety of flow patterns over a very complex surface, passing through different hydraulic structures. Waters spilled out of the system may flow on the streets as though they were channels. Depending on water levels reached on the streets, open spaces and even buildings may store and retain waters, acting as reservoirs. Walls may work as weirs, for example, dividing a certain portion of the urban area, with different flow depths and discontinuous flow paths at each side. In such cases, flows may occur quite differently from the original (confined) expected behaviour, picturing their own patterns, in various directions and deeply influenced by interactions with local structures.
One of the most important aspects to consider in flood modelling of urban environments is the way which the resistance caused by buildings or other structures is represented in the model [22].
In this context, a model capable to fulfil the requisites for urban floods simulation should be able to represent hydrologic and hydraulic processes in a distributed way, integrating different possibilities of superficial flows and their interactions with urban landscape and with the formal drainage system.
In this situation, if the superficial flow is mainly governed by topography and urban occupation patterns, it is difficult to adequately represent the flooding behaviour with strict 2D models. High imperviousness and short time of concentration are common elements of urban hydrology. Under such conditions, sewer drainage networks play a primary role in modern cities when transporting runoff during storm events [23]. However, 1D models focused on the drainage network will also miss the representation of important superficial physical processes.
Djordjevic et al. [3] presented the concept of dual drainage modelling, providing a more realistic alternative for representing urban flooding. In this concept, the network of storm drains is able to interact with superficial flows over open spaces, through the streets or between houses, composing a highly varied model in space and time. Besides that, underground pipes, in this case, might present free or surcharged flows. Note that, in 1999, when this work was published, this was an evolution from simple 1D modelling, bringing 2D needs into discussion.
Following this tendency, Nasello and Tucciarelli [24] proposed a dual multilevel urban drainage model, representing the system as a double network, formed by an upper network of open channels (the street gutters) and a lower network of closed conduits (the sewer pipes), mainly focusing on this interaction. This model considered that a significant part of the flow moves along streets, until reaching an available inlet. Two overlapping cells occupy the same space location, in different levels, representing the street and the sewer network at each inlet basin, connected by a vertical link. Mass and momentum conservation laws are applied to each pipe/channel section, and the diffusive approach is used. When the sewer is under pressure, the hydrodynamic behaviour of the lower layer connects with the upper one. Leon et al. [25], for example, proposed a model to be developed in four modules: (1) hydrology; (2) street flows; (3) flow interception at inlets; and (4) storm-sewer flows in urban areas. More recently, Chang et al. [23] developed a "novel approach" to represent flow interactions between storm sewer system and overland surface flows. Noh et al. [26] compared urban flood modelling approaches with results obtained in laboratory scale experiments for the interaction of manholes, sewer pipes and surface flows. These examples show that this matter is still a current modelling challenge.
Maksimovic and Prodanovic [27] stressed that improving the modelling of surface-subsurface interactions requires not only a change in the modelling concepts, but also detailed spatial data resolution. These data refer to land use, terrain elevation and other features affecting surface runoff, available in digital form at a desirable horizontal and vertical resolution. At the time of this publication, the authors stressed that this could be a limiting factor in evolving models, since the required information was not always easy or cheap to obtain. In fact, these limitations are being overcome, although detailed terrain information is still not cheap. Leandro et al. [28] discussed 1D/1D and 1D/2D models, respectively, for sewer and surface representations. One of the observations made is that 2D models are computationally much more expensive than 1D models, resulting in a greater computation time. In general, Leandro et al. [28] say that the choice between using a 1D surface network model or a 2D surface system model depends on the physical behaviour mapped for the case study. This choice determines the reliability of the results and the computational time required to obtain them. Considering the simplicity of 1D models, it is easy to adjust their parameters and the result may be even better than that obtained by 2D models, where the uncertainties in the required data are greater and the difficulties for calibration are higher. In the presence of major overbank flows and storm drain failures, however, the flows occurring over the urbanised flood plain may determine the use of 2D models as the best option.
The results obtained by Leandro et al. [28] showed evidence that it is possible to set up an accurate 1D/1D model, depending on the problem characteristics. To accomplish this task it is necessary to define a detailed 1D surface network of pathways and possible ponds based on the physical reality (expressed, in this case, by a digital elevation model). Manholes should be linked to pathways, pathways to ponds and ponds to ponds. The connection between sewer and surface used a multiple linking element, depending on the characteristics of the flow. This conception is similar to that of the flow cell.
If the overland flow paths are well known, 1D surface models may be built in a sound and effective way, being an economical alternative to 2D models [29].
Simões et al. [2] presented a proposal that joined the 1D/1D and the 1D/2D approaches in a hybrid configuration, aiming to take advantage of the benefits of each one while overcoming their drawbacks. A case study was developed for the Canbrook catchment in London (United Kingdom). Several tests were performed. The 1D/1D modelling was used to represent the higher areas of the basin, while the 1D/2D concept was applied in the areas where larger flooded areas were expected. Results showed that the hybrid model was almost as fast as the 1D/1D model, with results similar to those of a 1D/2D model applied to the entire area.
Eleutério and Mosé [30] compared 1D, 1D/2D and 2D modelling for mapping riverine flooding in the town of Fislis, eastern France. Complementarily, 28 scenarios were also built, varying the choices made by the modeller when representing the topography of the river-floodplain system. Considering the modelled results for the flooded areas and water depths, the authors highlighted that special attention should be paid to both modelling and modeller choices when representing the physical situation. Although defining the model dimensions (1D, 1D/2D and 2D) appeared to be the most important choice, the fact is that the physical modelling interpretation needed to accommodate different information scales and detailing levels might also significantly influence the precision of the modelled results.
Abily et al. [31] raised a pertinent and important discussion about possibilities, performances and limits of the use of standard modelling tools for high-resolution runoff simulations over an industrial site as an example of a dense environment with complex aboveground structures that affect (and alter) drainage paths. These are common characteristics also found on an urban environment. This work highlights (like previously cited works) that strict 2D models are being used to represent urban areas and high-resolution topography data are becoming available to support these models. However, these authors introduce new concerns arguing that the modelling of complex sites deserves special attention on topics such as rapid changes in flow regime, small water depths and high gradient properties, and vertical effects in runoff hydrodynamics introduced by some of the aboveground components. We can add to this discussion some other potential problems such as the wetting and drying of model elements, and discontinuities in 2D flood surface moving over the urban watershed (generated by physical obstacles). Walls or earthworks may produce independent surface flows connected by weirs or orifices, for example.
In particular, Abily et al. [31] tested two 2D models and mapped some of the difficulties mentioned. The authors also stress that field measurements would have been desirable to validate models. This final remark seems to be valid in most urban cases. Although 2D modelling is becoming more common and efforts are being made to benchmark models [32,33], the lack of observed data seems to be a common problem when comparing model simulations to real floods in spatially distributed real world events (e.g., [34,35]).
Urban Flood Cell Model-MODCEL
MODCEL is a quasi-2D model [36], as defined in the classical literature. It represents the two-dimensional characteristics of the watershed, but uses only 1D equations. MODCEL is able to describe natural or artificial watercourses and elements of the urban fabric (streets, squares, roofs, etc.), the flow in the subterranean storm drains, and the mutual "upper layer-lower layer" connections amongst such elements, including overflows. Because it integrates an upper and a lower layer of flows (superficial flows and storm drains), MODCEL can be seen (in a particular interpretation) as a quasi-3D model. However, by representing the flows through 1D equations written for pre-defined possible flow paths, the model preserves simplicity and spares computational time. On the other hand, this is also one of the weaknesses of this representation, since it fails to accurately model real 2D flow surfaces by not considering the cross influences of flow velocities along the x,y Cartesian axes. MODCEL also incorporates a simple distributed hydrological representation, performing rainfall-runoff transformations in each cell.
Topographic and hydraulic representations of the physical reality are two core elements in MODCEL. It is important to point out, however, that MODCEL does not need a Digital Terrain Model (DTM) in the strict sense; it is the responsibility of the modeller to interpret terrain features to provide synthetic geometrical information characterizing each individual cell and each connection between cells to the model. In this way, it is important to note that MODCEL cannot function by a blind, automatic application of geomatics tools and the objective description of boundary conditions. It rather obliges the modeller to investigate and understand how the real system works and how each component can be represented to simulate system functioning. At first glance, this could be seen as another weakness (because it is neither automatic nor easy), but, through this process, inevitably, the particularities of the drainage system come to light and an actual, very useful understanding of the system behaviour is gained.
Background
In the 1960s, the first relevant mathematical model capable of describing two-dimensional flow patterns was proposed and implemented. This model was constructed for the Mekong delta river area by the Société Grenobloise d'Etudes et Applications Hydrauliques (SOGREAH), in a work for UNESCO [9,37]. This model represented the basin using storage cells, which were able to connect river reaches and floodplains to simulate flow patterns in the inundated Mekong delta areas, considering both natural floods and floods modified by a regulation dam, which would act in flood damping, flow regularization for navigation purposes, and irrigation. The modelled area had about 50,000 km² and was inhabited by approximately 10 million people.
In 1990, a different kind of cell model, considering a mesh of rectangular cells, was developed for heavy floods simulation, including dam break [42]. In this latter case, zones near the dam were treated in a particular way, considering the shock wave and privileging inertia effects. Distant dam zones were considered to be characterized by great flood plains, terrain topography, land use, presence of dykes, among other factors, so the shock waves could be considered to have suffered significant dissipation. In this way, cells could represent distant flood plains.
The first version of a cell model developed in Brazil was built to represent the Pantanal Matogrossense, a large marsh in the west-central region of Brazil [43]. Later, this model was adapted to an urban environment, gaining a series of new capabilities; it was presented by Mascarenhas and Miguez [8] in an initial version that has been continuously improved and is presented in this paper in its current stage of development. This model was called MODCEL. Some applications of this model can be found in the literature (see, for instance, [44][45][46][47]).
Basic Concepts and Mathematical Modelling Structure
MODCEL is based on the basic principles of the mass, energy and momentum conservation laws. The entire basin and its different elements are represented by cells, including rivers, channels, storm drains, flood plains, hilly areas, urban areas, reservoirs, and other structures. Therefore, several urban structures and flow patterns can be simulated by the combination of a pre-defined set of cell types and cell links. The cells act as the storage elements of the model, representing terrain elevation and land occupation characteristics, while the links, on the other hand, activate the flows between each pair of cells. Thus, cells and links are the basic computational elements of MODCEL.
The mass balance is applied to all cells. Thus, at each time step, the stored amount of water in one cell depends on the discharges exchanged between this cell and its neighbours, as well as on the rainfall contribution added through a rainfall-runoff transformation. Two simple hydrologic models are available in MODCEL: the use of the Rational Method per cell, applying a runoff coefficient evaluated according to the land use characteristics of each cell; or the use of a simple hydrological model to represent infiltration, vegetal interception and depression retention, with the latter two components considered jointly as a limited reservoir. On the other hand, infiltration can occur as long as there is water accumulated over the surface of the modelled cell, even after the rainfall has finished. Abstraction (vegetal interception + retention) and infiltration are considered to occur simultaneously in the model, as part of the abstraction may occur on impervious surfaces.
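As an illustration of the per-cell Rational Method option mentioned above, the sketch below converts a rainfall intensity into an inflow discharge for a single cell using its contributing area and a land-use-dependent runoff coefficient. The function and parameter names are ours, not MODCEL's, and the values are illustrative.

```python
# Hedged sketch of a per-cell Rational Method rainfall-runoff transformation.

def cell_rainfall_inflow(intensity_mm_h: float, area_m2: float, runoff_coef: float) -> float:
    """Return the runoff discharge P_i (m^3/s) generated by rainfall over one cell.

    intensity_mm_h : rainfall intensity over the time step (mm/h)
    area_m2        : total plain area of the cell receiving rainfall (m^2)
    runoff_coef    : runoff coefficient from the cell's land use (0-1)
    """
    intensity_m_s = intensity_mm_h / 1000.0 / 3600.0  # mm/h -> m/s
    return runoff_coef * intensity_m_s * area_m2

# Example: a 2 ha urban cell (C = 0.7) under a 50 mm/h rainfall
print(cell_rainfall_inflow(50.0, 20_000.0, 0.7))  # ~0.19 m^3/s
```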
The mesh of cells composes a hydrodynamic looped network, in a spatial representation that links surface flows, channel flows and storm drains, allowing discharges to occur in different directions on the modelled watershed, depending on the calculated water levels. All mathematical relations written are one-dimensional. Figure 1 illustrates the basic concepts of a cell. There are some important characteristics associated with every cell to adequately represent land surface properties: the total plain area, where the rainfall occurs; the storage area, where the mass balance is applied; and the land use and occupation characteristics, which are essential to estimate run-off generation. In urban cells, the different levels established by streets, sidewalks and buildings affect the storage capacity. Different patterns may be pre-defined in MODCEL, accounting for the different characteristics of neighbourhoods in the urban tissue. These patterns influence storage availability (and consequent flooding levels) by establishing superficial areas associated with different terrain elevations. Figure 2 illustrates the different levels defined by the urban patterns inside an urbanised cell. The street level refers to the lowest area available for storage purposes. The sidewalk level refers to sidewalks themselves, but also to gardens, parks, and parking lots. The building level indicates the threshold for initiating the flooding of houses and their contents.
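One possible way to picture how these urban levels affect storage is sketched below: the water surface area available for the mass balance grows as successive levels are submerged. This is a purely illustrative interpretation of the concept; the threshold elevations, areas and function name are assumptions of ours, not MODCEL's internal formulation.

```python
# Illustrative sketch (not MODCEL code): urban-cell water surface area vs. water level.

def storage_area(water_level, street_lvl, sidewalk_lvl, building_lvl,
                 street_area, sidewalk_area, building_area):
    """Water surface area available for storage at a given water level (m^2)."""
    area = 0.0
    if water_level > street_lvl:
        area += street_area
    if water_level > sidewalk_lvl:
        area += sidewalk_area          # sidewalks, gardens, parks, parking lots
    if water_level > building_lvl:
        area += building_area          # flooding starts to reach houses
    return area

# Example: water at 10.35 m floods streets and sidewalks but not buildings yet
print(storage_area(10.35, street_lvl=10.0, sidewalk_lvl=10.2, building_lvl=10.5,
                   street_area=3000.0, sidewalk_area=5000.0, building_area=12000.0))
```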
The set of pre-defined cell types used in MODCEL is listed below:
• River/channel cells: This type of cell is used to model the main free open channel flows, in which the cross section is taken as a rectangular equivalent shape.
• Storm drain cells: This type of cell represents enclosed sections in the drainage network, with a top limit to the flow depths. It allows superficial or surcharged flows.
• Urban surface cells: These cells are used to represent urban flood plain surfaces, where run-off occurs, as well as flood flows in inundated areas. They are larger areas, when compared with river cells, and they act as storage areas linked to each other by a hydraulic link (a street functioning as a channel, for example). They may also represent slope areas, if a small storage area is assigned, leaving most of the total area just to receive the rainfall contribution.
• Natural surface cells: These cells are similar to the preceding ones, but they do not consider any kind of urbanisation pattern. The connection between two of these cells may be done by natural channels or thalwegs. When there are local elevations inside this kind of cell, the storage area is reduced to calculate water levels inside the cell adequately.
• Reservoir cells: These cells are used to simulate a temporary pond or reservoir, represented by the relation of the terrain elevation with the surface area. Departing from this curve, it is possible to evaluate the stored volume variation from the water depth variation. The reservoir cell type may play the role of damping inflow discharges when representing flood control measures.
The water level variation in a cell i, at a time interval t, is given by the continuity equation applied for that cell as stated in Equation (1), in discrete terms.
where Q_{i,k} is the discharge between two neighbouring cells i and k; Z_i is the water surface level at the centre of cell i; A_{Si} is the water surface area of cell i; and P_i is the discharge related to the rainfall over the cell. The discrete time (n + 1)·∆t, represented by the index t + 1, is taken as the calculation time, when the variables are unknown. On the other hand, at the discrete time n·∆t, represented by the index t, all variables are known, due to their previous calculation or because they were prescribed as initial conditions. The water surface area of cell i, A_{Si}^t, is taken as a function of the known water level Z_i^t, implying that a first-order approach, (∆A_{Si}/A_{Si}) ≪ 1, is adopted. P_i is a known value, since the rainfall is considered a known entry for all time intervals. Z_i^{t+1} and Q_{i,k}^{t+1} are unknowns and the numerical scheme is implicit. To solve this equation, the unknown discharges may be written as a function of the water levels in cell i and all its k neighbour cells. Thus, to avoid the presence of a non-linear term, it is possible to develop Q_{i,k}^{t+1} in a Taylor series, taking only the first-order terms, as shown in Equation (2).
Taking into account the previous discussion, Equation (1) may be re-written as Equation (3).
This way, analysing Equation (3), discharges are written at the time interval n·∆t, when all the variables are already known. The variables ∆Z_i^{t+1} and ∆Z_k^{t+1} are related, respectively, to the water levels Z_i^{t+1} and Z_k^{t+1}, written for each cell, and they are the only unknowns. They refer to the considered cell i and to its immediate adjacent k neighbours. Therefore, this system can be solved in a relatively simple way, once the mathematical relations for the discharges between cells are written for values of Z_i^t and Z_k^t. Cells are arranged in a topological scheme, which is numerically solved by a double sweep method [36,37].
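For readability, a possible LaTeX rendering of the discretised and linearised balance implied by Equations (1)-(3) is sketched below; this is an interpretation built from the definitions given above (with $Q_{i,k}$ counted positive when flowing into cell $i$), not a verbatim copy of the original equations.

$$ A_{S_i}^{t}\,\frac{\Delta Z_i^{t+1}}{\Delta t} \;=\; P_i \;+\; \sum_{k} Q_{i,k}^{t+1}, \qquad Q_{i,k}^{t+1} \;\approx\; Q_{i,k}^{t} + \frac{\partial Q_{i,k}}{\partial Z_i}\,\Delta Z_i^{t+1} + \frac{\partial Q_{i,k}}{\partial Z_k}\,\Delta Z_k^{t+1} $$

After substituting the linearised discharges, each cell equation contains only $\Delta Z_i^{t+1}$ and the $\Delta Z_k^{t+1}$ of its immediate neighbours, which is the linear system solved by the double sweep method mentioned above.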
The discharge links between cells can be expressed through known hydraulic laws. The possibility of introducing different mathematical relations is one of the features that inspire the use of the cell model to represent the urban floods diversity. The types of links considered in the current version of the model are briefly presented and discussed in the following.
Types of Links:
• River/Channel link: This link is related to river and channel free flows, corresponding to the De Saint Venant dynamic equation. Equation (4) results from the consideration of rectangular cross sections.
where A_{i,k} is the wetted cross-section area between cells i and k; S_f is the energy line slope; and the other variables were previously defined.
The parameter A_{i,k}, and any other parameter that appears in the flow section between cells i and k, may be evaluated through a weighting procedure departing from the values of the water levels Z_i and Z_k.
Note that, when working with natural river stretches, the hypothesis of a rectangular cross section may be strong and should be used carefully. However, when modelling channelized urban rivers and considering artificial drainage systems, this is a reasonable approach. Besides that, the cell concept allows the modeller to compose a cross section using multiple cells with different terrain levels, representing the main channel, the secondary channels and the flood plains.
Remembering that the discharge relations are written at the time interval n·∆t, this expression evaluates the discharge explicitly and provides the entry required in the modified continuity Equation (3). Writing Equation (5) in discrete terms leads to Equation (6), considering cell i in the upstream position. S_f, in turn, may be approximated by Expression (7), where n is Manning's roughness coefficient and R_{i,k} is the hydraulic radius of the flow cross-section between cells i and k.
Combining Equations (6) and (7), the term Q_{i,k}^t is the focus of interest. To make this term explicit and obtain a direct solution for the equation, the quadratic terms were "factored" using a numerical simplification, as can be seen in Expression (8).
From this discussion, Equation (9) for the river flow link is obtained and may be used in the mass conservation balance.
• Surface flow link: This link corresponds to free surface flow without inertia terms, as presented in Zanobetti et al. [37]. MODCEL uses this link frequently to represent flow between surface cells (natural and/or urban).
• Storm drain link: There are two possibilities to use this link. The first one, which refers to the original representation, considers the main drainage line entering a closed cross section. In this case, MODCEL considers the closed storm drain as cells linked to each other. They also communicate with superficial cells, through manholes, to the street above them. In this situation, flow over the street, represented by an urban surface cell, occurs with a free surface and the water level associated to this flow can be considered equal to the piezometric line level of the drowned storm drain flow. In this case, cross sections are also rectangular. It is important to note that the mass balance is calculated according to the real geometry, and the model keeps track of the cross-section water levels and possible surcharge.
In the second possibility, which is used to represent minor drainage network, the manholes are represented by cells, but the storm drains are simple links, without the need to introduce a formal cell. Cross sections are taken as circular ones.
Thus, departing from the Bernoulli Equation (10), Equation (11) is developed, where cell i is in the upstream position. Figure 3 shows the storm drain link representation. It is important to stress that, with these considerations, depending on the water levels involved in the calculation, flow might be forced backwards (which can indeed happen in reality). In MODCEL, junctions of river reaches are treated as special cells with a "Y" shape, where the mass balance equates inflows and outflows and, therefore, this kind of confluence does not need a special link.
• Inlet link: This link promotes the interface between the street gutters of the surface cells and the underground storm drain cells. When not drowned, it acts as an equivalent weir conveying flow from streets to storm drains. This weir has the length of the perimeter of a single inlet multiplied by the number of inlets along the street modelled by the considered cell. When drowned, this link considers flow occurring through a certain number of orifices associated to the inlet grates in the street.
• Broad crested weir link: This link represents the flow over broad-crested weirs. It is used, mainly, to represent the flow between a river and its riverine areas. The classic formula of flow over broad-crested weirs is used here. Flow over a weir may be free or drowned, depending on the water levels of the cells connected by this link.
• Orifice link: This link represents the classic formula for flow through orifices.
• Reservoir link: This link combines orifices (at different possible heights), as the outlet discharge of a reservoir, with a weir, which may or may not start operating, depending on the reservoir operation. It is useful to simulate the damping effect of a reservoir in the design condition, and to verify reservoir operation in more severe conditions (those in which the weir starts to be used).
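To make the role of these hydraulic links more concrete, the Python sketch below implements simplified discharge laws for three of the link types described above (broad-crested weir, orifice, and a Manning-type channel link). The coefficients and the drowned-flow correction are generic textbook choices, not the expressions actually coded in MODCEL, which are given by Equations (4)-(11).

```python
import math

G = 9.81  # gravity (m/s2)

def weir_discharge(z_up, z_down, crest, width, c=1.7):
    """Broad-crested weir: free flow Q = c*L*h^1.5, with a crude generic
    submergence reduction when drowned. Reverse flow is not handled here."""
    h_up = max(z_up - crest, 0.0)
    if h_up == 0.0:
        return 0.0
    q = c * width * h_up ** 1.5
    h_down = max(z_down - crest, 0.0)
    if h_down > 0.0:
        q *= max(1.0 - (h_down / h_up) ** 1.5, 0.0) ** 0.385
    return q

def orifice_discharge(z_up, z_down, z_orifice, area, cd=0.6):
    """Orifice law Q = Cd*A*sqrt(2*g*dh), with the head measured to the
    downstream level when submerged, or to the orifice level when free."""
    dh = z_up - z_down if z_down > z_orifice else z_up - z_orifice
    if dh <= 0.0:
        return 0.0
    return cd * area * math.sqrt(2.0 * G * dh)

def manning_channel_discharge(z_up, z_down, bed_up, bed_down, width, length, n=0.02):
    """Channel link: Manning equation for a rectangular section; the flow
    depth is averaged here as the simplest possible weighting, and the sign
    of the discharge follows the water-level slope (flow may reverse)."""
    depth = 0.5 * ((z_up - bed_up) + (z_down - bed_down))
    if depth <= 0.0:
        return 0.0
    slope = (z_up - z_down) / length
    area = width * depth
    radius = area / (width + 2.0 * depth)
    return math.copysign(area * radius ** (2.0 / 3.0) * math.sqrt(abs(slope)) / n, slope)
```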
This section intends to show MODCEL performance in a benchmark test proposed in a research commissioned by the British Environment Agency in the context of the Flood and Coastal Erosion
Risk Management Research and Development Programme. The resulting report of this research [33] describes "the results from a benchmarking exercise assessing the latest generation of 2D hydraulic modelling tools for a variety of purposes in Flood and Coastal Risk Management to support Environment Agency decision making". This benchmarking exercise involves 10 test cases and one of the objectives of this research is to provide a data set against which a model can be evaluated by its developer.
Test 8, in particular, was designed to benchmark models for urban flood modelling and it is divided in two parts, 8A and 8B.
Test 8A assumes that the flood arises from two sources: a uniformly distributed rainfall event (peaking at 400 mm/h over a time base of 3 min), applied to the modelled area; and a point discharge source occurring over a time base of approximately 15 min, reaching a peak at 5 m 3 /s, 35 min after the rainfall event. This point source occurs at the location represented by the "INFLOW" indication in Figure 4. In Test 8B, a culverted watercourse of circular section is assumed to run through the site, with a single manhole at the location indicated as "MANHOLE" in Figure 4. An inflow boundary condition is applied at the upstream end of the pipe, forcing a surcharge at the manhole, aiming at assessing the model capability to simulate shallow inundation resulting from a surcharged underground pipe.
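Purely as an illustration of how the Test 8A forcing could be assembled for a model run, the sketch below builds the two input time series from the figures quoted above (rainfall peaking at 400 mm/h over a 3 min base; point inflow peaking at 5 m³/s over a roughly 15 min base, with the peak 35 min after the rainfall). Triangular pulse shapes and the 1 s time step are assumptions made only for this sketch; the benchmark report defines the exact hyetograph and hydrograph.

```python
def triangular_pulse(t, t_start, base, peak):
    """Symmetric triangular pulse: zero outside [t_start, t_start + base]."""
    if t < t_start or t > t_start + base:
        return 0.0
    half = base / 2.0
    return peak * (1.0 - abs(t - (t_start + half)) / half)

dt = 1.0  # s, assumed time step
times = [i * dt for i in range(int(3 * 3600 / dt))]  # first three hours

rain_mm_h = [triangular_pulse(t, t_start=0.0, base=3 * 60.0, peak=400.0) for t in times]

inflow_start = 35 * 60.0 - 7.5 * 60.0  # so the 5 m3/s peak falls 35 min after the rain
inflow_m3_s = [triangular_pulse(t, inflow_start, base=15 * 60.0, peak=5.0) for t in times]
```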
The modelled area is a rectangle of approximately 0.4 km by 0.96 km as it is shown in Figure 4. Model grid resolution is 2 m, resulting in ~97,000 nodes (0.388 km²). The test considers initial dry bed, Manning's coefficient of 0.02 for roads and pavements and 0.05 elsewhere. Figure 5 shows the test area divided into cells, resulting in a mesh of 163 cells (contrasting with the expected 97,000 simulation nodes). Roads were individualised to act as channels, while blocks (with buildings and open green areas) were mainly taken as storage areas, although they are also allowed to represent surface flows. Figures 6 and 7 show the results obtained with MODCEL, respectively, for Test 8A and Test 8B, compared with the published results [33]. Time of simulation using MODCEL to obtain these results was 3 min and 10 s. The simulation time step gave answers at each 1 s. Observing the results obtained with MODCEL, when compared with the set of 2D models that participated in the test and that had their results published in Néelz and Pender [33], it is possible to note that there are no significant differences. In Test 8A, the major differences refer to the velocity measures, and this is, probably, the major weakness of MODCEL, due to the hydrodynamic 1D relations, that are not able to capture transversal interferences. Test 8B showed equivalent results among the different models and MODCEL, both for discharge and water level observations.
MODCEL and Minor Drainage Representation at La Riereta Watershed, Sant Boi de Llobregat/Spain
This test was introduced to compare MODCEL with well-known software for urban storm drain simulation: the Storm Water Management Model (SWMM), developed by the United States Environmental Protection Agency (US-EPA), was used as reference. La Riereta watershed is located in the old town of Sant Boi de Llobregat, near Barcelona. This urban basin has a superficial area of nearly 180,000 m², with a high level of impervious surfaces and medium slope values. Runoff generated by rooftops is dropped directly to the streets, where a large set of grate inlets is located on the sidewalks to ensure the capture of the superficial flows. The sewer of "La Riereta" is a combined system, consisting mostly of circular conduits with variable diameters. The majority of the pipes are made of concrete. Figure 8 shows a map of this watershed. According to its topographic characteristics, 17 sub-catchments were defined in a topological scheme for SWMM modelling purposes, also shown in this figure. The same division was assumed in MODCEL, to test its capabilities under the same conditions and without increasing the level of detail. In general terms, the manholes were taken as cells, and storm drain links were written between each pair of manholes, connecting them. Then, the 17 sub-catchments were defined as urban surface cells; they were connected with the manholes by inlet links, and they were also connected with each other by surface flow links. It is possible to note that both models performed reasonably well, representing adequately the flood peaks, peak phase and hydrograph form. Simulated volumes, however, are a little greater than the observed ones.
Case Study of Noale-MODCEL and the Simulation of Complex Flood Plains
The main purpose of bringing this last case here is to illustrate the model's capabilities when representing flood plains with a complex topology, in which artificial structures heavily change the natural patterns and create several local disconnected flows.
In particular, the proposed example refers to Noale, a city in the Venetian continental plains, in Italy, where urban floods occur frequently and this phenomenon is driven by anthropic actions that modified the watershed with artificial hydraulic structures. This study was conducted in the context of the European Project SERELAREFA-SEmillas REd LAtina Recuperación Ecosistemas Fluviales y Acuáticos (Seeds of a Latin American Network for the Restoration of Fluvial and Aquatic Ecosystems), funded by the EU programme FP7 IRSES-PEOPLE 2009. The main aim of this project was to improve the way watercourses are managed by achieving benefits for both the environment and socio-economic activities. SERELAREFA fostered the adoption of the river restoration concept and developed several case studies that were supported by mathematical modelling. The project started in September 2010 and lasted four years. Partners from Italy, Spain, Mexico, Chile and Brazil were involved in several research activities regarding the use of the river restoration approach to restore riparian areas, recover hydrological and morphological processes, and control floods, improving river ecosystems in general, as well as the city environment in the urban cases. General coordination was assigned to the Centro Italiano per la Riqualificazione Fluviale-CIRF (Italian Centre for River Restoration).
Rationale of Case Study
The lowland area of Mestre/Venice is naturally subject to harsh flooding, as a significant part of it lies at low absolute elevations, and even below average sea level. Consequently, it is artificially drained and it has been heavily modified over time. Dramatic events in 2006 and 2007 led to a mandate for a special Commissioner, who issued a specific law that introduced the "hydraulic invariance" concept. The idea is that if a new building/infrastructure would increase the peak discharge of a reference flood event (return period of 50 years), owing to the loss of natural storage volume and infiltration capacity, then building permission can be granted only provided that suitable compensatory measures are put in place to keep the original peak value under control. This concept has been fully developed by the Piano Territoriale di Coordinamento Provinciale (Territorial Coordination Provincial Plan) [48], approved by the Venetian province administration (December 2010), which stated the need to elaborate specific Water Plans.
More recently, guidelines for the ecological restoration of the extensive network of natural and artificial irrigation and drainage canals [49] were issued, and several concepts of sustainable water management and river restoration were included, much in line with SERELAREFA philosophy.
Within the framework of the SERELAREFA project, a first study trip took place in September 2010, during which the project partners could understand the problems faced in that region. After this visit, the group of the Federal University of Rio de Janeiro took part in a relevant Italian conference (Acqua e Città, held in Venezia, 2011) and then in a specific workshop in Mestre. This workshop, hosted by the Consiglio di Bacino Laguna di Venezia (Venetian Lagoon Watershed Council), included the participation of several key actors, among them the technical staff for hydro-geological defence of the Venetian Province. On this occasion, the modelling framework of MODCEL was presented and this event opened a door for setting up an Italian case study around the Venetian area, conceived as an academic opportunity for applied research and know-how exchange.
Noale
Noale is a city with origins in ancient times. From the middle of the 1800s until today, great changes have taken place and its population has approximately quadrupled. The centre of the city is flooded by the Marzenego River, where one of its branches overflows the dyke of the historic city castle's moat. The discharges that overtop the dyke flow overland to the south, towards the Roviego River.
Urban growth and consequent land use modifications generated an increase in the hydraulic risks, reaching important central areas and affecting a greater exposed population. Today, flood risk management is of prime concern for the protection of properties and economic assets. The first step in this path refers to understanding and mapping the interrelations between the city and the river. Therefore, this first step is the main objective of this case study, which also intends to illustrate MODCEL's capability to map the current flooding situation in this complex flood plain.
General Hydraulic Description
The Marzenego River passes through Noale's centre and defines the main river course of the city. However, this watershed is far from typical and very few natural characteristics (if any) are preserved. The Marzenego River runs from northwest to southeast. In the north region, the Dragonziolo River is its main tributary on the left bank. South of the Marzenego runs the Roviego River. Figure 11 shows a general plan view of this region and Figure 12 highlights some points located and numbered from 1 to 4 in Figure 11.
All rivers running west to east in this region are contained by dykes on the right bank, so that the riverine areas that should be connected to a river by its right bank are artificially drained to the left bank of the next river to the south. The dyke in the centre of the city, associated with the castle moat, has a low height (approximately 50 cm) and frequently overflows.
Rivers and channels running north to south may experience "river crossings", either by siphons or by elevated channels (similar to aqueducts over earthworks), as shown in Figure 13. The main urban storm drain passes below the Marzenego River, reaching a channel that outflows into the Roviego River.
The Marzenego River also presents a series of watermills along the main river course, which limit the discharges flowing through these reaches. There are also some pumping stations and gates that artificially control (and change) the flows in this complex network.
Figure 13. Example of a complex hydraulic site in the watershed, in a reach upstream of the city centre: (a) Marzenego River running between dykes, with a deviation structure working for flood alleviation standing on the right bank; (b) deviation channel, looking at the deviation structure from downstream to upstream; note that on the left bank it is possible to see the outfall of a channel that crosses below the Marzenego River; (c) schematic view of the site, showing the river, its dykes, the deviation weir, the deviation channel and a former tributary passing below the Marzenego River, also going to the deviation channel; and (d) physical interpretation of how the system works and how it is modelled.
Modelling Noale's Case
The Marzenego River was modelled from Molino Cosma to the city of Robegano. The upstream reach of the Marzenego River, up to the city of Castelfranco, was transformed into a boundary condition. The upper reach of the Dragonziolo River and the Scolo Fossalta were also taken as boundary conditions. The Roviego River was modelled as an outfall: the whole area between the Marzenego and the Roviego may drain to this river, and all the drainage network conducting waters into it may discharge there, but the Roviego hydrodynamics was not represented.
The topographic information was obtained from maps at the scale of 1:5000, from the Segreteria Regionale All'Ambiente e Territorio (Territory and Environment Regional Secretariat). The modelled area covered approximately 42 km² and 20 km of the Marzenego River, with a medium slope of 0.0006. The time of concentration for the whole watershed was estimated as 12 h, using the Kirpich formula.
The model implemented in this study has 63 channel cells representing the Marzenego River and another 54 cells representing the other watercourses. There are also 219 cells representing the flood plains and the city of Noale, while 11 cells represent the storm drains, totalling 347 cells. It is important to stress that this model includes only the main storm drains. Figure 14 shows the watershed, including the boundary conditions and the cell division, while Figure 15 shows the cell division over an aerial view.
A compilation of available information on Venetian floods was made, resulting in the map shown in Figure 16. The flooding event of 2006 was used to calibrate the model, using the rainfall measured. The envelope of important flooding events, corresponding to their superimposed effects, was used to validate the model. In this latter case, a rainfall of 50 years of return period was introduced uniformly in the model intending to reproduce this larger flood map.
Two rainfall gauges were taken as representative of the modelled area: Istrana (14 km from Noale's city centre) and Stra (14 km from Noale's city centre) [50].
Considering a 24 h event, but introducing a time variation component, using the information of the rainfall values occurring in 1, 3, 6 and 12 h, a design rainfall of 50 years of return period [51] was built, using the alternate blocks method from the US Soil Conservation Service. The resulting rainfall was defined for time intervals of 0.5 h, intending to represent a critical situation for the whole basin and also for the sub-basins with different times of concentration. That is, if we are working with a storm drain, in a street in the middle of the city, the higher rainfall intensity related to a single time interval (of 30 min) will be critical at the local scale, while the composed rainfall intensity for the entire modelled basin scale (12 h) will also be critical. Figure 17 shows the design rainfall built for the return period of 50 years. Figure 19 shows the inundation map for a rainfall event with 50-year return period. It is possible to see that the modelled results are adherent in both cases. When considering a rainfall with 50-year return period distributed over the entire basin, the basic intention was to reproduce (for this representative rainfall intensity) the enveloped results of all previous important observed floods. This aim was reached and the results shown in Figure 19 are quite similar to those shown in Figure 16. An important thing to add is that this result was submitted to the appreciation of the Venetian Lagoon Watershed Council and the local staff considered that the general flooding representation and the paths of the flows were quite accurate.
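As an illustration of the alternate blocks construction described above, the Python sketch below builds a 24 h hyetograph in 0.5 h intervals; the depth-duration relation for the 50-year return period is represented by a hypothetical function idf_depth_mm, since the actual IDF information used in the study is not reproduced here.

```python
def alternating_blocks(idf_depth_mm, duration_h=24.0, dt_h=0.5):
    """Build a design hyetograph (mm per interval) by the alternate blocks method.

    idf_depth_mm(d) must return the cumulative rainfall depth (mm) expected over
    a duration of d hours for the chosen return period (a hypothetical input).
    """
    n = int(duration_h / dt_h)
    cumulative = [idf_depth_mm((i + 1) * dt_h) for i in range(n)]
    increments = [cumulative[0]] + [cumulative[i] - cumulative[i - 1] for i in range(1, n)]
    increments.sort(reverse=True)                  # largest block first

    hyetograph = [0.0] * n
    centre = (n - 1) // 2
    for rank, block in enumerate(increments):
        # place blocks alternately to the right and to the left of the central interval
        offset = (rank + 1) // 2
        idx = centre + offset if rank % 2 else centre - offset
        hyetograph[idx] = block
    return hyetograph
```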
Concluding Remarks
Mathematical models are useful tools for managing drainage systems, especially when flooding problems produce large inundation areas. In these cases, the drainage system tends to interact with the urban landscape, and urban structures may act hydraulically, composing a new flow pattern distinct from that predicted in the original project. Most times, the superficial processes may play a major role in the flood problem, being affected by urban structures and local obstacles in a complex arrangement. This is where the main potential of MODCEL arises, focusing on the interactions between surface flows and drainage network flows, and also allowing the introduction of typical urban structures in this representation. This is an important feature to consider in this kind of situation, providing the possibility to explore different integrated flood control alternatives by placing different types of interventions over the basin landscape, even outside the drainage network.
In this paper, MODCEL was benchmarked in a 2D test proposed by the British Environment Agency, showing water level results compatible with 2D models, but using only 163 cells, against 97,000 elements. Then MODCEL was compared with a storm drain model to show its capacity to represent the 1D branched network. Finally, MODCEL was applied to a complex watershed in the lowland area of Mestre/Venice, where the city of Noale lies in the Marzenego River basin. The results were compared with known flooding events. However, the case of Noale only allowed a spatial calibration (flooded area) and not its temporal evolution. The simulation of this watershed, in particular, was important to show MODCEL's capability to deal with a mixed and complex setup, where a plain territory interacts with artificial structures and non-natural flow paths, including the operation of gates, pumping stations and channel crossings (where overflows cause discharge transpositions from one watershed to another). MODCEL showed itself able to reproduce this great variety of hydraulic patterns in the urban landscape, as well as the set of hydraulic structures working jointly with the natural and artificial drainage network. Although it uses a relatively simple quasi-2D scheme, the physical interpretation phase that is needed to use this type of model allowed obtaining adequate and representative results, with low computational costs and less information needed. The main weakness of this proposal refers to great flooding areas, where significantly high flood levels occur (producing a continuous and extensive flooding surface). In this case, real 2D models are expected to produce better results. | 2019-04-27T13:04:52.031Z | 2017-06-21T00:00:00.000 | {
"year": 2017,
"sha1": "0c48f73dc4055f041791f4190430d357ee61608e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4441/9/6/445/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8eef78b7a28029d9d3ebe595bb01e80195cda14d",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Geology"
]
} |
18069793 | pes2o/s2orc | v3-fos-license | Radiative Corrections to P-Levels in the Two-Body QED Problem
The physical origin of the $m\alpha^5$ radiative corrections to $P$-levels in the two-body QED problem is elucidated. Then we demonstrate that the next order, $m\alpha^6$, corrections to those levels are due to the anomalous magnetic moment only.
1. The problem of $m\alpha^6$ corrections to $1s2p\ ^3P_J$ helium levels was treated numerically many years ago [1,2,3,4]. Corrections of the same order to positronium $P$-levels were recently calculated analytically [5]. Those corrections are strongly dominated by the relativistic effects generated by the $(v/c)^4$ expansion terms. The true radiative corrections are numerically small, being suppressed by the "geometrical" factor $1/\pi^2$. It was assumed in Refs. [2,5] that such corrections are due only to the anomalous magnetic moment contributions.
However, quite recently radiative corrections of a different type were discussed in relation to the same problems [6]. Though those contributions are small numerically, the problem is certainly interesting from a general point of view. Moreover, in helium such a contribution would be essential for the comparison with experiment.
In the present note we demonstrate accurately that it is indeed only the anomalous magnetic moment contributions that generate true radiative corrections of order $m\alpha^6$ to $P$-states. Unfortunately, paper [6] is too concise and lacks calculational details; therefore we cannot point out the exact cause of disagreement.
2. Let us start with the discussion of the $m\alpha^5$ radiative corrections to $P$-levels. There are three sources of terms. First, the infrared divergence in the scattering amplitude, i.e. in the electron charge radius (see Fig. 1; here and below the dashed line refers to the Coulomb field, the wavy one to a transverse photon). Being cut off at atomic energies, the would-be divergence generates both the $\log\alpha$ term and the Bethe logarithm $L_{n0}$ in the $S$-state Lamb shift. For $P$-states the same divergence leads only to the corresponding Bethe logarithm $L_{n1}$.
The second kind of correction in the two-body problem originates from the double exchange diagrams presented in Figs. 2-3. Diagrams in Fig. 2 with one Coulomb and one magnetic exchange are also infrared divergent, generating again $\log\alpha$ and $L_{n0}$ in $S$-states, and $L_{n1}$ in $P$-states (see, e.g., Ref. [7]). On the other hand, those diagrams are effectively cut off from above at atomic momenta $q$. Then, diagrams in Fig. 3 with double magnetic exchange are cut off from below at $q$. The $\log q$ contributions from the diagrams in Figs. 2-3, being transformed into the coordinate representation, generate a potential $\sim r^{-3}$ which has a nonvanishing (and convergent) expectation value in $P$-states.
And finally, $m\alpha^5$ radiative corrections to $P$-levels are generated by the $\alpha/2\pi$ contribution to the anomalous magnetic moment.
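As a reminder of the standard QED input behind this statement (not a result derived in this paper), the electron anomalous magnetic moment at one loop is

$$ a_e \equiv \frac{g-2}{2} = \frac{\alpha}{2\pi} + O(\alpha^2), $$

so multiplying the fine-structure splittings of order $m\alpha^4$ by $a_e$ produces exactly the $m\alpha^5$ anomalous-moment contribution referred to above.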
3. Consider now all potential sources of order $m\alpha^6$ corrections to the energy shifts of $P$-levels, in the same order as we have done above for the corrections of order $m\alpha^5$.
First of all, the two-loop contribution to the electron charge radius is infrared finite. This can be most easily demonstrated using the Fried-Yennie (FY) gauge for radiative quanta. Therefore there is no analog of the Bethe logarithm $L_{nl}$ either for $S$- or for $P$-states.
As to the double infrared divergence in the scattering amplitude connected with the emission of two bremsstrahlung quanta (see Fig. 1 for the case of one bremsstrahlung quantum), it contains an additional power of momentum transfer squared in comparison with the case of one bremsstrahlung photon and is thus capable of producing corrections no larger than $m\alpha^8$.
Let us pass now to the two-photon exchange generating, in the $m\alpha^5$ approximation, the $\log q$ contribution to the scattering amplitude and, correspondingly, the $r^{-3}$ term in the interaction potential in the coordinate representation. We are going to consider radiative corrections to these graphs and to prove that the respective diagrams do not produce corrections of order $m\alpha^6$ to the energy levels. Note first that insertion of a one-loop polarization in any of the exchanged photons in Figs. 2-3 provides an additional factor $q^2$ in the respective integral for the energy shift. This gets an additional factor $\alpha^2$ after integration. Hence, polarization insertions are irrelevant in order $m\alpha^6$ in the case of exchanged photons with atomic ($\sim m\alpha$) momenta.
The case of one-loop radiative insertions in the electron lines seems to be more involved. Once again, we confine for the moment our consideration to the case of small momenta of the exchanged quanta; large ($\sim m$) exchanged momenta will be considered below separately. First of all, in the case of two exchanged magnetic quanta the sum of radiative corrections to the Compton scattering amplitudes entering the diagrams in Fig. 3 vanishes. This is simply a direct consequence of the well-known low-energy theorem [8] for Compton scattering.
Consider next radiative photon insertions in the diagrams with one magnetic and one Coulomb quantum in Fig. 2. The FY gauge for the radiative photons is again most suitable for our goals since it provides the smoothest low-frequency behavior for all graphs. Let us start with insertion of the self-energy operator in the electron line in Fig. 2. The explicit expression for the renormalized self-energy operator in the FY gauge [9] (see also [10]) is proportional to the squared Dirac operator $(\hat p - m)^2$. Then it is easy to estimate the contribution of the graphs in Fig. 2 with an inserted self-energy operator. The Coulomb line may be swallowed up by the Schrödinger-Coulomb wave function, one of the Dirac operators in the numerator of the expression for the self-energy is canceled by the remaining electron propagator, and we are left with the product of the magnetic exchange and the Dirac operator between the Schrödinger-Coulomb wave functions. It is evident that the free Dirac operator $\hat p - m$ applied to the Schrödinger-Coulomb wave function then produces a factor $\alpha^2$. Therefore, the self-energy insertion in the electron line suppresses the previous order effect not by a factor of $\alpha$ but by $\alpha^3$.
Consider now radiative photon insertion in one of the vertices in Fig. 2. The respective anomalous magnetic moment contribution produces a correction of order $m\alpha^6$, as was mentioned above. All other terms in the one-loop vertex correction in the FY gauge contain at least one additional suppression factor (see [10]) which after loop integration leads to a contribution to the energy shift of order $m\alpha^7$. We also have to consider insertion of a spanning radiative photon. Once again, as was shown in [10], the respective diagrams contain in the FY gauge an additional suppression factor which turns into an additional factor $\alpha^2$, leading to a contribution of order $m\alpha^7$ which is too small to be of interest for us now.
As to the diagrams with multiple exchange by soft ($q \sim m\alpha$) Coulomb quanta, insertion of a radiative spanning photon leads only to the $m\alpha^5$ Bethe logarithm $L_{n1}$.
Consider finally graphs with two exchanged photons of high ($\sim m$) momenta. It is easy to see that in these diagrams all one-loop radiative insertions, either in the electron or in the exchanged photon line, produce corrections of order $m\alpha^6$ only for $S$-states.
Up to this moment we deliberately omitted graphs with the anomalous magnetic moment corrections. These graphs do produce the $m\alpha^6$ corrections to the $P$-levels. There are no other sources of corrections of this order.
to them at Penn State University where part of this work has been done. | 2014-10-01T00:00:00.000Z | 1994-07-19T00:00:00.000 | {
"year": 1994,
"sha1": "05fb9b28be9d270436800e85cf29a8bc2e4589a7",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/9407335",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "05fb9b28be9d270436800e85cf29a8bc2e4589a7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
271119483 | pes2o/s2orc | v3-fos-license | The BDNF Val66Met polymorphism and health‐related quality of life in youth with obesity
Abstract The brain derived-neurotrophic factor (BDNF) Val66Met polymorphism causes functional changes in BDNF, and is associated with obesity and some psychiatric disorders, but its relationship to health-related quality of life (HRQoL) remains unknown. This study examined, in youth with obesity, whether carriers of the BDNF Val66Met polymorphism Met-alleles (A/A or G/A) differed from noncarriers (G/G) on HRQoL. The participants were 187 adolescents with obesity. Ninety-nine youth were carriers of the homozygous Val/Val (G/G) alleles, and 88 were carriers of the Val/Met (G/A) or Met/Met (A/A) alleles. Blood samples were drawn in the morning after an overnight fast for genotyping. HRQoL was measured using the Pediatric Quality of Life core version. Compared to carriers of the Val66Met Val (G/G) alleles, carriers of the Met-alleles reported significantly higher physical HRQoL (p = 0.02), school-related HRQoL (p = 0.05), social-related HRQoL (p = 0.05), and total HRQoL (p = 0.03), and a trend for psychosocial HRQoL. Research is needed to confirm our findings and determine whether carriers of the BDNF Val66Met homozygous Val (G/G) alleles may be at risk of diminished HRQoL, information that can influence interventions in a high-risk population of inactive youth with obesity.
| INTRODUCTION
Obesity during adolescence is a serious public health concern given its high prevalence worldwide (World Health Organization, 2021), association with cardiometabolic complications (Freedman et al., 1999), and tracking from childhood to adulthood (DiPietro et al., 1994). Moreover, youth with obesity are often victims of weight-based teasing and bullying (Goldfield et al., 2010) and other forms of societal stigma, discrimination, and bias (Puhl & Latner, 2007), which has been shown to have untoward consequences on psychological health and well-being (Eisenberg et al., 2003).
Health-related quality of life (HRQoL) is an important outcome to study as it embodies the World Health Organization's (WHO) definition of health: to not only include physical health, but also social and emotional well-being (World Health Organization, 2002). Youth with obesity have lower HRQoL, and longitudinal studies show that obesity is a determinant of lower HRQoL rather than a consequence (Tsiros et al., 2009). Although the etiology of obesity involves a complex combination of genetic, environmental, and social factors (Kopelman, 2000), genetics contribute an estimated 40%-70% of variance in childhood obesity (El-Sayed Moustafa & Froguel, 2013), with single gene mutations or single nucleotide polymorphisms (SNPs) contributing between 1% and 5% of the variance (Blakemore & Froguel, 2010). Thus, identifying candidate SNPs that predispose youth with obesity to a diminished HRQoL would be important for informing screening and prevention.
Brain-derived neurotrophic factor (BDNF) is a neurotrophin integral to neuronal development, survival, and plasticity through the tropomyosin-related kinase B (TrkB) and p75 neurotrophin receptors (Noble et al., 2011). BDNF is expressed in the central nervous system (CNS), primarily in the hippocampus, striatum, brain stem, amygdala, cerebral cortex, cerebellum, and hypothalamus (Baj et al., 2016). Alterations in the valine (Val) to methionine (Met) substitution in the 5′ promoter region of the human BDNF protein, known as the BDNF gene (Val66Met single nucleotide polymorphism [G196A; SNP rs6265]), have a functional impact on BDNF levels via alterations in intracellular processing, trafficking and activity-dependent secretion that lead to deficiencies of BDNF (Chen et al., 2004; Egan et al., 2003). BDNF impacts many neurotransmitters related to mental disorders (Gratacòs et al., 2007; Hyman et al., 1991); thus, it is not surprising that the BDNF Val66Met polymorphism has been recognized as a potential biomarker for many psychiatric disorders (Gratacòs et al., 2007). Moreover, alterations in BDNF levels as a function of the Val66Met polymorphism have also been implicated in appetite regulation and food intake (Lebrun et al., 2006), eating disorders (Chen et al., 2004), obesity (Zhao et al., 2009), and metabolic disorders (Krabbe et al., 2007), providing evidence that BDNF might play an important role in the pathophysiology of these conditions. Many studies have suggested the BDNF Val66Met homozygous Met/Met allele (A/A) or heterozygous Val/Met alleles (G/A) were associated with a reduced risk of certain psychiatric disorders such as substance abuse and anxiety-related personality traits (Gratacòs et al., 2007; Lang et al., 2005; Sen et al., 2003), although other studies with other conditions show different relationships (Gratacòs et al., 2007). These varying associations may not be surprising given that, although correlated, positive indicators of well-being and indicators of psychopathology or ill-being each have unique antecedents and consequences as they represent distinct constructs, and should not be conceptualized as simply falling on opposite ends of the same continuum (Huppert & Whittington, 2003; Ryff et al., 2006). Thus, it is important to empirically examine how these BDNF alleles may be associated with important indicators of well-being, such as HRQoL, and not simply extrapolate findings from the psychiatric literature.
K E Y W O R D S
brain derived-neurotrophic factor (BDNF), health-related quality of life (HRQoL), obesity, polymorphism, youth

To our knowledge, no study has examined the relationship between the Val66Met polymorphism and HRQoL, either in adults or youth, with or without obesity. Accordingly, the aim of this study was to examine, in a sample of inactive youth with obesity, whether carriers of either one or two copies of the Met-alleles (A/A or G/A) would differ from noncarriers (i.e., Val G/G alleles) on HRQoL dimensions, including Physical-HRQOL, Social-HRQOL, Emotional-HRQOL, School-HRQOL, and overall HRQOL.
| Participants
The current study uses a cross-sectional design, representing a secondary analysis of baseline data from the Healthy Eating Aerobic and Resistance Training in Youth (HEARTY) exercise intervention, which examined the effects of exercise training on percent body fat as the primary outcome, as well as a broad set of physiological and psychological health indicators in youth with obesity (Alberga et al., 2012).
Inclusion criteria for participants in the HEARTY study included: being physically active less than 2 days/week, being postpubertal (Tanner stages IV-V), and aged 14-18 years with a BMI >95th percentile for age/sex and/or ≥85th percentile for age/sex with an additional diabetes or cardiovascular disease risk factor, as described elsewhere (Alberga et al., 2012). Exclusion criteria included participation in regular or structured exercise or sport activities done more than twice a week for more than 20 min during the previous 4 months, diabetes mellitus, use of any performance-enhancing medication, significant weight change (increase or decrease of ≥5% body weight) during the 2 months before enrollment, pregnancy at the start of the study, activity restrictions due to disease (unstable cardiac or pulmonary disease or significant arthritis), and other illness (e.g., eating disorders/clinical depression) judged by the participant or study physician to make participation in this study inadvisable.
This study received approval from the Research Ethics Boards at the Children's Hospital of Eastern Ontario (protocol #05/04E) and the Ottawa Hospital Research Institute (protocol #2004219-01H). Study protocols conformed to the Declaration of Helsinki (World Medical Association, 2013). The study began in March 2005 and was completed in June 2011. Informed consent was obtained from all individual participants' parents or legal guardians or next of kin in order to participate in the study.
| Design and procedures
For the baseline assessments, the research coordinator performed a complete medical, drug, and physical activity history as well as a physical examination. Clinical interviews were also performed to assess history of dieting, eating disorders, pubertal growth and development, and sedentary activities. Potential participants reporting any history of eating disorder (e.g., uncontrolled binging, binging and purging, anorexia) or clinical depression were excluded from the study (Alberga et al., 2012). Sociodemographic characteristics, pubertal status, lifestyle behaviors, and HRQoL were completed by self-reported measures in the laboratory, while body composition was quantified using MRI (Alberga et al., 2012).
Participants were genotyped as carriers of the Val/Val (G/G), Met/Met (A/A), and Val/Met (G/A) alleles. DNA was extracted from twelve-hour (overnight fasting) blood samples of approximately 20 mL of venous blood taken in the morning from a forearm or antecubital vein and stored in a freezer at −80°C. Samples were obtained at baseline before the run-in period. Isolation of genomic DNA from buffy coat samples was completed following the manufacturer's instructions (FlexiGene DNA Kit (250), Qiagen, Cat No. ID: 51206, Germany). DNA concentration was measured using a spectrophotometer (NanoDrop™ 2000, Thermo Scientific, Waltham, USA). PCR reactions were carried out using the following primers (P1: CCTACAGTTCCACCAGGTGAGAAGAGTG, P2: TCATGGACATGTTTGCAGCATCTAGGTA, P3: CTGGTCCTCATCCAACAGCTCTTCTATAAC and P4: ATCATTGGCTGACACTTTCGAACCCA), and the genotyping was based on the methods described by Sheikh and colleagues (Sheikh et al., 2010). The four primers amplify two allele-specific amplicons (253 and 201 bp) and the entire region as an internal control. The PCR reaction was carried out in a 25 μL reaction volume including 25 ng of genomic DNA template, the primers, 100 mmol/L of dNTP (Invitrogen, California), 3 mmol/L of MgSO4 (Invitrogen, California), 1× PCRx Amplification Buffer (Invitrogen, California), 1× PCRx Enhancer Solution (Invitrogen, California), and 1 U Taq DNA Polymerase (Invitrogen, California). The PCR cycling conditions used an initial denaturation temperature of 94°C for 5 min, followed by 30 cycles of 94°C for 45 s, 62.5°C for 60 s and 72°C for 60 s, and a final extension step of 5 min at 72°C. PCR amplicons were resolved on a 1.5% polyacrylamide gel, stained with BlueJuice™ Gel Loading Buffer (Invitrogen, California) and visualized on the ChemiDoc™ Gel Imaging System (Bio-Rad Laboratories, Mississauga, Canada).
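The genotype call in this kind of allele-specific (tetra-primer) assay follows directly from which bands appear on the gel. The sketch below illustrates that logic; note that the mapping of the 253 bp and 201 bp amplicons to specific alleles is a hypothetical assumption here and is not stated in the text, so only the presence/absence logic is shown.

```python
def call_genotype(band_253_bp: bool, band_201_bp: bool, control_band: bool) -> str:
    """Call a genotype class from the two allele-specific amplicons.

    The full-length product is an internal PCR control, so its absence
    invalidates the reaction. Which amplicon marks Val and which marks Met
    is an assumption left unspecified here.
    """
    if not control_band:
        return "failed"          # no internal control: repeat the PCR
    if band_253_bp and band_201_bp:
        return "heterozygous"    # both allele-specific products present (G/A)
    if band_253_bp or band_201_bp:
        return "homozygous"      # only one allele-specific product present
    return "failed"              # control present but no allele-specific product
```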
| Primary dependent variable
Health-related quality of life. The Adolescent Core version of the Pediatric Quality of Life inventory (PEDSQL) was used to measure HRQoL. This is a 23-item self-report survey consisting of generic core scales encompassing physical functioning (eight items), emotional functioning (five items), social functioning (five items), and school functioning (five items). A total HRQoL scale score is derived from the mean of all 23 items. A psychosocial HRQoL summary score is derived from the mean of the aggregated items from the emotional, social and school functioning subscales. Scores on all scales range from 0 to 100, with higher scores reflecting more positive HRQoL. The PEDSQL has good reliability and has been validated in healthy youth and in a variety of pediatric clinical populations (Varni et al., 2002), including adolescents with obesity (Schwimmer et al., 2003).
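A minimal sketch of the scoring just described is given below. It assumes item responses have already been transformed to the 0-100 metric (higher = better HRQoL) and simply takes the subscale and summary means over the 8 + 5 + 5 + 5 = 23 items.

```python
from statistics import mean

def pedsql_summary(physical, emotional, social, school):
    """Compute PEDSQL summary scores from lists of 0-100 item scores."""
    psychosocial_items = emotional + social + school          # 15 aggregated items
    return {
        "physical": mean(physical),
        "psychosocial": mean(psychosocial_items),
        "total": mean(physical + psychosocial_items),          # mean of all 23 items
    }
```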
| Demographic and developmental variables
Background socio-demographic information was obtained from all participants, including age, sex, ethnicity, and highest level of parental education as measured by self-report.
| Anthropometrics
Height and weight were recorded with a manual stadiometer and scale, respectively, with participants wearing light clothing and no shoes. Waist circumference was measured at a level midway between the lowest rib and the top of the iliac crest, as previously described (Alberga et al., 2012). Body composition was assessed by MRI with a 1.5-T system (EchoSpeed, signal 11 version; GE Medical Systems). Participants lay prone for whole-body cross-sectional images using protocols by Ross and colleagues (Ross et al., 1992). The MRIs were analyzed using Slice-O-Matic™ software, version 4.3 (Tomovision, Magog, Canada). Fat-free mass (FFM) is defined as total lean tissue mass, which includes all fat-free skeletal muscle, organs, intestines, and bones, without adipose tissue, while fat mass (FM) represents the amount of visceral and subcutaneous adipose tissue. Percent body fat was calculated as FM divided by total body mass (FM + FFM), multiplied by 100.
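The percent body fat formula above is a simple ratio; the following sketch just makes the arithmetic explicit, with hypothetical example masses.

```python
def percent_body_fat(fat_mass_kg: float, fat_free_mass_kg: float) -> float:
    """Percent body fat = FM / (FM + FFM) * 100, as described above."""
    total_mass = fat_mass_kg + fat_free_mass_kg
    return 100.0 * fat_mass_kg / total_mass

# Hypothetical example: 35 kg fat mass and 55 kg fat-free mass.
print(round(percent_body_fat(35.0, 55.0), 1))
```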
| Statistical analyses
Baseline characteristics of the sample were computed and are presented in Table 1 using means and standard deviations for continuous data and frequencies and percentages for categorical data. Since the frequency of the homozygous Val66Met Met/Met (A/A) genotype is low (1%-8%) in populations primarily comprised of youth with obesity from European descent (Shen et al., 2018), we combined this group (n = 6) with the carriers of the Val/Met (G/A) alleles. These carriers of the Met-alleles (n = 88) were compared to carriers of the homozygous Val/Val (G/G, n = 99) genotype on HRQoL dimensions and other continuous variables using independent t-tests, or Chi-Square tests for categorical data. Because there were no group differences on demographic, anthropometric, or behavioral variables, group differences on HRQoL indicators were evaluated by univariate statistics (independent t-tests) to conserve statistical power rather than statistically controlling for these variables using multivariate modeling, which would reduce power unnecessarily. Effect sizes were based on Cohen's d formula, where d = 0.20, d = 0.50, and d = 0.80 indicate small, moderate, and large effects, respectively. The genotype distribution of the current sample was within Hardy-Weinberg equilibrium based on a Chi-square test (p = 0.65) in comparison to a population of youth from European descent (Skledar et al., 2012). Statistical significance was defined as a two-tailed alpha <0.05. Analyses were conducted using SPSS, version 24.
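For readers who want to reproduce the two quantities named above, the sketch below shows one common pooled-standard-deviation form of Cohen's d and a 1-degree-of-freedom Hardy-Weinberg chi-square computed from genotype counts; the exact variants used in the study (and its comparison to an external reference population) may differ, so treat this as illustrative.

```python
from math import sqrt

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d with a pooled SD (0.2 / 0.5 / 0.8 ~ small / moderate / large)."""
    pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def hwe_chi_square(n_gg, n_ga, n_aa):
    """Chi-square statistic for Hardy-Weinberg equilibrium from genotype counts."""
    n = n_gg + n_ga + n_aa
    p = (2 * n_gg + n_ga) / (2 * n)                       # G-allele frequency
    expected = [p * p * n, 2 * p * (1 - p) * n, (1 - p) ** 2 * n]
    observed = [n_gg, n_ga, n_aa]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical genotype counts, not the study's data.
print(hwe_chi_square(120, 60, 10))
```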
| RESULTS
A total of 187 participants (out of 304, or 62% of the full baseline HEARTY sample) provided informed consent for genetic analysis, had been genotyped for the BDNF Val66Met polymorphism, and provided complete HRQoL and demographic data (participants with missing data were excluded from the analyses). Most of the sample (74%) was white. Eighty-three percent reported coming from parents who completed some university or community college.
Table 1 shows the Val66Met polymorphism allele frequency breakdown for the sample. Very few participants were carriers of the homozygous (A/A) Met allele (N = 6), with approximately equal proportions of participants carrying either the homozygous G/G (N = 99) or heterozygous Val/Met (G/A) variants (N = 88). There were no significant group differences on age, sex, parental education, ethnicity, adiposity or physical activity. The sample was, on average, 15.5 years old, living with obesity, primarily white and coming from well-educated parents/families. Approximately 67% of the sample was female. On average, the sample was not physically active.
Table 2 shows that, compared to carriers of the Val66Met G/G allele, carriers of the Met-alleles (A/G or A/A) reported significantly higher Physical, School, Social and total HRQoL, with a trend toward better Psychosocial HRQoL.
| DISCUSSION
The present study was the first to examine the effect of the BDNF polymorphism on HRQoL amongst youth with obesity. We found that, compared to carriers of the homozygous Val alleles (G/G), carriers of the Met-alleles reported significantly greater overall HRQoL, along with several domains of HRQoL, such as Physical, School and Social HRQoL, with a trend toward better Psychosocial HRQoL. BDNF Met-allele carriers in our study reported significantly higher Physical-HRQoL than homozygous Val/Val carriers. This may not be surprising given that BDNF has been shown to play an important role in locomotor activity in animals (Kernie et al., 2000) and in humans (Marosi & Mattson, 2014). Acute and sustained physical activity increases serum BDNF levels in adults (Dinoff et al., 2016, 2017) as well as in youth with (Lee et al., 2014) and without (Jeon & Ha, 2017) obesity. Interestingly, the more positive perceptions of HRQoL physical functioning and abilities associated with the Met alleles were observed in our study even though physical activity levels did not differ by BDNF genotype. Although no direct comparisons exist in the literature, somewhat consistent findings demonstrated that those with at least one copy of the Met allele reported greater increases in intrinsic motivation during exercise (Caldwell Hooper et al., 2014). Our findings, coupled with Caldwell Hooper et al. (2014), suggest that the BDNF gene may influence perceptions about physical competencies and/or the motivational aspects of physical activity. This is an important area of future inquiry, given that physical self-perceptions and abilities predict physical activity duration and intensity (Bauman et al., 2012), raising the possibility that youth with obesity with one copy of the Met alleles could be more adherent to a physical activity intervention.
Given the established role that BDNF plays in brain health and plasticity (Noble et al., 2011), and that the BDNF gene regulates the secretion of BDNF in an activity-dependent manner (Chen et al., 2004; Egan et al., 2003), the BDNF gene has been implicated in playing an important role in the development of cognition and memory (Egan et al., 2003). Although no studies, to our knowledge, have examined the BDNF gene in relation to academic or school functioning, some studies exist in neurocognition.
A few studies showed that carriers of at least one copy of the Met allele performed more poorly on cognitive tasks (Hariri et al., 2003; Savitz et al., 2006). Our finding that Met-allele carriers reported greater school-related HRQoL is consistent with those of Foltynie et al. (2005), who reported that carriers of the Met allele showed better performance on a measure of executive function, while another study showed null associations (Tsai et al., 2008). Future research is needed to verify whether carriers of the Met-alleles have better or worse academic performance and cognition based on objective measures, as well as perceived attitudes and abilities relating to scholastic behaviors and functioning. We found that Met-allele carriers reported greater overall HRQoL and Social-HRQoL, and a trend for improved Psychosocial HRQoL, compared to carriers of the homozygous Val allele. The overall HRQoL scores in both groups were significantly lower than for nonclinical populations of youth, which was expected given the findings that youth with obesity have HRQoL scores that are comparable to youth undergoing cancer treatment (Schwimmer et al., 2003). However, it is important to note that homozygous Val (G/G) carriers scored 4-5 points lower on overall HRQoL and on several sub-domains, which is a difference considered to be clinically meaningful (Varni et al., 2002). Thus, carriers of the homozygous Val (G/G) allele may be particularly vulnerable to reduced HRQoL, which is a serious clinical concern given that youth with obesity already face widespread weight-based teasing, bullying and discrimination (Puhl & Latner, 2007), known to lead to greater psychosocial distress and diminished emotional well-being (Eisenberg et al., 2003; Szwimer et al., 2020). Our findings are consistent with systematic review data showing that carriers of the Met-allele show a reduced risk of certain psychiatric disorders such as substance-related disorders (Gratacòs et al., 2007) and anxiety-related personality traits (Lang et al., 2005; Sen et al., 2003), although other studies show Met-allele carriers are at greater risk of other disorders such as eating disorders and schizophrenia (Gratacòs et al., 2007). This varied pattern of results highlights the complex involvement that BDNF has in the pathophysiology of mental states/disorders.
On a mechanistic level, BDNF is distributed widely in the CNS, including many regions of the brain that regulate mood and behavior. BDNF also gives trophic support to many neurotransmitter systems known to impact mental disorders (Croll et al., 1994; Hyman et al., 1991; Knusel et al., 1991; Mamounas et al., 1995), and human (Polyakova et al., 2015) and animal (Angelucci et al., 2005) data implicate BDNF in the response to psychotropic medications. Given that the regulated, activity-dependent secretion of BDNF is severely reduced (by about 30%) in Met BDNF carriers (Chen et al., 2004; Egan et al., 2003), and that the majority of BDNF protein is released from the regulated secretory pathway in neurons (Egan et al., 2003), it has been theorized that Met carriers are at greater risk of depression and other psychiatric disorders, given that low BDNF is considered a risk factor (Groves, 2007). Although there is consistent support for this hypothesis in the area of eating disorders and schizophrenia (Gratacòs et al., 2007), and consistent with our findings, data from population-based studies and meta-analyses show that Met carriers have a lower risk of obsessive- and compulsive-related disorders (Gratacòs et al., 2007), while the evidence for relationships to depression is mixed in both youth (Xia & Yao, 2015) and adults (Groves, 2007; Verhagen et al., 2010). This inconsistency has led to calls to revisit the original hypothesis that the hypofunctioning BDNF Met-alleles serve as biological risk factors for depression and other forms of psychiatric illness or ill-being (Groves, 2007). One theory postulated to explain these inconsistent results is that the allele (Met vs. Val), or its neurobiological expression, that confers risk may change across development, resulting in different trajectories and associations between disorders (Casey et al., 2009). More specifically, carriers of the Met-allele showed greater brain connectivity between the paralimbic neural network and the neocortical association areas and amygdala, brain regions that govern the processing of sensory and emotional stimuli. Thus, researchers postulate that the greater connectivity between the cortico-limbic structures may underlie the increased risks associated with this Met-allele in some disorders (i.e., schizophrenia), while conferring a protective effect against others (i.e., substance abuse, anxiety) (Thomason et al., 2009). In addition, the impact of any single polymorphism on complex diseases (and perhaps HRQoL) may be moderated by demographic factors (Gratacòs et al., 2007; Verhagen et al., 2010) and/or interact with many other environmental factors (e.g., life stress or childhood trauma) (Hosang et al., 2014; Zhao et al., 2018), resulting in complex gene-environment or epigenetic effects. Regardless of the mechanisms, our study is the first to establish that genetic variations in BDNF are associated with HRQoL, an important finding given that good health is not simply reflected in physical health or in the absence of disease, but includes good quality of life (World Health Organization, 2002).
| Limitations and strengths
Our sample was limited in size, and primarily comprised of white, physically inactive adolescents with obesity presenting for weight loss who were mostly offspring of well-educated parents, so the results may not be generalizable to all adolescents, with or without obesity, with higher levels of physical activity. Second, our data are cross-sectional in nature; thus, causality cannot be inferred. Lastly, although there were no differences between carriers and noncarriers of the Val66Met-alleles on demographic, anthropometric and/or lifestyle factors, it is possible that additional environmental factors (stress, trauma, etc.) that were not accounted for in our analysis may have impacted the results. The strengths of our study include its novelty and a high-risk sample of youth with obesity, who exhibit a greater incidence of BDNF-related complications such as neurocognitive deficits (Liang et al., 2014), metabolic dysregulation (Krabbe et al., 2007), and reduced HRQoL (Tsiros et al., 2009) compared to peers without obesity. Additionally, the PEDSQL is a widely used measure of HRQoL that has been validated in healthy and clinical populations of youth (Schwimmer et al., 2003; Varni et al., 2002), strengthening the internal validity of the findings.
| CONCLUSION
In conclusion, we found that carriers of the Met-alleles of the BDNF Val66Met polymorphism reported statistically and clinically significant greater overall HRQoL, Social-HRQoL and School-related HRQoL, and a trend for better Psychosocial-HRQoL, compared to carriers of the homozygous Val-allele. Given that youth with obesity typically exhibit lower HRQoL, and longitudinal studies show diminished HRQoL is more likely a consequence of obesity rather than a determinant, future research is needed to verify our initial findings and determine whether carriers of this BDNF genotype are genetically predisposed to developing diminished HRQoL, as well as to elucidate the mechanisms that may underlie these relationships. In the era of personalized medicine, this information could potentially inform early intervention efforts in a high-risk population to promote better quality of life and mental health trajectories during development and into adulthood.
AUTHOR CONTRIBUTIONS
G.S.G was responsible for conceptualization, funding acquisition, investigation, methodology, and writing-original draft; J.D.C was responsible for writing-review and editing; R.J.S was responsible for conceptualization, funding acquisition, investigation, methodology, project administration and writing-review and editing; G.P.K was responsible for conceptualization, funding acquisition, investigation, methodology, project administration and writing-review and editing; D.P was responsible for writing-review & editing; M.N was responsible for data curation and writing-review & editing; A.S.A was responsible for data curation, project administration, and writing-review & editing; S.D was responsible for data curation, formal analysis, and writing-review & editing; D.B.G was responsible for writing-review and editing; H.T was responsible for writing-review & editing; K.R.S was responsible for writing-review & editing; and Jeremy Walsh was responsible for writing-review & editing.
significantly higher Physical-HRQoL (p = 0.02), School-related HRQoL (p = 0.05), Social-related HRQoL (p = 0.05), and total HRQoL (p = 0.03), and a trend for Psychosocial-HRQoL. Research is needed to confirm our findings and determine whether carriers of the BDNF Val66Met homozygous Val (G/G) alleles may be at risk of diminished HRQoL, information that can influence interventions in a high-risk population of inactive youth with obesity.
Table 2. Comparison of Health-Related Quality of Life (HRQoL) Indicators by BDNF Val66Met Allele.
a HRQoL, health-related quality of life. | 2024-07-14T05:13:19.748Z | 2024-07-01T00:00:00.000 | {
"year": 2024,
"sha1": "998a42577dadb7e514078578ce48b4bfe0ebcf45",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "998a42577dadb7e514078578ce48b4bfe0ebcf45",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
181645010 | pes2o/s2orc | v3-fos-license | Board Characteristics and Intellectual Capital Performance: A Comparison of Service vs. Manufacturing Firms in India
Purpose: This study aims at analysing the board characteristics of select manufacturing and service firms in India. It attempts to explore whether these characteristics influence the intellectual capital (IC) efficiency of service sector firms or manufacturing firms. Design: The study uses panel data and multiple regression to examine the firms' performance. The period of study is three years, from 2015-16 to 2017-18; thirty firms each from the manufacturing and service sectors are taken for the study. Findings: The empirical evidence is quite interesting. The board characteristics of the two groups differ in several respects, and the impact of board characteristics on IC performance is greater in the service sector than in the manufacturing sector. Research limitations/implications: The study focuses only on select firms from the manufacturing and service sectors as a preliminary study; it can be expanded to cover more sectors and industries. Originality and value: There are several research studies that explore the impact of corporate governance on the financial performance of firms; however, there are none that look into its impact on the intangible performance of firms.
1. Introduction
More and more firms across the nation are acknowledging the role of intangibles in value creation. It was a prevalent notion that intangible assets were only for the service sector to measure and manage, and that manufacturing firms should focus only on fixed and financial assets. Today, however, it is widely recognized that intangibles, or intellectual capital, are a business asset for all firms irrespective of the sector they belong to.
Corporate governance ensures the commitment of the board towards its role in increasing and preserving stakeholders' wealth. The regulator frames guidelines and mandates for the firm's board and monitors it so that there is little deviance from the set path on the part of the directors. The research literature has firmly established the relationship between firms' corporate governance characteristics and their financial performance. There is theoretical as well as empirical evidence of the strong impact that board members have on the profitability and market valuation of firms. The role of board characteristics in the intangible performance of firms, however, is yet to be confirmed through robust research results.
Firms face severe competition in the present times and try out product or process innovations to preserve their competitive advantage over their competitors. In such circumstances, intangibles such as relational capital, human capital and structural capital play an important role in enhancing the firm's competitive edge. The board members have to ensure that they sharpen the firm's ability to compete in the marketplace. In this context, if the relationship between board characteristics and intangible performance is established through empirical evidence, firms can use the data to strengthen their boards and revamp board composition to the optimal level so that it has a positive impact on the intangible performance of the firms and contributes towards value addition in the long run. Thus, the results of the study would be useful not only to the owners of firms, but also to policymakers and regulators.
The first section of the paper introduces the concept; this is followed by a review of earlier works in this area. The objective of the paper is spelled out in the next section. A detailed account of the variables used in the study along with data sources and models is presented. This is followed up with results of the study and a detailed analysis of the same. The major conclusions and summary of the findings are put forward in the last section.
Review of Literature
Hermes et al. (2011) attempt to analyze whether corporate governance practices in different companies result in differences in their performance. They analyzed 124 firms listed on the Athens Stock Exchange and find that performance is positively related to good governance.
Phan and Vo (2013) took 77 companies listed in Vietnam and evaluated whether CG characteristics were useful in improving the financial performance of the firms over the period 2006-2011. They found that specific factors such as duality, women members on the board, their remuneration and also their experience have a positive impact on profitability, while the size of the board has a negative impact on performance.
Hussein et al. (2012) looked at the impact of the CG practices of banks in the UAE on financial performance and distress. They discovered a strong positive link involving the disclosure and transparency of the banks. Protecting shareholders' interests was also seen to be associated with the characteristics of the board of directors.
Yet another study, which focussed on 156 listed firms in Indonesia, found that the size of the board has a positive impact on manufacturing firms' performance. They also find that ownership does not have any significant relationship with performance (Herdjiono and Sari, 2017). Bhagat and Bolton (2008) find that CG characteristics and a firm's operating performance in the subsequent year are positively related. Board independence and ownership have a positive influence on disciplinary management and turnaround given the poor performance of firms.
As most studies focus on CG and financial performance, there are few studies that focus on whether board characteristics influence the intangible performance of firms.
Mahmudi and Nurhayati (2014) analyzed the banking companies listed on the Indonesian stock exchange, studying 31 banks over a four-year period. Their results provided strong empirical evidence of a close association between the size of the audit committee, the independence of the board, and the academic credentials of the board members. They also found that the number of meetings and the independent members of the audit committee had no significant impact on the IC performance of banks. Williams (2001) studied firms in South Africa to analyze the impact of gender differences and ethnic diversity on IC performance. She found a strong and positive linkage between the number of female directors on the board, as well as non-white representation on the board, and the IC performance of firms. This evidence helps policymakers to consider diversity on the basis of race and gender in board composition.
Yet another study, placed in the Taiwanese country context, found a strong positive intervening influence of CG characteristics on organizational performance. The researchers used confirmatory factor analysis (CFA) to study the impact of CG as an intervening variable in firm performance; the study was limited to IC design firms (Wu et al., 2012). Abidin et al. (2009), in their study of firms in Malaysia, found a strong influence of board size on the IC performance of firms. They also reported that, for the 75 sample firms of their study, they could not establish any relationship or influence of ownership or CEO duality on performance. The results were considered extremely useful in designing policies to ensure long-term sustainable performance of firms.
Veltri and Mazzotta (2016), in their study of Italian non-financial firms over a three-year period, found that ownership concentration and board composition both had a positive impact on the profitability of these firms.
There is no study that compares the manufacturing and service sectors, and studies on the impact of CG on IC performance in India also do not exist. Therefore, this study has specific objectives to fill the gaps in earlier studies and focuses on the impact of board characteristics on the IC performance of firms in the Indian context.
Objectives:
a. To examine whether board characteristics of firms have a significant impact on IC performance, and whether this impact is greater in the service sector than in manufacturing firms.
b. To analyze the board characteristics of firms from the service and manufacturing sectors.
To systematically evaluate the objectives, the following hypotheses are framed:
H1: Board characteristics, especially board size and its independence, have a statistically significant impact on the IC performance of firms.
H2: Board characteristics vary significantly between firms belonging to the service sector and the manufacturing sector.
Research Design:
3.1. Sample: The study uses firm-level data of 60 firms, equally divided between the service and manufacturing sectors. The firms are randomly selected from the top 100 firms listed on the Bombay Stock Exchange. The period of study is 2015-16 to 2017-18. The data required for the analysis, for both the dependent and independent variables and for all periods of the study, are collected from the PROWESS database. Data gaps are filled from the annual reports of the firms accessed through www.moneyconrol.com
3.2. Research methods: The data collected are arranged to form a panel and then analysed using multiple regression. Descriptive statistics for all the variables are also presented to understand the characteristics of the data. The model used in the study is presented below, followed by a detailed explanation of the dependent and independent variables; the measurement of all the variables is described there in detail.
Explanation of Dependent variables:
VAIC is the value-added intellectual capital coefficient, whose sub-components are Human Capital Efficiency (HCE), Capital Employed Efficiency (CEE) and Structural Capital Efficiency (SCE). Capital Employed Efficiency for firm i is CEE_i = VA_i / CE_i, where VA_i is the value added for firm i and CE_i is the book value of the net assets of firm i. VA is measured using the standard value-added formulation; a sketch of the computation is given below.
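The following minimal sketch illustrates how the VAIC components named above are typically computed in the Pulic-style framework. The accounting items assumed for VA (operating profit plus employee costs plus depreciation and amortization) and the proxy for human capital are assumptions and should be checked against the study's exact data definitions.

```python
def vaic_components(operating_profit, employee_costs, depreciation_amortization, capital_employed):
    """VAIC and its sub-components under assumed Pulic-style definitions.

    VA  = operating profit + employee costs + depreciation & amortization
    HCE = VA / HC   (HC proxied by total employee costs)
    SCE = SC / VA,  with SC = VA - HC
    CEE = VA / CE   (CE = book value of net assets)
    """
    va = operating_profit + employee_costs + depreciation_amortization
    hc = employee_costs
    hce = va / hc
    sce = (va - hc) / va
    cee = va / capital_employed
    return {"HCE": hce, "SCE": sce, "CEE": cee, "VAIC": hce + sce + cee}
```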
Independent variables used in the model:
Five board characteristics are taken to represent each firm's corporate governance aspects. First and most important is the size of the board, which is the number of members on the board of each company; some firms have very large boards and some firms have a two-member board. Next is the number of independent members on the board, since board independence is a very important factor in taking strategic and crucial policy decisions of the firm. The number of executive members on the board is also considered for the analysis. The total number of meetings attended by the members is taken into account to gauge the seriousness of the members towards the firm's policymaking. The total salary of the board members is also an important variable that can influence the IC efficiency of firms.
Besides these factors that reflect the board characteristics, the paper also takes into account the age of the firm and whether the firm is foreign-owned or not. Firm characteristics such as market capitalization, the debt-equity ratio and return on assets are used as control variables.
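A minimal sketch of how such a pooled panel regression could be estimated is given below. The column names and the data file are hypothetical placeholders for the 60-firm, three-year panel described above, and the specification shown (a simple pooled OLS) may differ from the paper's exact model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed input: one row per firm-year with the variables described in the text.
df = pd.read_csv("panel_data.csv")

model = smf.ols(
    "VAIC ~ board_size + independent_directors + executive_directors"
    " + meetings_attended + board_salary + firm_age + foreign_owned"
    " + market_cap + debt_equity + roa",
    data=df,
).fit()
print(model.summary())
```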
Descriptive statistics
The descriptive statistics for both the dependent and independent variables used in the study are presented in Table 1 and Table 2 for the service sector and the manufacturing sector. The results clearly show that the overall IC efficiency measured in terms of VAIC is higher in service sector firms than in manufacturing firms; on average, service sector firms have about double the IC efficiency of their manufacturing counterparts. The firms in the service sector also fare better in terms of human capital efficiency, whereas for both structural capital efficiency and capital employed efficiency the manufacturing firms show better performance than service sector firms on average. The average board size of manufacturing firms is slightly higher than that of firms in the service sector, and the two groups have a comparable number of independent members on the board. The members of service sector firms attend a higher number of meetings than the manufacturing firms' board members. The firms in the service sector have higher levels of leverage and lower levels of return on their assets when compared to the firms in the manufacturing sector. Thus, we can say that although there are some similarities between the firms in these two groups, they are quite different in many aspects.
Analysis of regression results
As can be seen in Table 3, all four models are statistically significant, indicating that corporate governance does have an impact on the performance of the firms. It can also be seen that none of the board characteristics has a statistically significant impact on the overall IC efficiency of service firms in India. However, the profitability of firms and their leverage have a positive and significant impact on the intangible efficiency of firms.
In the case of the capital employed efficiency of the firms, it can be seen in the table that the ownership of the service firms has a strong impact; foreign ownership ensures that intellectual capital is efficiently utilized. The debt-equity ratio and the size of the firm are also important determinants of the capital efficiency of firms. The structural capital efficiency of service firms is heavily determined by board characteristics. Board size has a negative influence on efficiency: large boards result in lower efficiency levels. It can also be observed from the results that a larger proportion of independent members on the board, as well as of executive members, results in an increase in the efficiency of these firms; the salary of the board also plays a significant role. As for the human capital efficiency of the service sector, none of the board characteristics has an influence; only the leverage of the firms and their profitability seem to have an impact on firm efficiency. Table 4 presents the results of the models for the manufacturing sector. Here too all four models are statistically significant. However, it can be observed that board characteristics do not influence the overall IC efficiency of firms. Older firms are seen to have higher IC efficiency, and the debt-equity ratio of the manufacturing sector firms, as well as their profitability, also has some influence on their efficiency.
Board size has a negative influence on the capital employed efficiency of these firms: larger boards reduce CEE sharply. Similarly, greater independence of the directors on the board results in a higher level of efficiency of these firms. The larger size of firms and their profitability also impact the CEE of manufacturing sector firms. In the case of structural capital efficiency, only the independence of the board plays a significant positive role. Human capital efficiency does not depend on any of the board characteristics; only the age of the firm has a strong impact on the efficiency of these firms. Leverage and profitability of the firms affect both the structural and human capital efficiency of the firms in the manufacturing sector.
Conclusions
This study was aimed at analyzing the impact of board characteristics on the intellectual capital performance of firms for a three-year period. The classification of the firms into the service sector and the manufacturing sector enabled a systematic analysis of whether board characteristics have a greater influence on the intellectual capital of service or manufacturing sector firms. The board characteristics were represented using board size, the number of executive directors, the number of independent directors, and the salary of the board members. The firms' intellectual capital was represented using VAIC, CEE, HCE, and SCE.
The results of the study provide clear evidence that board characteristics, especially board size and its independence have a strong and statistically significant impact on the service sector firms IC performance. The impact of board characteristics also has an impact on various sub-components of intellectual capital. This study paves the way for future studies in this area. Studying the impact on specific industries would help frame policies that are relevant to the specific segments. The country comparisons would be possible if uniform sets of firms are analyzed across industries and nations. The policy makers can take the preliminary information derived from the empirical results of this study to frame fruitful policies which will prove to be beneficial for increasing the overall shareholder's value and wealth. | 2019-06-07T23:33:01.528Z | 2019-05-14T00:00:00.000 | {
"year": 2019,
"sha1": "5c396f32504cc6d51ffe454be3c709f49640aea3",
"oa_license": "CCBYNC",
"oa_url": "https://thescholedge.org/index.php/sijbpg/article/download/525/519",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0f90e71847d45ee60d741da01f2c848f3f7a3806",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
229297518 | pes2o/s2orc | v3-fos-license | Engineering Three-Dimensional Moiré Flat Bands
Twisting two adjacent layers of van der Waals materials with respect to each other can lead to flat two-dimensional electronic bands which enables a wealth of physical phenomena. Here, we generalize this concept of so-called moiré flat bands to engineer flat bands in all three spatial dimensions controlled by the twist angle. The basic concept is to stack the material such that the large spatial moiré interference patterns are spatially shifted from one twisted layer to the next. We exemplify the general concept by considering graphitic systems, boron nitride, and WSe2, but the approach is applicable to any two-dimensional van der Waals material. For hexagonal boron nitride, we develop an ab initio fitted tight binding model that captures the corresponding three-dimensional low-energy electronic structure. We outline that interesting three-dimensional correlated phases of matter can be induced and controlled following this route, including quantum magnets and unconventional superconducting states.
S I. COMPUTATIONAL DETAILS FOR 3D TWISTED GRAPHENE, WSE2 AND BORON NITRIDE
For the calculations of 3D twisted graphene, we construct the unit cell with a twisted double bilayer graphene at twist angles close to 0 degrees, and impose periodic boundary conditions along all three dimensions. As it is not realistic to optimize such a large system with density functional theory (DFT) calculations, we fix the lattice constant along the out-of-plane direction to be 13.415 Å, and set the in-plane lattice constant according to the twist angles such that it corresponds to 2.46 Å for a 1x1 cell. The atomic structure is relaxed using the LAMMPS code [1] with the same parameters as described in [2]. The intralayer interactions within each graphene layer are modeled via the second-generation reactive empirical bond-order (REBO) potential [3]. The interlayer interactions are modeled via the Kolmogorov-Crespi (KC) potential [4], using the recent parametrization of [5]. The relaxation is performed using the fast inertial relaxation engine (FIRE) algorithm [6].
We calculate the band structures for 3D twisted graphene using the tight-binding parametrization proposed in Ref. [7]. Here, the operator c_i^(†) annihilates (creates) an electron in the p_z orbital of the carbon atom at site r_i. The p_z electrons are coupled via Slater-Koster hopping parameters t_ij = t(r_i − r_j), with t(d) = t_∥(d) + t_⊥(d). Due to the internal twist between adjacent graphene sheets, a sufficient description of the interlayer hopping must include contributions from ppπ bonds (hopping amplitude 2.8 eV) as well as from ppσ bonds (0.48 eV) [7]. To this end, the factor n = (d · ê_z)/|d| captures the out-of-plane component of the electron transfer integral. Furthermore, ê_z is a unit vector which points perpendicular to the graphene sheets, c = 3.364 Å is the interlayer spacing of graphite, a = 1.42 Å is the distance between neighboring carbon atoms, and the two decay constants 3.15 and 7.462 describe the exponential cutoff of the electron hopping. For the calculations of 3D twisted WSe2 and boron nitride, we perform first-principles calculations based on DFT as implemented in the Vienna Ab initio Simulation Package (VASP) [8], following similar methods used in previous works [9,10]. Plane-wave basis sets are employed with an energy cutoff of 550 eV for WSe2 and 400 eV for boron nitride. The projector augmented wave (PAW) method [11] is used to construct the pseudopotentials felt by the valence electrons. For the calculations of 3D twisted WSe2, the exchange-correlation functional is treated within the generalized gradient approximation (GGA) [12]. All the atoms are relaxed until the force on each atom is less than 0.01 eV/Å. Van der Waals interactions are included using the method of Tkatchenko and Scheffler [13] during the relaxation.
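As an illustration of the distance-dependent Slater-Koster hopping used for the twisted-graphene tight-binding model above, the sketch below splits t(d) into ppπ and ppσ contributions weighted by the out-of-plane projection n. The exponential decay form and the assignment of the two decay constants are assumptions modeled on commonly used twisted-graphene parametrizations and should be checked against Ref. [7].

```python
import numpy as np

# Parameters quoted above (energies in eV, lengths in Angstrom); the exponential
# decay form below is an assumed, commonly used parametrization.
T_PPI, T_PPS = 2.8, 0.48       # ppπ and ppσ hopping amplitudes
A_CC, C_INT = 1.42, 3.364      # nearest-neighbour C-C distance, interlayer spacing
Q_PI, Q_SIGMA = 3.15, 7.462    # decay constants of the exponential cutoff (assumed roles)

def hopping(d_vec):
    """Slater-Koster hopping t(d) = t_par(d) + t_perp(d) between two pz orbitals."""
    d = np.linalg.norm(d_vec)
    n = d_vec[2] / d                                         # (d . e_z) / |d|
    v_pi = -T_PPI * np.exp(Q_PI * (1.0 - d / A_CC))          # in-plane (ppπ) part
    v_sigma = T_PPS * np.exp(Q_SIGMA * (1.0 - d / C_INT))    # out-of-plane (ppσ) part
    return v_pi * (1.0 - n**2) + v_sigma * n**2

# Example: two atoms in adjacent layers, directly on top of each other.
print(hopping(np.array([0.0, 0.0, C_INT])))
```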
For the calculations of 3D twisted boron nitride, the exchange-correlation functional is treated within the local density approximation (LDA). As shown in the previous work [10], the flat bands near the top of the valence band of twisted boron nitride do not change much upon relaxation. Therefore, as the calculations for 3D twisted boron nitride with twist angles down to 2.28 degrees are very heavy, we perform these large-scale calculations for 3D twisted boron nitride without relaxation. The effective structure defined by the charge accumulation points resembles AA-stacked graphene multilayers, where one of the two inequivalent sites, i.e. Q_1, is shifted by D/2 in the z-direction. Hence, in each of the two "effective" planes with z-coordinate 0 and D/2, the charge puddles form a triangular lattice with lattice constant L.
The simplest SU(2)-symmetric TB model that can be constructed for this configuration is a single-orbital two-band model that takes up to next-nearest-neighbor intra- and interlayer hopping terms between the charge puddles into account. Here, t_1 denotes the hopping amplitude between neighboring Q_1- and Q_2-sites, whereas t_2 and t_3 denote hopping processes between two Q_1 (Q_2) sites in either the same or different layers. The hopping parameters are determined by fitting the energy eigenvalues of H_0 to the flat bands of the ab initio band structure of thBN. The single-particle spectrum for the periodic system is then modeled by a Bloch Hamiltonian h_k, labeled in the order of the two charge localization points Q_1, Q_2, whose matrix elements are obtained by a Fourier transform of the real-space hopping matrix of Eq. (3). The matrix h_k can then be diagonalized in orbital space for each momentum k to obtain the band structure ε_b(k) and the orbital-to-band transformation u_r^b(k), b = 1..N (cf. Fig. 3 in the main text). The structure constants L and D (see Fig. 3(a)) describe the spatial extent of the moiré cell in the in-plane and out-of-plane directions, respectively.
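The band-structure step described above (diagonalizing h_k for every momentum) can be sketched as follows. The explicit 2x2 matrix elements used in the example are placeholders built from generic hopping functions, since only the roles of t_1, t_2, t_3 are fixed by the text.

```python
import numpy as np

def bands(h_of_k, k_points):
    """Diagonalize a 2x2 Bloch Hamiltonian h(k) on a list of k-points.

    Returns band energies eps[b, k] and the orbital-to-band transformation
    per k-point, i.e. one eigh per momentum as described above.
    """
    n_orb = 2                                       # two charge-localization points Q1, Q2
    eps = np.zeros((n_orb, len(k_points)))
    u = np.zeros((len(k_points), n_orb, n_orb), dtype=complex)
    for ik, k in enumerate(k_points):
        vals, vecs = np.linalg.eigh(h_of_k(k))      # Hermitian 2x2 matrix in orbital space
        eps[:, ik], u[ik] = vals, vecs
    return eps, u

# Placeholder h(k): a diagonal intra-sublattice dispersion plus a Q1-Q2 coupling;
# the true matrix elements follow from Fourier transforming the t1, t2, t3 hoppings.
def h_example(k, t1=0.1, t2=0.02, t3=0.01):
    diag = -2 * t2 * np.cos(k[0]) - 2 * t3 * np.cos(k[2])
    offd = -t1 * (1 + np.exp(1j * k[0])) * np.cos(k[2] / 2)
    return np.array([[diag, offd], [np.conj(offd), diag]])
```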
S III. FLUCTUATION EXCHANGE APPROXIMATION IN MULTI-ORBITAL SYSTEMS
A. 3D multi-orbital susceptibility
We define the free Matsubara Green's function in orbital-momentum (frequency) space, G^0_{r r'}(iω, k) = Σ_b u_r^b(k) g_b(iω, k) u_{r'}^{b*}(k), where the u_r^b are the orbital-to-band transformations that render the unperturbed Hamiltonian H_0 and the free Green's function g_b(iω, k) = [iω − ε_b(k)]^(−1) diagonal. The orbital indices r = {Q_1, Q_2} are restricted to the same unit cell and the momenta k lie in the first Brillouin zone. To this end, we define the free polarization function χ̂^0(q) with elements χ^0_{r,r'}(q). The Matsubara summation occurring in Eq. (8) can be evaluated analytically, giving the well-known Lindhard function for multi-orbital systems, where n_F(ε) = (1 + e^(βε))^(−1) is the Fermi function.
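A minimal numerical sketch of the multi-orbital Lindhard function referred to above is given below: it sums over bands and momenta using the band energies and orbital-to-band matrix elements from the tight-binding step. The precise sign convention and combination of the four u factors are written here in a standard RPA form and should be checked against the supplement's exact expression.

```python
import numpy as np

def fermi(e, beta):
    return 1.0 / (1.0 + np.exp(beta * e))

def chi0(q_index, eps, u, beta, eta=1e-6):
    """Static multi-orbital Lindhard susceptibility chi^0_{r r'}(q) on a discrete k-mesh.

    eps[b, k] are band energies, u[k, r, b] orbital-to-band transformations;
    q is specified as an index shift on the (periodically wrapped) mesh.
    """
    n_k, n_orb = eps.shape[1], u.shape[1]
    chi = np.zeros((n_orb, n_orb), dtype=complex)
    for k in range(n_k):
        kq = (k + q_index) % n_k
        for b in range(eps.shape[0]):
            for bp in range(eps.shape[0]):
                de = eps[b, k] - eps[bp, kq]
                num = fermi(eps[b, k], beta) - fermi(eps[bp, kq], beta)
                weight = num / (de + 1j * eta)          # eta regularizes degenerate energies
                for r in range(n_orb):
                    for rp in range(n_orb):
                        chi[r, rp] -= (u[k, r, b] * np.conj(u[k, rp, b])
                                       * u[kq, rp, bp] * np.conj(u[kq, r, bp])) * weight
    return chi / n_k
```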
B. Random-phase approximation for multi-orbital systems
To study correlated states of matter in thBN that arise due to the presence of electron-electron interaction, we employ a repulsive Hubbard term for electrons with opposite spin, H_int = U Σ_{R, r_i} n_{R,r_i,↑} n_{R,r_i,↓}. Here, the occupation number operator is defined as n_{R,r_i,σ} = c†_{R,r_i,σ} c_{R,r_i,σ}. We calculate the renormalized interactions within the random-phase approximation (RPA) to analyze the electronic instabilities mediated by spin-fluctuation exchange between electrons to high order in the bare coupling U. Admittedly, this approach is biased as it does not capture the intertwined fluctuations in different two-particle scattering channels, which would require the use of unbiased fRG techniques.
C. Pairing Symmetry
For interaction values U < U_crit, the magnetic instabilities prescribed by the RPA analysis might not be strong enough to actually occur. In this paramagnetic regime, spin and charge fluctuations contained in the transverse and longitudinal spin channels can give rise to an effective interaction between electrons that may lead to the formation of Cooper pairs. We may write the general particle-particle scattering vertex between electrons with opposite spin; the diagrams can be separated into spin-singlet and spin-triplet contributions, depending on whether pairing involves opposite or equal spins, i.e. s ≠ s′ (singlet) or s = s′ (triplet). In general, we may separate the dependence of the gap parameter on momentum, spatial and spin degrees of freedom. Since for spin-singlet gaps the spin part is antisymmetric under exchange of its indices, the spatial and momentum dependence must be symmetric in order to fulfill the Pauli principle. For spin-triplet gaps we hence require f(k, r_1, r_2) = −f(−k, r_2, r_1). Since the system is assumed to be paramagnetic, pairing same/opposite spins yields the same result after explicitly symmetrizing/anti-symmetrizing the interaction vertex in orbital-momentum space.
Restricting the pairing to Kramers-degenerate pairs (k_1, ↑) and (−k_1, ↓), the particle-particle scattering vertex in the FLEX approximation is given by transverse (t) and longitudinal (l) spin fluctuations. For simplicity, we abbreviate the full index structure of the vertex (r_1 r_2 → r_3 r_4; k_1, −k_1 → k_2, −k_2) by the pair indices (r_1, r_2; k_1, k_2) in the following. The diagrams contributing to these spin channels are shown below.
The effective spin-mediated interaction in the opposite-spin channel is thus built from these transverse and longitudinal contributions. The spin dependence of the susceptibilities occurring in the diagrammatic expansion above can be neglected due to the emergent SU(2) symmetry in the paramagnetic phase. To obtain the effective interaction in the singlet (s) and triplet (t) channels, we symmetrize/antisymmetrize the interaction vertex in its outgoing indices, i.e. we take one half of the sum or difference of the direct and exchanged vertex contributions.
D. Linearized Gap Equation
Assuming that spin-and charge fluctuation provide the superconducting glue in the system, we confine our considerations to the vicinity of the Fermi surface and only treat . The momentum transfer occurring in the polarization function in RPA is given by q t = k 1 +k 2 due to momentum conservation.
... . The momentum transfer occurring in the polarization function in RPA is given by q l = k 1 k 2 due to momentum conservation. Only an even number of particle-hole bubbles is allowed in the diagrammatic expansion in order to preserve the spin in the upper and lower leg of the pairing interaction. The diagrams that are resummed in the longitudinal channel are connected to the particle-hole susceptibility describing screening effects of the bare Coulomb interaction.
scattering processes of a Cooper pair from state (k, k) on fermi surface C b to the state (k 0 , k 0 ) on fermi surface C b 0 . To this end, we project the pairing vertex Eq. (14) from orbital to band space and only take intra-band scattering into account bb 0 The momenta k and k 0 are restricted to the various fermi surface sheets {C}, such that Here, v b F (k) = |r✏ b (k)| is the Fermi velocity at k 0 in band b. The largest eigenvalue > 0 for a given interaction kernel bb 0 s/t (k, k 0 ), will lead to the highest transition temperature T c and the corresponding eigenfunction b (k) determines the symmetry of the gap. The effective lattice model obtained from the charge accumulation points has point group D 3h .
The symmetry of the gap can thus by classified according to the irreducible representations of D 3h that are listed in Table 2.
The linearized gap equation (17) only accounts for the leading pairing symmetry at the transition temperature T c of the superconducting phase. In the case of degenerate eigenvalues (e.g. d-wave instabilities {d xz , d yz }) belonging to a two-dimensional irreducible representation, an arbitrary linear combination might be favored for T < T c . In order to find the linear combination that is preferred by the system below the transition temperature, we compute the free energy of the system Here, E b (k) is the energy of the Bogoliubov quasi-particles resulting from diagonalization of the BdG Hamiltonian where the form factors are are given by d xz (k) = sin(k x )sin(k z ) and d yz (k) = sin(k y )sin(k z ).
The free parameters ✓ and are extracted by minizing the free energy of the system Eq. (18). In Fig. 3 we show that the linear combination b k / [d xz (k) ± id yz (k)] = [sin(k x )sin(k z ) ± isin(k y )sin(k z )] is generally preferred for the given filling.
singlet triplet f y(y 2 3x 2 ) · p z f y(y 2 3x 2 ) Table 2. Pairing symmetries for the effective lattice model of thBN separated into contributions to spin singlet and triplet channel. | 2020-12-18T02:15:57.837Z | 2020-12-17T00:00:00.000 | {
"year": 2021,
"sha1": "0945883b16692908773c44c74b7acca5b5227d4c",
"oa_license": "CCBY",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acs.nanolett.1c01684",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "a4d39c2e46cc8bf892a0f15570990614ae740656",
"s2fieldsofstudy": [
"Engineering",
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science",
"Medicine"
]
} |
73590533 | pes2o/s2orc | v3-fos-license | Simulating Alpine Tundra Vegetation Dynamics in Response to Global Warming in China
Introduction
Global temperatures are increasing due to the effects of greenhouse gas emissions. It is projected that climate changes will have profound biological effects, including changes in species distributions as well as vegetation patterns (Walther et al., 2002; Klanderud & Birks, 2003; Pauli et al., 2003; Tape et al., 2006). Many results from observations and experiments (Parmesan, 1996; Molau & Alatalo, 1998; Parmesan et al., 1999; Welker et al., 2000, 2005; Schimel et al., 2004; Sullivan & Welker, 2005) and simulation studies (Cramer & Leemans, 1991; Harras & Prentice, 2003) have depicted alterations in C and N cycling, trace gas exchanges, and shifts in the distribution of vegetation boundaries and the mixture of shrubs and grasses. The Tibetan Plateau covers approximately 2.5 million km2 with an average altitude of more than 4000 m and is dominated by alpine tundra (Zheng, 2000). Alpine tundra vegetation is predicted to be one of the most sensitive terrestrial ecosystems to changing climate (Korner, 1992; Grabherr et al., 1994; Chapin et al., 1992, 2000). This type of ecosystem is composed of slow-growing plants and is dominated by soils with high organic matter concentrated near the surface that undergo frost heave and cryoturbation (Billings, 1987; Xia, 1988). Both plant growth and, possibly, organic matter decomposition are predicted to increase under warmer climates, which may cause changes in alpine ecosystem carbon flux and energy flow (Chapin et al., 1997; Kato et al., 2006). Simultaneously, warmer weather may increase plant growth and primary production (Bowman et al., 1993; Wookey et al., 1995) as well as cause changes in species dominance (Walker et al., 1994; Klein et al., 2007). We report findings derived from short-term responses to simulated environmental warming, focusing on the aboveground biomass of three dominant life forms and on community compositional attributes. Based on 38 years (1959-1996) of climate observations and statistical analysis, the increase in annual mean temperature during this period ranged from 0.4 to 0.6°C in the area of Haibei Alpine Tundra Ecosystem Research Station (Li et al., 2004), which is located on the northeastern part of the Qinghai-Tibetan Plateau (37°N, 101°E). In order to study alpine tundra vegetation changes at the regional scale, we model alpine tundra vegetation spatial and temporal dynamics in response to global warming by integrating a raster-based cellular automaton and a Geographic Information System (Zhang et al., 2008). Temperature changes across the study area are due not only to elevation, but also to aspect and distance from the nearest stream channel. The linear regression model provided a temperature spatial distribution based on elevation alone, which is the primary step. The normalized temperature surface created by the Multi-Criteria Evaluation (MCE) is highly representative of the potential temperature distribution in a normalized fuzzy format. Assuming each vegetation type in the raster cell unit reacts as a homogeneous entity, we conduct a spatial and temporal simulation by combining cellular automata and MCE provided in the IDRISI software (Eastman, 2003). Global changes have strong effects on terrestrial ecosystems but with significant regional differences. The Tibetan Plateau is currently experiencing rapid changes in temperature (Zhang et al., 1993).
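The two preprocessing steps described above (the elevation-based temperature regression and the fuzzy normalization used for the MCE) can be sketched as follows. The regression coefficients and the toy elevation grid are hypothetical placeholders; in the study they would come from fitting station observations and from the actual DEM.

```python
import numpy as np

def temperature_surface(elevation_m, t0_c=-1.7, lapse_rate_c_per_m=-0.006, ref_elev_m=3250.0):
    """First step: a linear regression of mean temperature on elevation.

    t0_c, lapse_rate_c_per_m and ref_elev_m are hypothetical values used only
    to illustrate the elevation-only regression described in the text.
    """
    return t0_c + lapse_rate_c_per_m * (elevation_m - ref_elev_m)

def fuzzy_normalize(raster):
    """Rescale a raster to 0-1 so it can enter a Multi-Criteria Evaluation (MCE)."""
    lo, hi = raster.min(), raster.max()
    return (raster - lo) / (hi - lo)

# Toy elevation grid (m) and its normalized potential-temperature surface.
elev = np.array([[3200.0, 3400.0], [3600.0, 3800.0]])
print(fuzzy_normalize(temperature_surface(elev)))
```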
Fluctuations in temperature have had significant effects on the alpine tundra ecosystem, which produce important changes in the global energy balance and carbon budget (Cao & Woodward, 1998; Zhou, 2001; Kato et al., 2006). The Qinghai-Tibetan Plateau is situated in southwestern China (Fig. 1) and is the highest continental landmass in the world. Elevation ranges from 2500 to 8000 m, with an average altitude of more than 4000 m. Uplift of the plateau created and then strengthened the South Asian Monsoon, and the plateau affects terrestrial ecosystems in China owing to its unique location and high elevation (topography) (Zhang, 1993; Thompson et al., 1989). The development and evolution of species and vegetation on the Qinghai-Tibetan Plateau were influenced significantly by a fluctuating climate during the uplift. Ni (2000) simulated biomes on the Tibetan Plateau using the improved BIOME3 model (BIOME3-China) under present climate conditions, as well as under a scenario with a CO2 concentration of 500 ppmv. A combined biogeography-biogeochemistry model, BIOME4 (Kaplan et al., 2003), was improved to simulate alpine vegetation changes at the biome level (Song et al., 2005). In this chapter, we review the important ecological findings from simulated environmental changes on alpine tundra vegetation (Zhang & Welker, 1996). We present changing alpine tundra vegetation using the Vegetation Dynamic Simulation Model (VDSM) integrated with scenarios of global temperature increases of 1 to 3°C (Zhang et al., 2008). With the BIOME4 model (Song et al., 2005), we illustrate the vegetation biomass changes and vegetation distribution dynamics in the region of the Qinghai-Tibetan Plateau in response to global warming.
Tibetan alpine tundra above ground biomass and community responses to simulated changes in climate
A suite of abiotic conditions may be modified as weather patterns and regional climates change, altering biospheric and atmospheric processes in tundra ecosystems (Maxwell, 1992; Shaver et al., 1992; Jonasson et al., 1993; Grabherr et al., 1994; Larigauderie & Korner, 1995). For instance, warmer air temperatures will likely alter the flux of water from these ecosystems to the atmosphere, drying soils and contributing to increased cloud formation. Simultaneously, warmer conditions may increase plant growth, primary production and carbon sequestration, so long as cloud cover is not affected and other factors such as water or nutrients do not limit photosynthesis and growth (Haag, 1974; Bowman et al., 1993; Wookey et al., 1995). The ecological consequences of changes in tundra environmental conditions will be manifested in a host of processes, including shifts in primary production (Bowman et al., 1993; Walker et al., 1994), trace gas fluxes (Brooks et al., 1995), plant and soil mineral nutrition (Nadelhoffer et al., 1991; Shaver & Chapin, 1991), reproductive plant biology, and leaf carbon isotope discrimination, as well as changes in species dominance (Walker et al., 1994). However, it is unclear whether all these processes are sensitive to short-term changes in environmental conditions in all tundra habitats or whether multiple years of climate change are necessary to elicit detectable alterations in plant performance and species abundance. To date, most studies of alpine tundra responses to in situ changes in climate, using field manipulations, have been confined to sites in North America and Western Europe (Korner, 1992; Chapin et al., 1995; Kennedy, 1995), without consideration of the extensive alpine tundra in Asia and, in particular, western China.
Experimental treatments and observations
Our research site is located near the Haibei Alpine Meadow Ecosystem Station (37°N, 101°E) at an elevation of 3250 m (Xia, 1989; Cincotta et al., 1992). The vegetation of our field site is typical of a Kobresia humilis meadow (Zhou et al., 1987). Our field experiment was initiated in June 1991 and the first season was completed in October 1991. Four treatments were implemented: (1) minigreenhouses (G), (2) shade (S), (3) side fences (SF), and (4) control plots (C). Each experimental plot is 2 m x 5 m. A completely randomized design was used to establish the 16 treatment plots, consisting of the four treatments (G, S, SF, C) replicated four times. The detailed site setup, microclimate monitoring and field observations were described by Zhang and Welker (1996).
The greenhouse treatment increased mean air temperature by 20%, from 12.4 to 17.8°C, over the course of the growing season (Table 1). Warmer air temperature subsequently caused higher soil temperatures at 5, 10, and 15 cm under greenhouse (G) as opposed to ambient (C) conditions (Table 1). The mean vapor density was significantly increased under the warmer temperatures of the greenhouse (G), from 4 to 12 g m-3. The soil suction was essentially the same between all treatment plots, except under shaded (S) conditions, where the soil suction was consistently higher, indicating a lower soil water content and drier environmental conditions. The shade treatment (S), while reducing irradiance, also resulted in a slight increase in air temperature and soil temperature at 5 cm. The shade treatment (S) had no effect on soil temperatures at 10 cm or 15 cm, nor did it alter the vapor densities. Side fences (SF) had no effect on ambient air temperatures and subsequently no effect on soil temperatures.
Table 1. Abiotic conditions from the four treatments between July and October 1991
Results and discussions
Total community aboveground biomass in all four treatments was not significantly different in July (Table 2). The peak aboveground biomass under greenhouse (G) conditions, 351.36 g m-2 in September, and under ambient (C) conditions, 346.19 g m-2 in October, showed no significant difference at the Haibei Alpine Meadow Ecosystem Research Station. However, lowered irradiance (S) resulted in a 23% decrease in total community biomass within 5 weeks of treatment application. Total biomass under reduced irradiance (S) continued to be the lowest over the course of the season, reaching a maximum of only 80% of the peak biomass under ambient (C) conditions. Total maximum aboveground biomass at our Tibetan alpine tundra site ranged from 161 to 351 g m-2 under ambient conditions (Table 2). These ranges in biomass are similar to the peak aboveground biomass at other alpine tundra sites, such as on Niwot Ridge, Colorado, U.S.A., where the intercommunity aboveground biomass in different vegetation types ranges from 71 to 309 g m-2 (Walker et al., 1994). Our environmental manipulations simulating climate warming resulted in warmer air and soil temperatures by between 1 and 5°C, which is within the ranges of increase reported for higher elevations in Western Europe over the past 15 years (Rozanski et al., 1992; Grabherr et al., 1994) and within the ranges predicted for tundra habitats under a doubling of CO2 over the next 50 yr (Maxwell et al., 1992). The season-long average increases are also similar to those accomplished in other tundra experimental warming treatments, though our lack of nighttime measurements means our averages are slightly higher than those actually experienced by plants and soil in these treatment plots (Chapin & Shaver, 1985; Wookey et al., 1993; Parsons et al., 1994; Kennedy, 1995). However, most importantly, higher temperatures were maintained in our warmed plots into October and may partially explain the extended growing season observed for grasses.
Table 2. Total aboveground biomass (g m-2) from the four treatments in July, August, September, and October 1991
Aboveground biomass was initially similar among all treatments for forbs, sedges and grasses (Fig. 2a). Within 5 weeks after the warming treatments were implemented, grass biomass was significantly higher in the warmed as compared to control conditions (Fig. 2b). Conversely, grass biomass was significantly reduced during this same period under shaded conditions (Fig. 2b). Reductions of wind using side fences (SF) had no significant effect on grass, sedge or forb biomass (Fig. 2b). By September, grass biomass differences between control and warmed plots were nonsignificant, though forb biomass was significantly (p < 0.05) lower in the greenhouses (G) as opposed to control conditions (C) (Fig. 2c). Lower irradiance had a significant effect on grass growth, and in September grass biomass was 36% less in shaded (S) as opposed to control conditions. Forb biomass was slightly higher in side-fenced areas as compared to control conditions. Between September and October, grass in control plots started to senesce and biomass began to decline (Fig. 2c, 2d). However, grass biomass was significantly (p < 0.01) higher in warmed (G) as opposed to control conditions in October, which postponed community senescence (Fig. 2d). This prolonged growth, or postponed senescence, during the fall in warmed plots occurred as the greenhouses maintained warmer air and soil temperatures than ambient conditions.
Biomass of grasses and forbs was slightly lower under shaded (S) conditions in October, while sedge biomass was significantly (p < 0.05) higher under these same reduced irradiance conditions (Fig. 2d). Species importance values as a measure of community-level responses are presented in Table 3. Under reduced radiation (S), reductions in Elymus and Festuca were associated with increases in Stipa and Scirpus, which dramatically altered the composition and structure of these plant communities. Changes in community composition and structure under warmer conditions (G) were manifested by lower importance values for Poa and Kobresia with corresponding increases in importance values for Stipa and Oxytropis (Table 3). Grass and forb biomass production was especially sensitive to warmer conditions (Fig. 2). Grass aboveground biomass was 25% greater under warmer conditions after only 5 weeks of warming, while forb biomass decreased by 30% (Fig. 2b). Differences in aboveground grass biomass between warmer and control conditions were diminished by September, when grass biomasses were not significantly different (Fig. 2c).
However, it appears that community senescence, which usually starts in September, was postponed until sometime in October under warmer (G) conditions, as evidenced by no decline in aboveground community biomass between September and October (Table 2). This postponing of senescence, and subsequently an extension of the growing season under warmed conditions, resulted in part because peak grass biomass was not realized until early October, amounting to 177 g m-2 (Fig. 2d). The ability of the grass life form at our site to exhibit a rapid, positive response to warmer conditions and to extend the season of growth is likely the result of (1) the existence of a large leaf area at the time of treatment application, (2) the inherent physiological capacity of grasses to alter patterns of resource allocation (Welker et al., 1985, 1987; Welker & Briske, 1992), (3) their morphological and demographic capacity to elongate fall tillers (Briske & Butler, 1989), and (4) the ability to grow when environmental constraints are temporally removed (Sala et al., 1992). Grasses at other tundra sites have also exhibited an ability to respond rapidly to simulated changes in climate, as exemplified by Calamagrostis biomass increases under warmer conditions in the sub-arctic at Abisko, Sweden. The grass growth response reported there, in what is typically a dwarf shrub dominated ecosystem, was due in large part to an extensive, preexisting network of underground Calamagrostis meristems, capable of rapid shoot extension and leaf development up through the dwarf shrub understory. The shift in alpine tundra community biomass characteristics, whereby maximum biomass is maintained into the autumn, is different from what might be observed in arctic tundra dominated by deciduous dwarf shrubs. Prolonged growth of many arctic plants in autumn is unlikely due to photoperiodic cues which control senescence (Murry & Miller, 1982). Thus, even if conditions in arctic tundra were warmer in fall, the ability of many dominant life forms to either produce new fall foliage or continue expansion of existing leaf and shoot biomass is limited by life history traits. And while graminoids such as Eriophorum may constitute a large fraction of the biomass in these systems, extended growth in fall under warmer temperatures may be unlikely due to the low solar angles in autumn. The ability of grasses to utilize favorable conditions at the end of the season is a trait similar to that observed for other tundra life forms such as evergreen shrub species (Karlsson, 1985; Welker et al., 1995). For instance, Welker et al. (1995) found evidence that Dryas octopetala, a wintergreen species, has the capacity to exhibit net carbon assimilation at the end of the season under warmer, wetter, and fertilized conditions when plants in control conditions have ceased gaining carbon, which is made possible in part by its evergreen nature.
Fig. 2. The aboveground biomass of grasses, sedges, and forbs in control, greenhouse, shaded, and side-fenced treatment plots sampled in July, August, September, and October 1991. Superscripts of different letters denote biomasses which were significantly different (p < 0.05) for each individual sampling date.
Table 3. The importance value of dominant plant species between the four treatment plots
In addition, Karlsson (1985) found that 20% of the carbon acquired by the evergreen dwarf shrub Vaccinium vitis-idaea occurred in spring and in autumn, before leaf emergence or after leaf senescence in the deciduous species Vaccinium uliginosum. Thus, evergreen dwarf shrubs are also a tundra life form which, due to their inherent life history characteristics, can respond to changes in environmental conditions which occur in spring and fall (Welker et al., 1995).
The opportunistic behavior of grasses we observed was not evident for forbs. During the initial 5 weeks, forb biomass was reduced under warmer conditions while grass biomass was increasing (Fig. 2b). The opposite response for forbs may have been due in part to the grasses out-competing forbs for water, nutrients and/or light. However, the overall community-level response was that total biomass was not different between warmed (G) and control (C) conditions after 5 weeks of experimental application (Table 2). This observation of similar community biomass under modified environmental conditions is consistent with the observations of Chapin and Shaver (1985). These authors found that arctic tundra total community production (current year's growth) in perturbed and in control plots remained the same. This inherent buffering was achieved because some species or life forms increased growth while others exhibited reduced growth. They concluded that conditions favorable for one species or life form are less favorable for others, though the total community or ecosystem production changes very little annually (Chapin et al., 1995). This attribute of tundra ecosystems may be the result of the inherently low nutrient levels available to plants in tundra, which constrain system-level primary production. The one life form in our study which appeared to be the least responsive to simulated climate warming was the sedges, consisting primarily of Kobresia humilis. The lack of significant increases in biomass until the end of the first season under warmer or shaded conditions indicates that this life form has a relatively low sensitivity to temperature and irradiance. However, other sedges, such as Kobresia myosuroides on Niwot Ridge, Colorado, exhibit an increase in biomass under elevated nutrient availability (Bowman et al., 1993). This would suggest that while the warmer conditions in soils under our minigreenhouses may have elevated soil mineralization and increased nutrient pools available to plants (Jonasson et al., 1993; Robinson et al., 1995), the increases were either not sufficient to alter Kobresia growth, or Kobresia root uptake rates are low and its ability to compete for soil nutrients with grasses is low (Black et al., 1994; Falkengren-Grerup, 1995). Even though soil nutrition may have been altered under warmed conditions, the ability of sedges at our site to acquire these resources in a competitive setting appears to be limited, possibly in part due to resource capture by soil microbes (Jackson et al., 1989). However, in future years changes in rooting patterns may enable this species to capitalize on changes in soil resources. In conclusion, our findings suggest that Tibetan alpine grasses are predisposed to rapid increases in biomass under simulated climate warming, due in part to their inherent life history traits. In addition, the ability of grasses to produce tillers late in the season under warmer conditions extends the period of carbon gain and extends the period in which the community exhibits maximum aboveground biomass. We find that sedges at our site are insensitive in the short term to changes in environmental conditions, while forbs may decrease at the expense of grass biomass. Increases in cloudiness over the Tibetan alpine tundra would likely result in lower aboveground biomass, but if accompanied by higher rainfall the effects may be counter-acting.
The extension of peak community biomass into the autumn may in the long term have cascading effects on net ecosystem CO 2 fluxes, nutrient cycling, and forage availability to grazers .
Cellular automata: simulating alpine tundra vegetation dynamics in response to global warming
Cellular automata (CA) operate over a series of discrete time steps, where single raster cells can be influenced by their neighborhood or other data in an overlay. All map layers are imposed on the same grid system. This type of GIS environment provides a sophisticated tool to help us target the real problem in a complex system (Wolfram, 1984; Coulelis, 1985; Itami, 1994; White & Engelen, 2000; Giles, 2002). In our study, we use GIS analysis, linear regression, MCE, cellular automata (CA), and a raster image calculator to build a unique Vegetation Dynamic Simulation Model (VDSM). Global warming scenarios are interpreted as inputs of the spatial parameters. Large processing tasks are completed by the computer system. The predicted outcome of this study is that individual vegetation types will respond to a global mean temperature increase (GMTI) in 2100 of 1 or 3°C by either expanding or shrinking their range because of the plant species' suitability to the warmer and drier climate conditions. This corresponds to 0.1 or 0.3°C per decade, respectively (Johnes & Briffa, 1992; Leemans, 2004).
Methods and simulation model
3.1.1 Data set
Our study area is located near the Haibei Alpine Meadow Ecosystem Research Station, Qinghai Province, China (37°29'-37°45'N, 101°12'-101°33'E). The elevation in the study area varies from 3000 to 4500 m a.s.l. The model uses the following data: • A 90 m x 90 m resolution DEM; • Temperature derived from the DEM using an empirical linear regression model (Zhang, 2005); • Land surface parameters (aspect, slope, stream channel density) derived from the DEM using the GIS analysis tools provided in the IDRISI software (Eastman, 2003); • A raster vegetation map (30 m x 30 m pixels) produced in 1988 (Zhang, 2005). The vegetation map, with a total of 10 vegetation classes, is resampled to match the DEM resolution (90 m x 90 m) (Fig. 3).
Multi-criteria evaluation
Constraints are defined as the limited areas that are not considered to be natural vegetation, such as water bodies, glaciers, gravel slopes and artificial grasslands. They represent areas where natural vegetation cannot grow or is otherwise constrained. A Boolean image is created to display inclusion and exclusion of the constraint conditions.
The factors (F1, F2, ..., Fn) used in the MCE are selected based on the most important variables that determine the output y in Equation (1). We use MCE in step 1 (Fig. 4) to determine a normalized surface temperature, which calibrates the temperature by these spatial parameters: aspect, suitable surface temperature, and distance to the nearest stream channel. The temperature varies along with these spatial parameters: (1) It increases from the north (a = 0°) to the southeast (b = 145°), with the highest values from the southeast to the southwest (c = 275°), then decreases towards the north (d = 360°). This change pattern can be described by a sigmoidal fuzzy membership function, symmetric in shape, with the specific values (a, b, c, d) in Table 4. (2) In the lower valley of our study area, the monthly mean temperature in July is 10.1°C (Li et al., 2004). The temperature decreases with increasing elevation. We define temperatures below a = 0°C as unsuitable for alpine plant growth. Temperatures from 0°C to b = 5°C are defined as less suitable for alpine plant growth; temperatures from 5°C to c = 13°C are defined as most suitable. Temperatures from 15.5°C to d = 18°C reduce the suitability for alpine plant growth, representing the dry, south-facing areas (Table 4). In the Analytical Hierarchy Process (AHP), the most important temperature range is from b = 5°C to c = 13°C, which is represented by a sigmoidal fuzzy membership function of symmetrical shape.
(3) The distance to a stream affects the temperature. If the area is within 10 m of a stream or water body, its temperature is closer to the stream or water temperature; beyond 600 m from a stream or water body, the temperature is minimally affected by the nearest water bodies. The values are defined as a = 10 m, b = 600 m, with a sigmoidal fuzzy membership function of monotonically decreasing shape. In step 2 (Fig. 4), we use the MCE method and create a suitability map for each vegetation type using the aspect, slope, and the distance to stream channels. The suitability values (a, b, c, d) in Table 5 are defined based on the vegetation distribution in the study area. The fuzzy membership function type and shape and the AHP weights for each vegetation type are calculated and reported in Table 5 (Eastman, 2003).
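To make the fuzzy rescaling and weighted combination above more concrete, the following Python sketch evaluates a symmetric sigmoidal membership function over control points (a, b, c, d) and combines normalized factor surfaces with AHP weights into a single suitability surface, masked by the Boolean constraint image. It is only a minimal illustration of the MCE-WLC idea, not the IDRISI implementation; the cosine-shaped ramp, the placeholder aspect factor, and the 0.6/0.4 weights are assumptions for the example (the temperature control points follow the text above).

import numpy as np

def sigmoidal_membership(x, a, b, c, d):
    """Symmetric sigmoidal fuzzy membership: rises from a to b, equals 1
    between b and c, falls from c to d, and is 0 outside [a, d]."""
    x = np.asarray(x, dtype=float)
    mu = np.zeros_like(x)
    rising = (x > a) & (x < b)
    mu[rising] = 0.5 * (1.0 - np.cos(np.pi * (x[rising] - a) / (b - a)))
    mu[(x >= b) & (x <= c)] = 1.0
    falling = (x > c) & (x < d)
    mu[falling] = 0.5 * (1.0 + np.cos(np.pi * (x[falling] - c) / (d - c)))
    return mu

def weighted_linear_combination(factors, weights, constraint):
    """MCE-WLC: weighted sum of normalized factor surfaces, masked by a
    Boolean constraint image (0 where natural vegetation cannot occur)."""
    suitability = sum(w * f for w, f in zip(weights, factors))
    return suitability * constraint

# Hypothetical example: July temperature suitability (a=0, b=5, c=13, d=18 deg C)
temperature = np.random.uniform(-2.0, 16.0, size=(100, 100))
temp_suit = sigmoidal_membership(temperature, 0.0, 5.0, 13.0, 18.0)
aspect_suit = np.random.uniform(0.0, 1.0, size=(100, 100))   # placeholder factor surface
constraint = np.ones((100, 100))                              # no constrained cells in this toy case
suitability = weighted_linear_combination([temp_suit, aspect_suit], [0.6, 0.4], constraint)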
Macro modeler
The macro model is created to simulate changes in each vegetation type through time and space, integrating operations such as overlay, scalar, fuzzy module, and cellatom (Fig. 5). These operations are available in the IDRISI software package (Eastman, 2003) and have to be built in Macro Modeler with an initial scalar value (0.0-1.0) and fuzzy set values (0-255).
The GMTI scenarios are implemented in the Macro Modeler by adjusting the scalar operation to increase the temperature by 0.1°C in a discrete time period. The same logic is applicable to 0.3°C in a discrete time period (Leemans, 2004). Running the simulation for 10 iterations, the effects of increasing temperature on each vegetation type are accumulated in the output image.
Note: Slope: 0-90°; DEM: indicates the elevation range for relatively flat areas; for others, refer to the note on Table 4.
Table 5. The ten vegetation types' fuzzy membership function types and shapes, the factors' suitability values (a, b, c, and d), and AHP weights
The cellular automata module is implemented in the Macro Modeler (Fig. 5) and used to form a uniform raster image to represent global warming effects in a spatial context, operating over discrete time steps (Coulelis, 1985; Giles, 2002; White & Engelen, 2000). The change in each cell depends on the parameters or requirements set by the user and the surrounding neighbors (Wolfram, 1984; Itami, 1994; Ruxton & Saravia, 1998). This project uses the CELLATOM module with a 3 x 3 filter and reclassifies an output cell if at least 3 neighbors contain non-null values. We define the 10 iterations using the DynaLink module (Eastman, 2003). Within each iteration, each vegetation map is dynamically updated by running Cellatom; it is then overlaid with the vegetation suitability map altered by a GMTI of 0.1 or 0.3°C. Thus, after 10 iterations, a final suitable vegetation map is produced. The same dynamic processing is repeated for each vegetation type, resulting in a total of 10 vegetation suitability maps in response to warmer weather.
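The neighborhood rule and the iterative accumulation of the GMTI can be outlined in a few lines of Python. The sketch below is an illustration of the logic, not the CELLATOM/DynaLink modules themselves: a null cell is reclassified when at least 3 of its 8 neighbors in the 3 x 3 window are non-null, and a simple linear suitability shift stands in for the fuzzy overlay of the suitability map altered by each 0.1°C increment (the `sensitivity` value and the 0.5 threshold are assumptions).

import numpy as np

def cellatom_step(veg, min_neighbors=3):
    """One CA pass: a null (0) cell becomes occupied when at least
    `min_neighbors` of its 8 neighbors in the 3x3 window are non-null."""
    occupied = (veg > 0).astype(int)
    padded = np.pad(occupied, 1, mode="constant")
    neighbor_count = sum(
        padded[1 + di: padded.shape[0] - 1 + di, 1 + dj: padded.shape[1] - 1 + dj]
        for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)
    )
    grown = veg.copy()
    grown[(veg == 0) & (neighbor_count >= min_neighbors)] = 1
    return grown

def run_scenario(veg, suitability, n_iter=10, dt_per_step=0.1, sensitivity=0.02):
    """Accumulate the effect of a GMTI applied in n_iter discrete steps.
    The linear suitability shift with temperature is only a placeholder for
    the fuzzy overlay used in the actual Macro Modeler workflow."""
    for step in range(n_iter):
        veg = cellatom_step(veg)
        warmed = np.clip(suitability - sensitivity * dt_per_step * (step + 1), 0.0, 1.0)
        veg = np.where(warmed > 0.5, veg, 0)   # the type is lost where it becomes unsuitable
    return veg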
Composite final vegetation map
Our objective is to create a composite vegetation map for each global warming scenario, a GMTI of 1 or 3°C over time (Fig. 4). All of the 10 vegetation suitability maps with a GMTI of 1 or 3°C are combined in order to produce a composite map using the image calculator module in IDRISI (Eastman, 2003).
Results and discussions
3.2.1 Normalized temperature spatial distribution
Temperature changes across the study area are not only due to elevation, but also due to aspect and distance from the nearest stream channel. The linear regression model provided a temperature spatial distribution based on elevation alone, which is our primary step. Furthermore, the normalized temperature surface created by the MCE is highly representative of the potential temperature distribution in a normalized fuzzy format (Fig. 6).
Temperature distribution is correlated with and controlled primarily by elevation. Numerous spatial interpolation methods have been applied to estimate the spatial distribution of temperature (Li, 2005). The interpolation results do not always agree with the actual sample points, including those obtained with geo-statistical methods and spatio-temporal splines. These methods are highly dependent on the distance to the sample points and on the surface equation. In our study, the first step is to create the primary temperature surface based on a linear relation with elevation. The objective is to obtain a more accurate temperature map in terms of aspect, suitable temperature, and distance to the stream. We use Multi-Criteria Evaluation with Weighted Linear Combination (MCE_WLC) to calibrate the spatial temperature distribution. The fuzzy memberships between the temperature and each factor (aspect, suitable temperature, distance to stream) are based on previous research works (Zhang & Welker, 1996; Zhang, 2005). The output, the normalized temperature surface, is set into fuzzy format (0-255). Since temperature is a major factor in determining vegetation composition, structure, and distribution, the normalized temperature surface plays an important role when we simulate vegetation dynamics in the spatial and temporal dimensions.
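The primary (pre-MCE) temperature surface from elevation alone can be sketched as follows in Python. The 10.1°C July valley temperature and the 3000-4500 m elevation range come from the text; the lapse rate of -0.6°C per 100 m is an assumed illustrative value, not the fitted coefficient of the empirical regression in Zhang (2005). The second function shows the rescaling to the 0-255 fuzzy byte range used for the normalized surface.

import numpy as np

def temperature_from_dem(dem, t_valley=10.1, z_valley=3000.0, lapse_rate=-0.006):
    """Primary July temperature surface from elevation alone, using a linear
    lapse-rate model (lapse_rate in degrees C per metre; assumed value)."""
    return t_valley + lapse_rate * (dem - z_valley)

def normalize_to_fuzzy(surface, lo, hi):
    """Rescale a surface to the 0-255 fuzzy byte range used by IDRISI."""
    clipped = np.clip(surface, lo, hi)
    return np.round(255.0 * (clipped - lo) / (hi - lo)).astype(np.uint8)

dem = np.random.uniform(3000.0, 4500.0, size=(100, 100))   # hypothetical 90 m DEM tile
t_surface = temperature_from_dem(dem)
t_fuzzy = normalize_to_fuzzy(t_surface, lo=-2.0, hi=12.0)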
Vegetation change comparison
We calculate the percent area change for each vegetation type (Fig. 7).
After we compose the final vegetation map, the highest suitability among the vegetation types is finally selected to represent the successful vegetation type in every cell. For instance, Dry Potentilla shrub and Dry Kobresia meadow expand into areas previously occupied by wet types of vegetation (Fig. 8). The Riverside Blysmus meadow, which requires moist conditions, disappears completely with a 3°C temperature increase. In general, the dry vegetation types demonstrate significant expansion from their original ranges and tend to become more dominant in the study area.
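The compositing rule described above, selecting in each cell the vegetation type with the highest simulated suitability, can be expressed compactly. The following Python sketch is a stand-in for the Equation (2) computation carried out by the IDRISI image calculator; the array shapes and the -1 flag for constrained cells are assumptions for illustration.

import numpy as np

def composite_vegetation_map(suitability_maps):
    """Stack the per-type suitability rasters and keep, in each cell, the
    index of the vegetation type with the highest suitability.  Cells where
    every suitability is zero (constrained areas) are flagged with -1."""
    stack = np.stack(suitability_maps, axis=0)      # shape: (n_types, rows, cols)
    winner = np.argmax(stack, axis=0)               # best type per cell
    winner[np.max(stack, axis=0) <= 0.0] = -1       # keep constrained cells empty
    return winner

# Usage with 10 hypothetical suitability layers on a 90 m grid
maps = [np.random.rand(200, 300) for _ in range(10)]
final_map = composite_vegetation_map(maps)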
Vegetation dynamics over time
The time dimension in the CA module of IDRISI is interpreted as a discrete time step, which corresponds to a temperature-time dimension. The CA module is integrated into the macro modeler, and uses the DynaLink module (Eastman, 2003) to simulate vegetation change within each iteration. The GMTI is defined as a spatial parameter in the deterministic model with a temperature increase of 0.1 or 0.3°C per decade.
Vegetation Dynamic Simulation Model (VDSM)
The VDSM is an example showing that spatial modeling can solve complex ecosystem problems in terms of having the capability to simulate spatial and temporal vegetation dynamics. In this case study, we model the spatial distribution of temperature and create GMTI scenarios as a spatial grid image. The vegetation dynamics are simulated in discrete time by applying CA in a Macro Modeler. In future studies, this model will be capable of modeling the water-time dimension, which would make the simulation more adaptable to global warming research. The VDSM could potentially be incorporated with a normal climate change model to assist in a long-term ecosystem simulation. Alternatively, the VDSM is capable of linking with a stochastic model of temperature change, with which we might be able to forecast an ecosystem disaster. The VDSM starts by evaluating how to solve a complex vegetation dynamics problem using CA. The VDSM is built by combining the MCE, Macro Modeler, CA, image calculator, Scalar, and Fuzzy functions in IDRISI. Defining a clear objective helps us to look into the many available modelers and functions in order to solve the problem within the IDRISI software environment. For instance, using the Decision Wizard, we create an objective and define a set of constraints, which eliminates the areas that are not natural vegetation. The spatial patterns of the factors (temperature, aspect, slope, and distance to stream) are created as continuous surfaces using Fuzzy functions. The transition rules in the Macro Modeler are defined as the maximum potential suitable vegetation in each cell as well as over the study area. The framework of the VDSM (Fig. 4) is a summary of the model structure and functionalities. The VDSM is flexible enough to be integrated with other sub-models that are available from GIScience technology. Figure 3 provides an example of the Macro Modeler incorporating cellular automata for the modeling of Caragana Shrub. In the case of a GMTI of 0.1°C, the suitability map for Caragana is weighted at 0.4 and the normalized temperature map is weighted at 0.6. These two factors are combined using the overlay module to produce a map of suitability for Caragana Shrub with an incremental temperature increase of 0.1°C. Ten iterations on one vegetation layer are simulated and updated after each iteration using the Cellatom and Dynalink modules. At the end of the iterations, we obtain the accumulated effect of a GMTI of 1°C on the vegetation layer. It would also be possible to incorporate a water layer with the vegetation layer and temperature layer, which can be linked by its weighted value in the VDSM. Thus, the VDSM not only provides a discrete time representation, but also demonstrates how we could develop our model for use with more complex scenarios (McGuire et al., 2006). Compositing a final vegetation map demonstrates the power of GIS analysis in IDRISI. The image calculator module in IDRISI successfully carries out the image calculation presented in Equation (2) at the grid cell level. The VDSM illustrates how we could study vegetation dynamics and model many other spatio-temporal phenomena. The VDSM integrates the suitability maps created from the MCE, the Macro Modeler, CA, and spatial environmental factors. The temperature-time dimension model is incorporated into the VDSM, which makes the temperature a spatial parameter that affects the vegetation dynamics over discrete time steps.
The simulation processes conducted by the Macro Modeler generate temperature increases of 0.1 to 0.3°C per decade, which represent the influences of the different global warming scenarios. The results in Figs. 7 and 8 demonstrate that a global temperature increase reduces moisture availability (Zhang & Welker, 1996), such that dry vegetation can invade areas previously occupied by vegetation adapted to moist conditions. The structure of the model is generally applicable to other situations, but the particular factors and constraints used in this model are unique to the Haibei alpine tundra ecosystem. Global warming has strong effects on alpine ecosystems in terms of altering the biomes and ecosystem biodiversity (Cao & Woodward, 1998; Ni, 2000; Song et al., 2005). The alpine ecosystem in the region of the Qinghai-Tibetan Plateau is sensitive and vulnerable to the changing climate (Zhang & Welker, 1996; Kato et al., 2006). The VDSM illustrates that altering the global mean temperature changes the alpine vegetation dynamics, in terms of having the capability to simulate spatial and temporal vegetation dynamics (Itami, 1994; Leemans, 2004). With the future integration of water conditions (Sala et al., 1992; Hodkinson et al., 1999) and disturbance regimes (Zhang, 1990; Cincotta et al., 1992; Zhang & Liu, 2003; Chapin et al., 2006) into the VDSM, the simulation could model more detailed mechanisms and complex feedbacks (McGuire, 2006) of the alpine tundra ecosystem under the changing climate.
Simulating Tibetan Plateau alpine vegetation distribution in response to global warming
Vegetation patterns on the plateau are very sensitive and vulnerable to global change, and the growth and distribution of plants depend heavily on local climate conditions (Hou et al., 1982). The undisturbed vegetation on the Tibetan Plateau provides an ideal natural laboratory for research on the sensitivity and responses of alpine vegetation to climate changes. The distributions of the major dominant species and of the vegetation types on the Tibetan Plateau have been investigated since the early 1950s (Anon., 1985). Based on previous research, Zheng (1996) depicted the physiography of the Tibetan Plateau. Ni (2000) simulated biomes on the Tibetan Plateau using the improved BIOME3 model (BIOME3-China) under the present climate conditions, as well as under a scenario with a CO2 concentration of 500 ppmv. BIOME3-China used nine plant functional types (PFTs); it did not include the PFTs especially occurring in alpine vegetation, such as cold graminoid or forb, and cushion forb. In this study, a combined biogeography-biogeochemistry model, BIOME4 (Kaplan et al., 2003), was improved to simulate alpine vegetation at the biome level. We apply the model to the present day and to the end of the 21st century in a scenario with an unchecked increase in atmospheric CO2 concentration. We compare the modelled present vegetation to a map of present-day natural vegetation distribution. The future scenario then allows us to assess the sensitivity of alpine vegetation to changes in atmospheric CO2 concentration and climate.
Model description
BIOME4, developed from the BIOME3 model (Haxeltine & Prentice, 1996), is an integrated carbon and water flux model that predicts the global steady state of vegetation distribution, structure, and biogeochemistry, taking account of interactions among these aspects. The BIOME4 model follows most of the algorithms and rules of BIOME3. It is driven by long-term averages of monthly mean temperature, sunshine and precipitation. In addition, the model requires information on soil texture and soil depth in order to determine water holding capacity and percolation rates. The CO2 concentration is specified. The improved BIOME4 model uses 12 plant functional types (PFTs) that represent broad, physiologically distinct classes, ranging from alpine vegetation (e.g. cushion forbs) to tropical rain forest trees. The PFTs are tropical broad-leaved evergreen, tropical broad-leaved rain-green, temperate broad-leaved evergreen, temperate broad-leaved summergreen, temperate coniferous evergreen, boreal coniferous evergreen, temperate summergreen conifer, temperate grass, temperate xerophytic shrub, cold shrub, cold graminoid or forb, and cushion forb (Ni, 2000; Kaplan et al., 2003; Yu, 1999). Each PFT is assigned a small number of bioclimatic limits which determine whether it could be present in a given grid cell (Table 6).
Table 6. Bioclimatic limits of each plant functional type in the model (Tc stands for mean temperature of the coldest month, Tw for mean temperature of the warmest month, GDD for growing degree-days on a 5°C base, GDD0 for growing degree-days on a 0°C base, α for the Priestley-Taylor coefficient of annual moisture availability and D for dominance class).
The computational core of BIOME4 is a coupled carbon and water flux scheme, which determines the seasonal maximum leaf area index (LAI) and maximizes NPP for any given PFT, based on a daily time step simulation of soil water balance and monthly process-based calculations of canopy conductance, photosynthesis, respiration and phenological state (Haxeltine & Prentice, 1996). To identify the biome for a given grid cell, the model ranks the tree and non-tree PFTs that were calculated for that grid cell. The ranking is defined according to a set of rules based on the computed biogeochemical variables, which include NPP, LAI, and mean annual soil moisture. The resulting ranked combinations of PFTs lead to an assignment to one of the biomes (Table 7).
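The screening of PFTs by bioclimatic limits and the subsequent ranking can be outlined as follows in Python. The numeric limits in the dictionary are illustrative placeholders, not the entries of Table 6, and the scoring passed to the ranking function is a toy stand-in for the NPP/LAI-based rules of BIOME4.

PFT_LIMITS = {
    # name: (Tc_min, Tc_max, GDD5_min, alpha_min) -- placeholder values only
    "temperate grass":      (-40.0, None, 350.0, 0.20),
    "cold graminoid/forb":  (None, -2.0,   50.0, 0.25),
    "cushion forb":         (None, -6.0,   20.0, 0.15),
}

def admissible_pfts(Tc, GDD5, alpha):
    """Return the PFTs whose bioclimatic limits are satisfied in a grid cell
    (Tc = mean temperature of the coldest month, GDD5 = growing degree-days
    on a 5 C base, alpha = Priestley-Taylor moisture coefficient)."""
    selected = []
    for name, (tc_min, tc_max, gdd_min, a_min) in PFT_LIMITS.items():
        if tc_min is not None and Tc < tc_min:
            continue
        if tc_max is not None and Tc > tc_max:
            continue
        if GDD5 < gdd_min or alpha < a_min:
            continue
        selected.append(name)
    return selected

def assign_biome(pft_scores):
    """Rank the admissible PFTs by a computed score (NPP-based in BIOME4) and
    return the dominant one; a biome is then read off from the ranked
    combination of dominant and sub-dominant PFTs."""
    if not pft_scores:
        return "barren / ice"
    return max(pft_scores, key=pft_scores.get)

# Hypothetical cell on the plateau
pfts = admissible_pfts(Tc=-12.0, GDD5=600.0, alpha=0.45)
biome = assign_biome({p: 1.0 for p in pfts})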
Climate scenarios
4.2.1 Modern climate data
A Chinese grid-based long-term mean climatology (temperature, precipitation and sunshine) database, constructed following Daly (1994, 2000), simulated the Chinese 0.05° × 0.05° gridded long-term mean climatology based on 2450 station mean values for monthly temperature, monthly percentage of potential sunshine hours, and monthly total precipitation throughout China and its adjacent regions. An atmospheric CO2 concentration of 340 ppmv was used to link BIOME4 to the present-day baseline simulation.
Future climate projection
The climatic conditions under increasing greenhouse gas concentrations and sulfate aerosols have been simulated by atmospheric general circulation models (AGCMs). These models are commonly used in the construction and application of climate change scenarios for climate change impact assessments (Neilson et al., 1998; Cramer et al., 2001). HadCM3 is a coupled atmosphere-ocean GCM developed at the Hadley Centre (Cox et al., 1999). The model was driven by computing the averages for 1931-1960 and for 2070-2099. We used the mean climate anomalies and then interpolated the anomalies to the high-resolution grid (Fig. 9). The anomalies were added to the baseline climatology to produce the climate fields used to drive the improved BIOME4 to assess the sensitivity of alpine vegetation to possible future climate changes. The emissions scenario (Anon., 1996) included an increase in atmospheric CO2 concentration from 340 to 500 ppmv and an increase in sulphate aerosol concentration for the 21st century simulation. The simulation is not intended as a realistic forward projection; it was used to illustrate a possible course of climate change and thus to give an impression of the sensitivity of alpine ecosystems to climate change.
Soil data
A digitized soil texture data set for the Tibetan Plateau was derived from Xiong & Li (1987). The soil texture information was interpolated to 0.05° × 0.05° grid cells. Eight soil types were classified.
Vegetation data
A map of potential natural vegetation of the Tibetan Plateau on 0.05° × 0.05° grid cells was derived from a digital vegetation map at a scale of 1 : 4 000 000 (Hou et al., 1982), which presents 113 vegetation units. These units were classified into nine categories based on the physical-geographical regions system of the Tibetan Plateau (Zheng, 1996). Each vegetation type was required to be floristically distinguishable to compare them with simulated vegetation maps (Fig. 10a, b).
Assessment of the simulated results
The agreement between simulated and natural vegetation maps or reconstructed vegetation maps was quantified by the ∆V value. ∆V is a nontrivial and attribute-based measure of dissimilarity between biomes (Sykes et al., 1999). Dissimilarity between two maps (∆V) was obtained by area-weighted averaging of ∆V over the model grid. The criterion of ∆V was cited (Sykes et al., 1999). ∆V values < 0.15 can be considered to point to excellent agreement between simulated and actual distributions, 0.15-0.30 is very good, 0.30-0.45 good, 0.45-0.60 fair, 0.60-0.80 poor, and > 0.80 very poor.
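The area-weighted averaging of ∆V and the agreement classes just described translate directly into a short Python sketch; the thresholds follow the criteria quoted from Sykes et al. (1999) above, while the per-cell ∆V values and cell areas passed in are placeholders.

import numpy as np

def area_weighted_delta_v(delta_v_cells, cell_areas):
    """Area-weighted average of the per-cell biome dissimilarity Delta-V."""
    delta_v_cells = np.asarray(delta_v_cells, dtype=float)
    cell_areas = np.asarray(cell_areas, dtype=float)
    return float(np.sum(delta_v_cells * cell_areas) / np.sum(cell_areas))

def agreement_class(delta_v):
    """Qualitative agreement classes for Delta-V following Sykes et al. (1999)."""
    if delta_v < 0.15:
        return "excellent"
    if delta_v < 0.30:
        return "very good"
    if delta_v < 0.45:
        return "good"
    if delta_v < 0.60:
        return "fair"
    if delta_v <= 0.80:
        return "poor"
    return "very poor"

# e.g. the values reported later in the text: 0.26 -> "very good", 0.32 -> "good"
print(agreement_class(0.26), agreement_class(0.32))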
Results and discussions
4.3.1 Present day
In a quantitative comparison between the simulated vegetation map and the modern natural vegetation map, 80.1% of grid cells (80100 cells) showed the same biome (Fig. 10a, b). The ∆V values of each biome suggest excellent agreement for montane broad-leaved forest, sub-alpine coniferous forest and montane desert, very good agreement for sub-tropical montane forest and alpine desert, good agreement for montane steppe, fair agreement for alpine meadow and alpine steppe, and poor agreement for montane shrub steppe (Table 8 and Fig. 10a, b).
Table 8. Area (× 1000 km2) and ∆V values for each biome of the Tibetan Plateau. A = areas of simulated biomes under the current climate with CO2 concentration = 340 ppmv; B = areas of simulated biomes under a scenario at the end of the next century with CO2 concentration = 500 ppmv; C = ∆V values for comparison between the simulated biome under the current climate and the actual vegetation distribution; D = ∆V values for comparison between the simulated biome under a scenario with a CO2 concentration of 500 ppmv and the simulated biome under the current climate with a CO2 concentration of 340 ppmv.
Sensitivity to future changes
In the illustrative simulation of a 'greenhouse climate', the potentially forested area of the Tibetan Plateau increased substantially (Fig. 10c). The area of sub-tropical montane forest is slightly reduced, with replacement by montane broad-leaved and sub-alpine coniferous forest. The simulated tree line is farther north in most sectors than at present. Trees potentially invade shrubland/meadow types where only fragments of forest exist today. Thus the simulations indicate a great sensitivity of the forest limit to CO2-induced warming (Lloyd & Rupp, 2003). The 'greenhouse climate' simulation also indicates major northward shifts of the alpine meadow biomes and a future reduction in the areas occupied by shrub-dominated montane steppe. The boundary between montane desert and alpine desert is found farther south than today. Our model results indicate that the extent of alpine desert would be reduced, while the area of montane desert would increase under the future climate scenario with an atmospheric CO2 concentration of 500 ppmv (Fig. 10c). The improved BIOME4 model captures the main features of vegetation distribution on the Tibetan Plateau, such as the position of the alpine forest limit, its species composition, regional differentiation in the vertical vegetation, and the extent of alpine meadow, alpine steppe, and alpine desert. The spatial differentiation of physical-geographical regions on the plateau is determined mainly by topographic configuration and atmospheric circulation. The climate is warm and humid in the southeast, and cold and arid in the northwest (Zheng, 1996). The reduction in temperature and precipitation toward the northwest is the most important reason for the simplification of species complexity in the vegetation. The vegetation types in this region change gradually from marine humid montane (tropical seasonal and rain forest, warm-temperate broad-leaved evergreen forest, temperate deciduous forest, and conifer forest) in the southeastern region, to continental semi-arid montane (temperate shrubland/meadow, temperate steppe, alpine meadow/shrubland, and alpine steppe) in the middle region, to continental arid montane (temperate desert, alpine desert, and ice/polar desert) in the north-western region (Ni, 2000). The improved BIOME4 model simulated the biome distribution with very good agreement for the central and northwestern regions of the Tibetan Plateau (∆V = 0.26 for non-forests), and with good agreement for the southeast (∆V = 0.32 for forests). Altogether 13.8% of the forest cells were simulated as non-forest due to misclassification, i.e. cold needle-leaved evergreen or cold deciduous forest cells were simulated as low and high shrub meadow, and 7.1% of non-forest cells were simulated as forest due to low and high shrub meadow cells being simulated as the tree-line forming biome. Under the control of both climate and complex physiognomy, the actual vegetation pattern on the Tibetan Plateau is a mosaic, especially for forest types in flat regions (Anon., 1980). But in our simulation, the model produced vegetation types with continuous distributions, leading to unrealistic patterns. The major mismatches (where > 20% of cells assigned to one biome in the natural vegetation map were assigned to a different biome in the simulation) were between adjacent biomes in climate space (Fig. 10a, b). The simulated boundary between alpine meadow and alpine steppe is somewhat too far south.
The natural vegetation map shows the boundary between alpine steppe and alpine desert farther northwest than the simulation, apparently because of lower temperature and humidity. Our model results cannot distinguish ice/polar desert from alpine desert (Fig. 10a, b). Vegetation patterns simulated by improved BIOME4 are similar to those modelled by Ni (2000) using BIOME3-China. In our simulation, shrubland and meadow were distinguished using additional PFTs specifically occurring in alpine vegetation (cold shrub, cold graminoid or forb, and cushion forb). Therefore, areas of montane steppe and alpine meadow simulated by improved BIOME4 are more precise.
In the simulation of future developments triggered by increased atmospheric CO2 concentration, both winter and summer temperatures rise throughout the region (Fig. 9). Simulated temperature anomalies in winter are generally higher than in summer. This trend is confirmed by the climate change observed on the Tibetan Plateau during recent years, i.e. from 1951 to 1990 (Tang et al., 1998). Thus the CO2 increase causes a large, year-round warming which produces a stronger effect on vegetation shifts. For example, there would be a reduction in sub-tropical montane forest, alpine meadow, alpine steppe and alpine desert, and an extension of montane broad-leaved forest, sub-alpine coniferous forest, montane shrub steppe, montane steppe and montane desert. These results are consistent with other reports that suggest a northward shift of the vegetation on the Tibetan Plateau under a warming climate (Ni, 2000; Zheng, 1996).
Acknowledgments
We thank Dr. Preminda Jacob and Chen Bo for assistance in the field at Haibei Alpine Meadow Ecosystem Station; Dr. Suzana Dragicevic, Verda Kocabas, and the | 2018-12-26T18:15:03.471Z | 2010-09-27T00:00:00.000 | {
"year": 2010,
"sha1": "ece66d9fd81269281686a852570c4f4deb1583d2",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.intechopen.com/citation-pdf-url/12178",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "68064135094ad34b80af1ad49ce28f3deec04b7a",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
17836247 | pes2o/s2orc | v3-fos-license | Discontinuous Galerkin Deterministic Solvers for a Boltzmann-Poisson Model of Hot Electron Transport by Averaged Empirical Pseudopotential Band Structures
The purpose of this work is to incorporate numerically, in a discontinuous Galerkin (DG) solver of a Boltzmann-Poisson model for hot electron transport, an electronic conduction band whose values are obtained by the spherical averaging of the full band structure given by a local empirical pseudopotential method (EPM) around a local minimum of the conduction band for silicon, as a midpoint between a radial band model and an anisotropic full band, in order to provide a more accurate physical description of the electron group velocity and conduction energy band structure in a semiconductor. This gives a better quantitative description of the transport and collision phenomena that fundamentally define the behaviour of the Boltzmann - Poisson model for electron transport used in this work. The numerical values of the derivatives of this conduction energy band, needed for the description of the electron group velocity, are obtained by means of a cubic spline interpolation. The EPM-Boltzmann-Poisson transport with this spherically averaged EPM calculated energy surface is numerically simulated and compared to the output of traditional analytic band models such as the parabolic and Kane bands, numerically implemented too, for the case of 1D $n^+-n-n^+$ silicon diodes with 400nm and 50nm channels. Quantitative differences are observed in the kinetic moments related to the conduction energy band used, such as mean velocity, average energy, and electric current (momentum).
Introduction
The Boltzmann-Poisson (BP) system is a semi-classical model for electric charge transport in semiconductors. The BP system can be used to describe the hot electron transport in modern semiconductor devices at nano-scales. As stated in [1], this model describes the long range interactions over charge carriers and the statistical evolution of their states, including an account of the quantum scattering events. The BP system treats charge carriers partly as classical particles by describing them by means of a time-dependent probability density function f (t, x, k) over the phase space (x, k), and using a Boltzmann equation to model the time evolution of the associated probability density function in the phase space. The quantum nature of the carriers is considered in several terms of the Boltzmann equation. The quantum crystal wave-vector k is used as the momentum phase space variable in the model. The model for the local velocity of the charge carriers is their respective group velocity v(k) = (1/ℏ) ∇_k ε(k), related to the electronic energy band function ε(k) of the considered semiconductor material. As usual, ℏ is the Planck constant divided by 2π. The collision integral operator models the quantum scattering mechanisms acting over the charge carriers. The flow of charge carriers is induced by the force over the electron charge −q, which is assumed to be given by a mean electric field, F(t, x) = −q E(t, x). This effective electric field, modeled by the Poisson equation, takes into account long range interactions made of both internal carrier self-consistent and external contributions, such as an applied potential (bias). Hence, time-dependent solutions of the BP system contain all the information on the transient of the carrier distribution and the time evolution of the total electric field. A phenomenological derivation of the BP model can be found in [1].
The semi-classical Boltzmann description of electron transport in semiconductors is, for a truly 3-D device, an equation in six dimensions plus time when the device is not in steady state. For a 1-D device model, under azimuthal symmetry assumptions, the dimensionality of the problem can be reduced to 3 dimensions plus time. The heavy computational cost is the main reason why the BP system had been traditionally solved numerically by means of Direct Simulation Monte Carlo (DSMC) methods [3]. However, after the pioneer work [4], in recent years, deterministic solvers to the BP system were proposed in [5,6,7,8,9,10,11,12]. These methods provide accurate results which, in general, agree well with those obtained from Monte Carlo (DSMC) simulations, often at a fractional computational time. Moreover, these type of solvers can resolve transient details for the electron probability density function f , which are difficult to compute with DSMC simulators.
The initial methods proposed in [7,8,9,10] using weighted essentially non-oscillatory (WENO) finite difference schemes to solve the Boltzmann-Poisson system, had the advantage that the scheme is relatively simple to code and very stable even on coarse meshes for solutions containing sharp gradient regions. However, a disadvantage of the WENO methods is that it requires smooth meshes to achieve high order accuracy, hence it is not very flexible for adaptive meshes.
Motivated by the easy hp-adaptivity (h refers to the size of the element, and p refers to the polynomial degree of the space generated by the basis functions) and the simple communication pattern of the discontinuous Galerkin (DG) methods for macroscopic (fluid level) models [13,14,15,16], it was proposed in [17,18] to implement a DG solver to the full Boltzmann equation, that is capable of capturing transients of the probability density function. In the previous work [17,18], the first DG solver for the BP system was proposed, and some numerical calculations were shown for one and two-dimensional devices. In [19], the DG-LDG scheme for the Boltzmann-Poisson system was carefully formulated, and extensive numerical studies were performed to validate the calculations.
Such a scheme models electron transport along the conduction band for 1D diodes and 2D double gate MOSFET devices with the energy band ε(k) = ε(|k|) given by the Kane band model (valid close to a local minimum), in which the relation between the energy ε and the wavevector norm |k| is given by the analytic formula, referred to as the Kane band model,

ε(k) (1 + α ε(k)) = ℏ² |k|² / (2 m*),     (1)

where m* is the effective mass for the considered material, Silicon for the case of this paper, and α is a non-parabolicity constant. This band model can be understood as a first order variation from the parabolic band model, given by the particular case α = 0.
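As a quick numerical illustration of the Kane relation (1) and of the associated radial group velocity v = (1/ℏ) dε/d|k| = ℏ|k| / (m*(1 + 2αε)), the Python sketch below evaluates both quantities and recovers the parabolic band for α = 0. The effective mass and the nonparabolicity value used here are commonly quoted figures for silicon but are assumptions of the example, not parameters taken from this paper.

import numpy as np

HBAR = 1.0545718e-34                 # J s
M0 = 9.1093837e-31                   # kg
MSTAR = 0.32 * M0                    # illustrative effective mass for Si (assumed)
ALPHA = 0.5 / 1.602176634e-19        # 0.5 eV^-1 nonparabolicity in J^-1 (assumed)

def kane_energy(k_norm, alpha=ALPHA, mstar=MSTAR):
    """Energy (J) from the Kane relation eps*(1 + alpha*eps) = hbar^2 k^2 / (2 m*)."""
    gamma = HBAR**2 * k_norm**2 / (2.0 * mstar)
    if alpha == 0.0:                             # parabolic band limit
        return gamma
    return (-1.0 + np.sqrt(1.0 + 4.0 * alpha * gamma)) / (2.0 * alpha)

def kane_group_velocity(k_norm, alpha=ALPHA, mstar=MSTAR):
    """Radial group velocity v = (1/hbar) d(eps)/d|k| for the Kane band."""
    eps = kane_energy(k_norm, alpha, mstar)
    return HBAR * k_norm / (mstar * (1.0 + 2.0 * alpha * eps))

k = np.linspace(0.0, 1.0e9, 200)                 # |k| in 1/m
eps_eV = kane_energy(k) / 1.602176634e-19        # band energy in eV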
In all of the aforementioned deterministic solvers previous to [20], the energy-band function ε(k) is given analytically, either by the parabolic band approximation or by the Kane non-parabolic band model. The analytical band makes use of the explicit dependence of the carrier energy on the quasimomentum, which significantly simplifies all expressions as well as implementation of these techniques in the collision operator. However, some physical details of the band structure are partly or totally ignored when using an analytic approximation, which hinders its application to transport of hot carriers in high-field phenomena (the so called hot electron transport) where the high anisotropy of the real band structure far from the conduction band minimum becomes important. Full band models, on the other hand, are able to provide an accurate physical description of the energy-band function, portraying this anisotropic band structure far from a conduction band minimum. One of the most commonly used methods to compute full bands is the empirical pseudopotential method (EPM). Such method gives a full band structure truncating the Fourier series in the k-space [22] for a crystal lattice potential model given as the sum of potentials due to individual atoms and associated electrons, with few parameters fitting empirical data such as optical gaps, absorption rates, etc, to finally compute the energy eigenvalues of the Schrödinger equation in Fourier space. A more detailed discussion of this method can be found in [22,23]. While full band models, as the ones given by EPM, have been widely used in DSMC simulators [3], their inclusion in deterministic solvers for the transport Boltzmann Equation is more recent; on [24], [25], full band models have also been combined with spherical harmonic expansion methods used to solve the Boltzmann equation numerically. However, high order accuracy is not always achieved by spherical harmonic expansion methods when energies vary strongly and only a few terms of the expansion are usually employed [26]. In contrast, the simulations for the BP system developed in our line of work, as in [17], [19], do not involve any asymptotics and so are very accurate for hot electron transport regimes. A DG method for full conduction bands BP models was proposed in [20], generalizing the solver that uses the Kane non-parabolic band and adapting it to treat the full energy band case. A preliminary benchmark of numerical results shows that the the Dirac delta functional in the energy variable can be applied in this case to reduce one dimension of the collision integral, and so an accurate high-order simulation with comparable computational cost to the analytic band cases is possible.
The work presented in this paper is focused on simulations for hot electron transport along a single conduction band for Si computed by radially averaging an EPM full band structure. The band obtained by this procedure represents a midpoint between a radial band model and a full band anisotropic model, having then both the desired advantages of a band model with a dependence on r and, at the same time having information of the anisotropic variation of the conduction band in the k-space, by means of the numerical average performed over the angular domain of the conduction band for a given k-sphere. The advantage of DG scheme in this framework is shown in the accurate calculation of the Dirac delta functions in the collisional integrals based on the weak formulation. The EPM-Boltzmann-Poisson transport with this spherically averaged EPM calculated energy surface is numerically simulated and compared to the output of traditional analytic band models such as the parabolic and Kane bands. Quantitative differences are observed in the moments, demonstrating the significance of incorporating the physical band models in the kinetic simulations.
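The spherical averaging described above, together with the cubic spline interpolation of the averaged band mentioned in the abstract for computing the radial derivative (and hence the group velocity), can be sketched as follows in Python. The `toy_band` function is a synthetic anisotropic surface standing in for an actual EPM conduction band, and the simple uniform quadrature in (cos θ, φ) is an assumption for illustration.

import numpy as np
from scipy.interpolate import CubicSpline

def spherical_average(band_func, r_values, n_theta=32, n_phi=64):
    """Average band_func(kx, ky, kz) over the sphere |k| = r for each r,
    using a uniform-in-(cos theta, phi) quadrature of the solid angle."""
    mu = np.linspace(-1.0, 1.0, n_theta)              # cos(theta) nodes
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    MU, PHI = np.meshgrid(mu, phi, indexing="ij")
    sin_t = np.sqrt(1.0 - MU**2)
    averaged = np.empty(len(r_values), dtype=float)
    for i, r in enumerate(r_values):
        kx, ky, kz = r * sin_t * np.cos(PHI), r * sin_t * np.sin(PHI), r * MU
        averaged[i] = np.mean(band_func(kx, ky, kz))
    return averaged

def toy_band(kx, ky, kz):
    """Synthetic anisotropic 'band' used only as a placeholder for EPM data."""
    return 0.5 * (kx**2 + ky**2) + 1.5 * kz**2 + 0.05 * kx**2 * ky**2

r = np.linspace(0.0, 1.0, 50)
eps_avg = spherical_average(toy_band, r)

# Cubic spline of the averaged band; its derivative gives the radial part of
# the group velocity (up to the 1/hbar factor) at arbitrary r.
spline = CubicSpline(r, eps_avg)
deps_dr = spline.derivative()(r)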
The rest of the paper is organized as follows: in Sections 2 and 3, we present the BP model, and the transformed equations under spherical coordinates in k. Section 4 contains the details of the computations of the spherical average of a local EPM conduction band. The DG formulation is presented in Section 5. The device configuration and numerical results are discussed in Sections 6 and 7. We conclude the paper in Section 8. Some technical details of the schemes are given in the Appendix.
The Boltzmann-Poisson problem
We consider the probability density function (pdf) for electrons along a single conduction band, denoting it by f (t, x, k). We denote by Ω_x the physical domain in the x-space, and similarly Ω_k the domain in the k-space. Following [1], we recall the classical (strong) formulation of the initial value problem for the BP system with boundary conditions, for a pdf of electrons on a single conduction band,

∂f/∂t + (1/ℏ) ∇_k ε(k) · ∇_x f − (q/ℏ) E(t, x) · ∇_k f = Q(f),

with the linear collision operator Q(f) describing the scattering over the electrons, where several quantum mechanisms can be taken into account. In the low density approximation, the collisional integral operator becomes linear in f, having the form

Q(f)(t, x, k) = ∫_{Ω_k} [ S(k', k) f (t, x, k') − S(k, k') f (t, x, k) ] dk',

where S(k, k') is the scattering kernel, representing non-local interactions of electrons with a background density distribution.
In the case of silicon, for example, one of the most important collision mechanisms is electron-phonon scattering due to lattice vibrations of the crystal, which is modeled by acoustic (assumed elastic) and optical (non-elastic) non-polar modes, the latter with a single frequency ω_p, given by

S(k, k') = K [ (n_q + 1) δ(ε(k') − ε(k) + ℏω_p) + n_q δ(ε(k') − ε(k) − ℏω_p) ] + K_0 δ(ε(k') − ε(k)),

with K, K_0 constants for silicon.
The symbol δ indicates the usual Dirac delta distribution corresponding to the well-known Fermi Golden Rule [2]. The constant n_q is related to the phonon occupation factor, n_q = [exp(ℏω_p/(k_B T_L)) − 1]^{-1}, where k_B is the Boltzmann constant and T_L = 300 K is the constant lattice temperature. In the Boltzmann equation, the energy band function ε(k) is often given by a simple band model, e.g. the Kane band model (1) or the parabolic band model. Since this term appears in both the transport and collision parts, it is evident that it plays an important role in the numerical simulation. We will discuss the treatment of this term further in later sections.
The self-consistent field E(t, x) is obtained from the Poisson equation

∇_x · (ε_r(x) ∇_x V) = (q/ε_0) [ρ(t, x) − N_D(x)],   E = −∇_x V,

where the parameter ε_0 is the dielectric constant in a vacuum and ε_r(x) denotes the relative dielectric function, which depends on the material. The electron charge density ρ(t, x) is given by the integral over the domain Ω_k in the k-space,

ρ(t, x) = ∫_{Ω_k} f(t, x, k) dk,

and N_D(x) is the doping profile, representing an external fixed density of positive charge carriers.
The evolution is subject to an initial condition, while suitable boundary conditions are imposed for f on ∂Ω_x × Ω_k and on Ω_x × ∂Ω_k, and for V on ∂Ω_x.
The boundary ∂Ω_x is usually split, for the Poisson equation, into Dirichlet ∂Ω_x^D and Neumann ∂Ω_x^N parts. Examples of boundary conditions used for the Boltzmann equation include [1]: • Charge neutrality [35], [10], [18], [19]: this condition is usually employed at the device contacts (Dirichlet boundaries ∂Ω_x^D).
• Null x-flux: where n(x) is the normal to the surface ∂Ω N x at the point x. This condition is imposed on the part of the physical domain with an insulating layer (Neumann boundaries ∂Ω N x ).
• Vanishing boundary conditions in the k-space: These conditions correspond to negligible densities for large energy values. We use these vanishing conditions for the Boltzmann Equation in our work. We will just mention that, if we had chosen Ω k as the first Brillouin zone, then periodic boundary conditions in the k-space would be the correct physical conditions. However, it is difficult to apply these conditions on the complex shape of the boundary of a truncated octahedron, which is the shape of the first Brillouin zone for Silicon and Germanium crystals.
Boundary conditions related to the Poisson Equation could be: • Applied potential (bias): This condition is imposed where we have device contacts (Dirichlet boundaries).
• Neumann boundary conditions for the electric potential: where n(x) is the normal to the surface ∂Ω N x at the point x. This condition is imposed on the part of the physical domain with an insulating layer, which is a Neumann boundary.
It is important to mention that the contact boundaries ∂Ω D x for the Boltzmann and Poisson equations must be the same.
Boltzmann equation in spherical coordinates for the k-vector
We show here the Boltzmann equation with the momentum k in spherical coordinates presented in [20]. As opposed to the previous work in [19], the coordinate transformation based on the Kane analytic band relation proposed in [5] can no longer be used for an energy band that does not assume this analytic Kane band model and that takes into account anisotropy for ε(k). The spherical coordinate system is used in k space instead of Cartesian coordinates because of the higher resolution demands near the conduction band minimum (chosen as the origin k = 0), and large cells in k-space are sufficient for describing the tail of the distribution function accurately.
The following change of variables into dimensionless quantities is introduced for a general problem, where the spherical coordinate transformation maps the k-domain Ω_k onto the set Ω of the (r, µ, ϕ) space. Typical values for length and time are given by ℓ* = 10^-6 m and t* = 10^-12 s, together with a characteristic voltage scale. A new unknown "weighted" pdf function Φ is then obtained by multiplying the pdf f by the Jacobian of the spherical k-transformation; it can be interpreted as the probability density function of an electron being in the neighborhood of the phase-space state (x, y, z, r, µ, ϕ) at time t.
Hence, writing the collisional integral in spherical coordinates and multiplying the Boltzmann equation by the Jacobian associated with the k-spherical transformation yields the Transformed Boltzmann Equation (TBE) for the unknown Φ, written in divergence form with transport terms a = (a_1, a_2, a_3, a_4, a_5, a_6)^T and a linear collision operator whose scattering kernel accounts for acoustic and optical electron-phonon interaction, the main scattering mechanisms in silicon. The definitions of the constants involved and the details of the derivation of the TBE are collected in the appendix. The dimensionless Poisson equation takes an analogous form, with the charge density obtained by integrating Φ(t, x, y, z, r′, µ′, ϕ′) over dr′ dµ′ dϕ′.
Geometrical interpretation of the force terms in the TBE
Although the terms (a_1, a_2, a_3), related to the transport in the x-space due to the electron group velocity in the TBE, can be easily interpreted as just the gradient ∇_k ε(k) expressed in spherical coordinates, the terms (a_4, a_5, a_6), related to the transport in the k-space due to the electric field, are less transparent. A simple expression for them can be identified.
These transport terms express the acceleration field induced by E in spherical coordinates, as they are related to the negatives of the directional cosines of E with respect to the unit vectors ê_r, ê_µ, ê_ϕ. Hence, the TBE (11) is written in conservative, divergence form, as a flow in the k-space due to the electric field decomposed in each of the orthogonal components of the spherical k-coordinates.
This can be easily derived from the expression for the divergence in general curvilinear coordinates, applied to the particular case of spherical coordinates k(r, µ, ϕ). This calculation is included in the Appendix.
Computation of the spherical average of a local EPM conduction band for silicon
The motivation of this work is to incorporate numerically, in a DG solver of the BP system, electronic conduction bands whose values are obtained by the radial averaging of the full band structure given by a local empirical pseudopotential method (EPM) around a local minimum of the conduction band for silicon. Performing the radial averaging simplifies the numerical treatment. A local EPM code developed by Chelikowsky et al. [22] is adapted to compute the conduction band structure of silicon in its Brillouin zone in the k-space.
The local pseudopotentials are used in this EPM code to mimic a silicon semiconductor with a diamond crystal structure [23].
A color plot of the local EPM conduction band on the first octant of the k-space enclosing the Brillouin Zone for silicon is shown in Fig. 1.
The calculated EPM band structure ε(k_x, k_y, k_z) = ε(k(r, µ, ϕ)) is then averaged over the k-spheres of radius r_k around the local energy minimum point k_0 = (0.8562, 0, 0)·2π/a (where a is the lattice constant for silicon) by means of a 10-point Gaussian quadrature on the angular space. Using the symmetry of the silicon conduction band, the integration only needs to be performed on a sub-domain of the (µ, ϕ) space. The values of the radius of these k-spheres are the grid points r_k in the DG-BP simulations. In this way we obtain a band model that has a dependence on r and, at the same time, uses the information of the anisotropic energy band values in the angular k-domain via its numerical average.
As a midpoint between a radial band model and a full band anisotropic model, it has the desired advantages of both. A cubic spline interpolation is then performed, using the numerical values of the radial average ε̄(r) at the midpoints of the r-cells, and the derivative of this spline interpolation is used to obtain a numerical approximation of the derivative dε̄/dr at these r-midpoints.
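A minimal sketch of this averaging and interpolation step is given below. It assumes a placeholder callable epm_band(kx, ky, kz) standing in for the EPM data, and the quadrature order, r-grid, and band minimum are illustrative choices rather than the exact settings of the actual solver.

import numpy as np
from scipy.interpolate import CubicSpline

def spherical_average_band(epm_band, r_mid, k0, n_quad=10):
    """Average epm_band over the (mu, phi) angles on each k-sphere around k0."""
    mu_nodes, mu_w = np.polynomial.legendre.leggauss(n_quad)    # mu in [-1, 1]
    phi_nodes, phi_w = np.polynomial.legendre.leggauss(n_quad)
    phi_nodes = np.pi * (phi_nodes + 1.0)                       # map to [0, 2*pi]
    phi_w = np.pi * phi_w
    eps_bar = np.empty_like(r_mid)
    for i, r in enumerate(r_mid):
        k_mag = np.sqrt(r)                                      # |k - k0| proportional to sqrt(r)
        acc, wsum = 0.0, 0.0
        for mu, wm in zip(mu_nodes, mu_w):
            s = np.sqrt(1.0 - mu**2)
            for phi, wp in zip(phi_nodes, phi_w):
                kx = k0[0] + k_mag * s * np.cos(phi)
                ky = k0[1] + k_mag * s * np.sin(phi)
                kz = k0[2] + k_mag * mu
                acc += wm * wp * epm_band(kx, ky, kz)
                wsum += wm * wp
        eps_bar[i] = acc / wsum
    return eps_bar

# Illustrative use with a toy (parabolic) band standing in for the EPM data.
toy_band = lambda kx, ky, kz: kx**2 + ky**2 + kz**2
r_mid = np.linspace(0.01, 1.0, 24)                  # r-cell midpoints of the DG grid
eps_bar = spherical_average_band(toy_band, r_mid, k0=(0.0, 0.0, 0.0))
spline = CubicSpline(r_mid, eps_bar)
deps_dr = spline.derivative()(r_mid)                # d(eps_bar)/dr used by the transport terms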
The spherical averages of the EPM conduction band ε̄(r) vs. r ∝ |k − k_0|², with the related spline interpolation for Si, are shown in Fig. 2 (in red). The parabolic (blue) conduction band, which is a linear function of r, and the Kane (green) analytic conduction band model for silicon are plotted as well.
It can be observed that there is a quantitative difference between the different energy band models.
The spherical average of the EPM band is below the Kane band model, which is below the Parabolic band.
We show in Fig. 3 the relative l² error norm of the spherical average EPM band with respect to the local EPM data ε(r, µ, ϕ), as a function of r, given by the formula

e(r) = ( ∫ |ε(r, µ, ϕ) − ε̄(r)|² dµ dϕ )^{1/2} / ( ∫ |ε(r, µ, ϕ)|² dµ dϕ )^{1/2}.

It can be observed in Fig. 3 that the relative l² error increases with r, which indicates that far away from the local minimum k_0 the anisotropy of the conduction band becomes increasingly more important.
DG formulation for the TBE and the Poisson equation
In this section, we will discuss the DG schemes based on the radially symmetric band. The scheme is implemented based on a piecewise linear polynomial approximation.
Domain and Finite Element Space
We consider a 2-D rectangular domain in the physical space and a rectangular domain Ω_k in momentum space, both partitioned into simple rectangular cells Ω_ijkmn. The test functions ψ(x, y, r, µ, ϕ) belong to the linear function space V_h^1, where P_1(Ω_ijkmn) is the set of polynomials of degree at most 1 on the cell Ω_ijkmn.
A set of piecewise linear basis functions for V_h^1 in the open cell Ω_ijkmn is chosen. Hence, in the cell Ω_ijkmn, we approximate our weighted pdf Φ by a piecewise polynomial Φ_h of first degree, and the charge density is computed accordingly under this piecewise linear approximation. The problem is then reduced to finding, by means of our numerical scheme, the unknown time-dependent coefficients of Φ_h on each cell.
Discontinuous Galerkin Formulation for the TBE
The corresponding weak DG formulation and its approximation consist in finding Φ_h ∈ V_h^1 such that, for any test function v_h ∈ V_h^1 and any generic cell K of the decomposition of Ω_x × Ω_k, a discrete weak equation is satisfied, where dσ = dx dy dr dµ dϕ and the F^± terms are boundary integrals over the four-dimensional boundary surfaces associated with each 5-dimensional volume element Ω_ijkmn.
Calculation of the Collision terms in the DG formulation
Here we will show the details of the calculation of the collisional integrals, since they are the most demanding part of the simulation. For completeness, the calculation of the transport terms is also reported and collected in Appendix B.
Denote by K_kmn the k-space cell with radial extent [r_{k-1/2}, r_{k+1/2}], and by ξ^p_{k,m,n}(r, µ, ϕ), p = 0, 1, ..., 5, the k-space basis functions on Ω_ijkmn = Ω_I, where we define x⃗ = (x, y), r⃗ = (r, µ, ϕ), r⃗′ = (r′, µ′, ϕ′), and dr⃗ = dr dµ dϕ (25). Then, in a piecewise continuous linear approximation of Φ, we have (almost everywhere) that Φ is given on each cell by a linear combination of these basis functions. Because the phonon collision scatterings only consider the Fermi Golden Rule [2] and the spherical coordinates localize the negative part of the operator, there is a natural split of the collision operator into gain and loss terms of probability density rates.
Gain term of the collisional operator. In the weak formulation, the gain term, evaluated on the piecewise linear approximation of Φ, is multiplied by the test function η^q_{ī,j̄}(x⃗) ξ^q_{k̄,m̄,n̄}(r⃗) and integrated over the domain Ω_Ī with respect to (x⃗, r⃗). The integration with respect to x and y produces the factor ∆x_ī ∆y_j̄ and a matrix β_pq, so that each contribution has the form of W^p_{ijkmn}(t) β_pq ∆x_ī ∆y_j̄ multiplied by a double integral, over the cells K_kmn and K_k̄m̄n̄, of the scattering kernel S(r⃗′, r⃗) against the basis functions ξ^p_{k,m,n}(r⃗′) and ξ^q_{k̄,m̄,n̄}(r⃗), weighted by the √r′ factor coming from the Jacobian.
Loss term of the collisional operator. The weak formulation of the loss term of the collisional operator is treated analogously. Using the linear approximation of Φ given by (30), the integral (34) reduces to a sum over p = 0, ..., 5 of terms of the form W^p_I(t) β_pq ∆x_ī ∆y_j̄ multiplied by integrals, over the k-cells, of √r S(r⃗, r⃗′) against the products of basis functions ξ^p_{k,m,n}(r⃗) ξ^q_{k,m,n}(r⃗) (35). In the case of an energy band function with radial dependence ε(r), the radial energy band function can be projected onto the space of piecewise linear functions of r; after this projection, we can calculate the collision integrals involving a delta distribution with the piecewise linear function in its argument. In order to perform the integrations involving √r numerically by means of Gaussian quadrature, the change of variables r = s² is applied, so that the functions of s to be integrated are just polynomials.
The algorithm for time evolution
Starting with given initial and boundary conditions, the algorithm advances from t_n to t_{n+1} in the following way:
1. Compute the density ρ.
2. Solve the Poisson equation and find the electric field E.
3. Compute the transport terms a_i.
4. Compute the collision part.
5. Solve the (large) system of ordinary differential equations for the coefficients of the linear approximation of Φ_h (which are obtained from the DG formulation), by using a TVD Runge-Kutta scheme.
6. Repeat the previous steps as needed.
Algorithm 1: DG-BP Algorithm
Data: Given initial and boundary conditions
Result: Time evolution of the probability density function
initialization;
while t_l < T_max do
Compute density ρ(x, t_l);
Solve the Poisson equation to find the electric field E(x, t_l);
t_{l+1} = t_l + ∆t_{l+1};
Compute the collision terms;
Compute the transport terms a_i(x, k, t_{l+1});
Solve the large system of ODEs for the coefficients of Φ_h|_{t_{l+1}} (obtained from the DG formulation) by a TVD Runge-Kutta scheme;
Repeat the previous steps if needed.
end

The n+-n-n+ silicon diode

We consider the symmetric case of a 1D n+-n-n+ diode, in which the conduction band energy function is assumed to be of the form ε(|k|) = ε(r). This assumption preserves azimuthal symmetry for the problem if the initial condition is independent of the azimuthal direction ϕ. Therefore, under these assumptions the problem has azimuthal symmetry in k for all times t ≥ 0, so it suffices to consider k = k(r, µ), reducing the dimensionality of the problem to 1-D in x-space and 2-D in k = k(r, µ); the problem then becomes 3-D plus time. Assuming E has null y and z components, this symmetric case reduces the TBE to a simpler form in which the terms a_1, a_4 and a_5 are simplified.
The Poisson equation reduces correspondingly to its one-dimensional form in x.
For this case, both the potential and the electric field have analytic integral solutions, which are easily computed numerically for the piecewise linear approximation of the density ρ_h. Such an electric field solution is then projected onto the V_h^1 space of piecewise linear polynomials.
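As an illustration of this step, the sketch below integrates a 1-D Poisson equation directly for a given discrete density; the uniform grid, constant permittivity, lumped charge constant, and boundary treatment (grounded left contact plus an applied bias on the right) are simplifying assumptions of ours and not necessarily those of the actual device solver.

import numpy as np

def poisson_1d(x, rho_minus_doping, eps, v_left=0.0, v_right=0.0):
    """Integrate d/dx(eps dV/dx) = c*(rho - N_D) twice on [x0, xL].
    Returns the potential V(x) and field E(x) = -dV/dx (arbitrary units)."""
    c = 1.0                                              # q/eps_0 lumped into the source
    source = c * rho_minus_doping / eps
    # First integration: dV/dx up to an additive constant (trapezoid rule).
    dVdx = np.concatenate(([0.0], np.cumsum(0.5 * (source[1:] + source[:-1]) * np.diff(x))))
    # Second integration: V up to a linear term, anchored at v_left.
    V = v_left + np.concatenate(([0.0], np.cumsum(0.5 * (dVdx[1:] + dVdx[:-1]) * np.diff(x))))
    # Fix the remaining free constant so V(xL) matches the applied bias.
    alpha = (v_right - V[-1]) / (x[-1] - x[0])
    V = V + alpha * (x - x[0])
    E = -np.gradient(V, x)
    return V, E

# Toy n+ - n - n+ profile on a 1 micron device (dimensionless numbers).
x = np.linspace(0.0, 1.0, 201)
doping = np.where((x > 0.3) & (x < 0.7), 0.004, 1.0)     # channel between the junctures
rho = np.full_like(x, 0.9)                               # an electron density snapshot
V, E = poisson_1d(x, rho - doping, eps=1.0, v_right=1.0)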
Device specifics
We consider first a diode of 1 µm length, with an n-channel of 400 nm length, doping of 5 × 10^23 m^-3 in the n+ region and 2 × 10^21 m^-3 in the n region. We also consider a 0.25 µm diode with a 50 nm channel, with n+ doping of 5 × 10^24 m^-3 and n doping of 1 × 10^21 m^-3.
Numerical simulations
The space V_h^1 of piecewise linear polynomials in (x, r, µ), with time-dependent coefficients, is used as both the trial and test space in our DG scheme. The input data of the numerical simulations include a cut-off in the k-space, Φ(t, x, r_max, µ) = 0.
No boundary conditions are needed on r = 0, µ = ±1. Upwind fluxes in r and µ are analytically zero at these boundaries, since they are related to points in k-space such as the origin and the poles, which are transformed into boundaries when applying the spherical change of coordinates.
It is very simple to verify that a 4 = 0 at r = 0, and a 5 = 0 at µ = ±1.
Numerical results
The BP transport along the EPM spherical average energy band is numerically simulated by means of the DG scheme described above. We use a mesh as in [19], which gives better resolution close to the first juncture at x = 0.3 µm, and which also has a finer refinement close to the pole in the direction of the electric field. We show plots of the average velocity, the average energy, the momentum (proportional to the current), and the electric field and potential, for both the 400 nm channel and 50 nm channel diodes.
There is a clear quantitative difference, particularly in kinetic moments such as the average velocity, average energy, and momentum (current), whose values depend on the energy band model used in each case. This should be expected, since these kinetic moments are averages of quantities related to ε(k) or its partial derivatives in k-space.
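For completeness, and as we understand them, the kinetic moments referred to here are the usual averages against the distribution function (up to the dimensionalization constants of the solver):

\[
\rho(t,x)=\int_{\Omega_k} f\,d\mathbf{k},\qquad
\langle v\rangle(t,x)=\frac{1}{\rho}\int_{\Omega_k} \frac{1}{\hbar}\,\nabla_{\mathbf k}\varepsilon(\mathbf k)\, f\,d\mathbf{k},\qquad
\langle \varepsilon\rangle(t,x)=\frac{1}{\rho}\int_{\Omega_k} \varepsilon(\mathbf k)\, f\,d\mathbf{k},
\]

with the momentum (proportional to the current) given by the product ρ⟨v⟩. This makes explicit why the choice of ε(k) and of its interpolated derivatives directly shifts the computed moments.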
Current -Voltage Characteristic Curves
We also perform a study of the current-voltage characteristic curves for the considered pair of devices with our three different band models. We show in Fig. 13 the plots of the Momentum (U) vs. Bias (V) for voltages of 0, 0.25, 0.5, 0.75, 1.0 V, with the mesh described in the previous section, for the 400 nm channel device, and in Fig. 15 the respective plots for the 50 nm channel device.
We observe a clear quantitative difference between the currents according to the used band model.
The currents predicted using the parabolic band are larger than the ones obtained using the Kane band, and the EPM radial average predicts lower current values than the other two models. We also compute the IV curves with a refined mesh in the r variable, subdividing the intervals of the original mesh into halves, and with a coarser mesh obtained by joining pairs of subsequent intervals, using the same domain in all cases. The current-voltage characteristics predicted with the coarser and finer r-meshes are quite close to the predictions with the original mesh.
We also compare, in Figures 14 and 16, the momentum predicted with a finer r-mesh for our set of biases with the differences between these moments and the ones predicted by our original mesh (U − U_R) and by a coarser mesh (U_C − U_R). For the 400 nm channel, we observe in Fig. 14 that the current values for nonzero voltages are about two orders of magnitude larger than the difference between those currents for the coarser and the finer mesh, and about three orders of magnitude larger than the difference between those currents for the original and the finer mesh. For the 50 nm channel, we observe in Fig. 16 that the current values for nonzero voltages are about two orders of magnitude larger than the difference between those currents for the coarser and the finer mesh, as well as than the difference between those currents for the original and the finer mesh. The differences are consistently lower for the EPM average band when compared to the Kane and parabolic bands.
The set of plots presented for our devices points to several facts. In addition to the clear quantitative difference between the current-voltage characteristics predicted according to which band model is used, with the EPM radial average currents below the Kane predictions and these below the parabolic currents, the fact that the difference in the IV curves obtained with different meshes is at least two orders of magnitude below the current values indicates that IV curves of comparable quantitative accuracy can be obtained with coarser meshes in the r-variable and a reduced computational effort. We also notice that, for nonzero bias, the values of the differences U − U_F and U_C − U_F between predicted currents are always positive, that is, the coarsening of the mesh comes with a slight increase of the predicted current. Except for one point in Fig. 16, related to the Kane band with a 1.0 V bias, the differences U_C − U_F in currents with the coarser mesh are larger than the difference U − U_F between the current predicted with the original mesh and the refined mesh.
Moreover, since we observe that both of these differences, U_C − U_F and U − U_F, are lower for the EPM average band model, the variations in the current prediction are smaller under coarsening of the mesh when using the EPM average as a band model.

Captions of Figures 14 and 16: momentum predicted with the finer r-mesh and its differences with the original mesh (U − U_R) and with a coarser mesh (U_C − U_R), for the 400 nm and 50 nm channel devices, respectively. In both devices the current values for nonzero voltages are about two orders of magnitude larger than the mesh-to-mesh differences, and the differences are consistently lower for the EPM average band when compared to the Kane and parabolic bands.
Conclusions
The implementation of a spherical average of an EPM full band structure as a conduction energy band model in a DG solver for Boltzmann-Poisson represents a computational strategy that is a midpoint between a radial and an anisotropic full band model. The difference in the values of the energy band and its derivatives, introduced via the spherical average of EPM band structure values over k-spheres and the spline interpolation of the derivatives, produces a quantitative correction in the kinetic moments (averages) related to the energy band model, such as the average velocity, energy, and particularly the momentum (proportional to the current) given by our solver. This emphasizes the importance of an accurate physical modeling of the energy band structure and its partial derivatives, as these functions drive the mechanisms of collision (electron-phonon scattering) and transport (via the electron group velocity) whose balance is at the core of the modeling of electron transport in semiconductors by the Boltzmann-Poisson system. It also highlights the importance of band models and their features (anisotropy, numerical approximation of their values and interpolation of their derivatives, for example) in the BP numerical modeling of electron transport via DG schemes.
Future work will focus on the implementation of anisotropic EPM full bands in the DG solver for BP, and on developing a positivity-preserving DG numerical scheme, along with error estimates for the BP system under consideration.
A TBE in Divergence Form for k in Spherical Coordinates
The divergence in k in the standard spherical coordinates (|k|, θ, ϕ) for a vector field A(x, k, t), with the orthogonal decomposition A = A_r ê_r + A_µ ê_µ + A_ϕ ê_ϕ, expressed in terms of the modified spherical coordinates (r, µ, ϕ) used in the TBE, is obtained from (42) by taking into account that |k|² = (2 m* k_B T_L / ℏ²) r and µ = cos θ, from which it follows that ê_µ = −ê_θ, A_µ = −A_θ, and ê_|k| = ê_r, A_|k| = A_r. We then obtain the divergence in the modified spherical coordinates used in this work, and the k-transport term in the TBE (11) can be expressed in the divergence form (43) for a suitable vector field A. The formula (43) for A can be interpreted geometrically as a flow of the electric field in the orthogonal directions of the spherical coordinate geometry used.

B Details of calculation of the transport terms

The following notation for boundary terms will be needed. We denote by Φ̂ the upwind value of Φ at cell interfaces, and write ξ^p_{k,m,n±1}|_{n±1/2} = ξ^p_{k,m,n±1}(r, µ, ϕ_{n±1/2}), p ∈ {0, 1, ..., 5}.
We consider first the weak formulation of the transport terms in space, namely ∂/∂x (a_1 Φ) and ∂/∂y (a_2 Φ). Due to the upwind flux rule, we have to consider two cases depending on the sign of a_1. In the sequel, the symbol ≈ will denote the approximation of the given integral terms. If a_1(r⃗) > 0 in K_kmn, for q = 0, ..., 5, one obtains boundary contributions involving integrals of the form ∫_{Ω_I} a_1(r⃗) η^p_{i,j}|_{i+1/2} ξ^p_{k,m,n}(r⃗) η^q_{i,j}|_{i+1/2} ξ^q_{k,m,n}(r⃗) dx⃗ dr⃗, multiplied by the coefficients W^p_I(t) and a factor involving ∆x_i. If a_1(r⃗) < 0 in K_kmn, for q = 0, ..., 5, the upwind coefficients W^p_{i+1,j,k,m,n}(t) are used instead, with the analogous integrals evaluated at the interfaces i ± 1/2. We consider now the weak formulation for the transport terms in momentum space, ∂/∂r (a_4 Φ) + ∂/∂µ (a_5 Φ) + ∂/∂ϕ (a_6 Φ), advected by the electric field. It can be noticed in Eq. (11) that all these terms have the same structure. By the upwind flux rule, if a_*(r⃗) E_*(t, x) > 0 on ∂^±_r Ω_I, for q = 0, ..., 5, the corresponding upwind values are taken. | 2015-12-16T22:51:53.000Z | 2015-12-16T00:00:00.000 | {
"year": 2015,
"sha1": "9974f4a1101fb955dc0681ff8a645fe412f16315",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1512.05403",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "cbb9a3c77ab7e82e8c18ed5d01a8d69ee818a22b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Computer Science",
"Mathematics"
]
} |
259637067 | pes2o/s2orc | v3-fos-license | Contact and gustatory effects of Spinosad on inhibition of population build-up of Cryptolestes pusillus (Schon.) (Coleoptera: Cucujidae) after different storage periods in wheat
The adult population of Cryptolestes pusillus (Schon.) was significantly (P < 0.001) reduced after 3-, 6- and 9-month storage periods at different concentrations of Spinosad compared with the control medium. Spinosad at all concentrations reduced the mean numbers of larvae (5.33 ± 0.72 to 52.00 ± 0.94, 5.67 ± 1.36 to 32.00 ± 1.41 and 2.67 ± 0.54 to 54.33 ± 0.72), pupae (3.33 ± 1.09 to 26.00 ± 0.94, 3.33 ± 0.72 to 26.00 ± 1.25 and 2.33 ± 0.29 to 16.33 ± 0.27) and adults (5.33 ± 1.19 to 62.00 ± 0.47, 2.33 ± 0.27 to 72.00 ± 1.25 and 2.33 ± 1.09 to 58.00 ± 1.25) after 3, 6 and 9 months of storage, respectively. The percent reduction relative to control (PRC) was always higher at higher concentrations and lower at lower concentrations in every case. The total population of C. pusillus ranged from 152.00 ± 1.15 to 790.33 ± 2.60 after the 1st generation and from 2.67 ± 1.09 to 20.67 ± 3.34 after the 2nd generation. Spinosad effectively reduced the survivability and population of C. pusillus in the treated wheat.
Introduction
Control of stored product insects is considered best achieved through an integration of physical, chemical, and biological methods (Hagstrum et al. 1999, Phillips and Thorne 2010). In this context, in storage facilities where light is absent, the bacterial insecticide Spinosad could be a potential agent; it has been found to remain stable for a long period and can thus provide long-term protection for stored grains (Fang and Subramanyam 2003, Arthur et al. 2006, Hertlein et al. 2011). Spinosad, a reduced-risk commercial insecticide based on the fermentation products of an actinomycete bacterium, has been labeled for use on over 250 crops in more than 50 countries (Mertz and Yao, 1990; Thompson et al., 2000). Spinosad has low mammalian toxicity and degrades quickly when exposed to sunlight (Thompson et al., 2000), but it is relatively stable in stored grain (Fang et al., 2002b; Flinn et al., 2004). Spinosad acts on the nicotinic acetylcholine receptors, and this mode of action is unique among other known insecticides (Thompson et al., 2000). These benign properties make it an ideal product for use in stored grain. Laboratory and field tests on stored wheat showed that spinosad at 1 mg (a.i.)/kg of grain was effective against several insect pests including the lesser grain borer Rhyzopertha dominica (F.), the rusty grain beetle Cryptolestes ferrugineus (Stephens), and the Indian meal moth Plodia interpunctella (Hübner) (Fang et al., 2002a, 2002b; Flinn et al., 2004; Huang et al., 2004).
A liquid formulation of spinosad (SpinTor® 2SC) containing 240 mg (a.i.)/mL was obtained from Dow AgroSciences (Indianapolis, Indiana, USA). The insecticide was diluted in distilled water to make solutions of different concentrations for grain treatment. In 2005, the United States Environmental Protection Agency registered spinosad at 1 mg/kg as a grain protectant on commodities including wheat, corn, rice, millets, oats, sorghum, and barley (Bruggink, 2005).
Cryptolestes pusillus is a coleopteran pest which destroys stored grain. It is commonly known as the flat grain beetle. The beetle is an external feeder and a serious cosmopolitan pest of stored product commodities, especially cracked grain (Baker 1976). It multiplies rapidly and subsequently builds up into a huge population within a very short period of time (Rahman et al. 2009).
The present study was therefore planned to examine the efficacy of Spinosad on the population build-up of C. pusillus over different storage periods.
Insects
Ten adults of C. pusillus were taken from the stock culture, maintained for five years in the Controlled Temperature Room (CT Room), and kept in 500 ml beakers. Wheat was soaked separately with different concentrations (1.25, 0.63, 0.32, 0.16 and 0.08 µg/ml) of Spinosad and dried at room temperature for 24 h. Fifty grams of wheat grains were taken and soaked with one of the Spinosad concentrations in each beaker. A separate control batch was maintained. Thirty (30) unsexed adult beetles, 10-15 days old, were introduced into each beaker, which was covered with muslin cloth held in place with rubber bands. The control batch was run simultaneously. Every 30 days, an additional 10 g of uninfested wheat was added to each beaker. The experiment was conducted in the CT room. Three replications were used for each Spinosad concentration. Beetles were observed after 3, 6 and 9 months of exposure, and the numbers of larvae, pupae and adults were counted separately; only the dead ones were discarded.
All data were analyzed by factorial ANOVA with mortality percentage as the response variable and concentration, life stage and exposure period as main effects. For the comparison of means, Tukey's test (1953) was used. The percent reduction of population relative to control (PRC) was calculated according to Mian and Mulla (1982a) by the following formula:

PRC = [(C − T) / C] × 100

where C = number of individuals in the control medium and T = number of individuals in the treated media.
Preparation of concentration
The liquid spinosad formulation was diluted in distilled water. Using a micropipette (maximum range 10 µl), 20.79 µg of spinosad was placed in a 50 ml beaker, and 6 ml of distilled water was added with a 2 ml syringe (three times) and mixed properly. The vial was shaken vigorously so that the spinosad and water mixed evenly. One ml was then drawn from the solution, each 1 ml containing 3.47 µg of spinosad; this was the first concentration. The other spinosad concentrations were prepared by serial dilution, taking 1 ml of solution and adding 2 ml of distilled water at each step. The desired Spinosad concentrations of 3.47, 1.74 and 0.87 µg were then selected as mother doses for toxicity tests on C. pusillus. Spinosad stock solutions for wheat and rice treatment were made in distilled water and carefully preserved in a refrigerator.
The expected doses were then prepared by serial dilution, taking one ml of Spinosad solution at each step and adding 2 ml of distilled water. The selected doses were 1.25, 0.63, 0.32, 0.16 and 0.08 µg/ml.
All experiments were conducted under laboratory conditions in the CT room.
Results and discussion
The gustatory action of the different Spinosad concentrations drastically decreased the number of larvae of C. pusillus after 3 months of storage. Similarly, the adult population decreased at the higher doses and increased at the lower doses. As in the 3-month records, the larval, pupal and adult populations of C. pusillus were affected by the different concentrations after 6 months of storage. The mean number of larvae ranged from 5.67 ± 1.36 to 32.00 ± 1.41 across the five doses; at higher doses lower numbers of larvae were recorded, whereas at lower doses higher numbers were counted. The PRC values were 71.93 to 95.03. A significant effect on the larval population was recorded (F = 194.225, df = 5, P < 0.001).
The pupal population also decreased at the higher dose (1.25 µg/ml) and increased at the lower dose (0.08 µg/ml). Mean numbers of adults were recorded after the 9-month storage period and are tabulated in Table 3. The PRC values were 73.02 to 98.91.
The suppression of the total population of C. pusillus by Spinosad after different storage periods is presented in Table 4. The numbers were positively related to the storage period. The mean total population in the control was 1161.00 ± 2.65 and 43.00 ± 5.56 in the first and second generations, respectively. In the treatments it ranged from 152.00 ± 1.15 to 790.33 ± 2.60 in the 1st generation and from 2.67 ± 1.09 to 20.67 ± 3.34 in the 2nd generation. Analysis of variance revealed highly significant effects of Spinosad on the total population of C. pusillus.
Spinosad at 0.08, 0.16, 0.32, 0.63 and 1.25 µg/ml possessed high insecticidal effects on the larval, pupal and adult populations of C. pusillus over 3-9 months of storage. At all concentrations the larval population was significantly reduced in the treatments compared to the control. Hertlein et al. (2011) reported that Spinosad residues are extraordinarily stable on grains stored in bins, offering protection ranging from 6 months to 2 years. Spinosad formulations were found to be extremely effective against Plodia interpunctella and Rhyzopertha dominica, allowing negligible larval survival and adult emergence on wheat and maize (Subramanyam et al. 2012). These results are consistent with the greater susceptibility of P. interpunctella eggs and larvae to a commercial Spinosad formulation reported by Huang et al. (2004). Yousefnezhad-Irani and Asghar (2007) reported that younger larvae and adults of Tribolium castaneum were more susceptible to spinosad than older larvae, adults being less susceptible and first instars highly susceptible (Toews and Subramanyam 2003). Daglish and Nayak (2006) reported the persistence and efficacy of Spinosad against R. dominica in wheat stored for 9 months; their results support the results of the present investigation. Spinosad is an effective grain protectant and also has a marked impact on progeny reduction of stored grain insect pests, and the present work shows similar trends. Subramanyam et al. (2006) reported that Spinosad at 1 mg/kg provided excellent control of adults of R. dominica, T. castaneum, C. ferrugineus, S. zeamais and P. interpunctella for six months to one year, with the exception of T. castaneum, which was less susceptible; progeny production of all of the mentioned species was greatly suppressed. Overall, the present findings of the investigation revealed that Spinosad has good potential against C. pusillus as a target species in insect pest control programs.
Conclusion
This research concluded that a reduced-risk bacterial insecticide was very effective in the suppression of C. pusillus over different storage periods. A concentration >1.25 µl/g resulted in 100% control of C. pusillus on wheat after a 9-month storage period. Therefore, in order to minimize the negative effects of synthetic chemicals on the environment and on natural enemies in pest management programmes, Spinosad could be integrated into IPM programmes. | 2023-07-11T16:45:46.602Z | 2023-06-30T00:00:00.000 | {
"year": 2023,
"sha1": "896c245569696f1342c46644ea06a7d0a3092b43",
"oa_license": "CCBY",
"oa_url": "https://ijbpsa.com/sites/default/files/IJBPSA-2023-0054.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "c37d5871bdebd47f42114829a933948a6ef8d8b2",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
220042028 | pes2o/s2orc | v3-fos-license | A Note on Over-Smoothing for Graph Neural Networks
Graph Neural Networks (GNNs) have achieved a lot of success on graph-structured data. However, it has been observed that the performance of graph neural networks does not improve as the number of layers increases. This effect, known as over-smoothing, has been analyzed mostly in linear cases. In this paper, we build upon previous results \cite{oono2019graph} to further analyze the over-smoothing effect in the general graph neural network architecture. We show that when the weight matrix satisfies conditions determined by the spectrum of the augmented normalized Laplacian, the Dirichlet energy of embeddings will converge to zero, resulting in the loss of discriminative power. Using Dirichlet energy to measure "expressiveness" of embeddings is conceptually clean; it leads to simpler proofs than \cite{oono2019graph} and can handle more non-linearities.
However, a key issue with GNNs is their depth limitation. It has been observed that deeply stacking the layers often results in significantly worse performance for GNNs, such as GCN and GAT. This drop is associated with many factors, including the vanishing gradients during back-propagation, overfitting due to the increasing number of parameters, as well as the phenomenon called over-smoothing. (Li et al., 2018) was the first to call attention to the over-smoothing problem. Having shown that the graph convolution is a type of Laplacian smoothing, they proved that after repeatedly applying Laplacian smoothing many times, the features of the nodes within each connected component of the graph will converge to similar values. The goal of this paper is to extend some analysis of GNNs in the ICLR 2020 spotlight paper (Oono & Suzuki, 2019) on the expressive power of GNNs for node classification.

1 Department of Computer Science, Ohio State University, Ohio, USA. Correspondence to: Chen Cai <cai.507@osu.edu>. Proceedings of the 37th International Conference on Machine Learning, Vienna, Austria, PMLR 108, 2020. Copyright 2020 by the author(s).

Footnote 1: Strictly speaking, over-smoothing is a misnomer. As we will show, what is decreasing is tr(X^T Δ̃ X), not the real smoothness tr(X^T Δ̃ X) / ||X||_2^2 of the graph signal X.
To the best of our knowledge, (Oono & Suzuki, 2019) is the first paper extending the analysis of over-smoothing in linear GNNs to the nonlinear ones. However, only ReLU is handled. It is noted by the authors that extension to other non-linearities such as Sigmoid and Leaky ReLU is far from trivial.
In this paper, we propose a simple technique to analyze the embedding when the number of layers goes to infinity. The analysis is based on tracking the Dirichlet energy of node embeddings across layers. Our contributions are the following: • Using Dirichlet energy to measure expressiveness of embeddings is conceptually clean. Besides being able to recover the results in (Oono & Suzuki, 2019), our analysis can be easily applied to Leaky ReLU. In the special case of regular graphs, our proof can be extended to the most common nonlinearities. The proof is easy to follow and requires only elementary linear algebra. We discuss key differences between our proof and proofs in (Oono & Suzuki, 2019) as well as the benefits of introducing Dirichlet energy in Section 4.
• Second, we perform extensive experiments on a variety of graphs to study the effect of basic edge operations on the Dirichlet energy. We find that, in many cases, dropping edges and increasing the weights of edges (to a high value) can increase the Dirichlet energy.
Notation
Let N_+ be the set of positive integers. We define A ∈ R^{N×N} to be the adjacency matrix and D to be the degree matrix of the graph G. Let Ã := A + I_N and D̃ := D + I_N be the adjacency and degree matrices of the graph G augmented with self-loops. We define the augmented normalized Laplacian of G by Δ̃ := I_N − D̃^{-1/2} Ã D̃^{-1/2} and set P := I_N − Δ̃ = D̃^{-1/2} Ã D̃^{-1/2}. Let L, C ∈ N_+ be the layer and channel sizes.
We define a GCN associated with G by f = f_L ∘ ... ∘ f_1, where f_l : R^{N×C_l} → R^{N×C_{l+1}} is defined by f_l(X) = MLP_l(PX). Here MLP_l(X) := σ(... σ(σ(X) W_{l1}) W_{l2} ... W_{lH_l}), where σ is an elementwise nonlinear function. Note that the weight matrices W_{l·} are not necessarily square. We consider the embeddings X^{(l+1)} := f_l(X^{(l)}) with initial value X^{(0)}. We are interested in the asymptotic behavior of the output X^{(L)} of the GCN as L → ∞.
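To make the objects above concrete, the following small sketch builds the propagation matrix P = D̃^{-1/2} Ã D̃^{-1/2} and applies one layer of the form σ(P X W); the toy graph, feature sizes, and random weights are arbitrary placeholders, not part of the paper's setup.

import numpy as np

def propagation_matrix(A):
    """P = D~^{-1/2} A~ D~^{-1/2} with self-loops added (A~ = A + I)."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return A_tilde * np.outer(d_inv_sqrt, d_inv_sqrt)

def gcn_layer(X, A, W, sigma=lambda z: np.maximum(z, 0.0)):
    """One graph convolution layer f(X) = sigma(P X W), with ReLU by default."""
    return sigma(propagation_matrix(A) @ X @ W)

# Toy example: a 4-node path graph with 3-dimensional features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 3))
W = np.random.default_rng(1).normal(size=(3, 3))
X_next = gcn_layer(X, A, W)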
We state the following lemma without a proof.
Main Result
The main idea of the proof is to track the Dirichlet energy of node embeddings w.r.t. the (augmented) normalized Laplacian at different layers. With some assumptions on the weight matrix of GCN, we can prove that Dirichlet energy decreases exponentially with respect to the number of layers. Intuitively, the Dirichlet energy of a function measures the "smoothness" of a function of unit norm, and eigenvectors of the normalized Laplacian are minimizers of the Dirichlet energy.
Definition 3.1. The Dirichlet energy E(f) of a scalar function f ∈ R^{N×1} on the graph G is defined as

E(f) = f^T Δ̃ f = (1/2) Σ_{i,j} a_ij ( f_i / √(1 + d_i) − f_j / √(1 + d_j) )²,

where d_i denotes the degree of node i; for a vector field X ∈ R^{N×C} we write E(X) = tr(X^T Δ̃ X). Without loss of generality, each layer of the GCN can be represented as f_l(X) = σ(... σ(σ(PX) W_{l1}) ... W_{lH_l}). Next we will analyze the effects of P, W_l, and σ on the Dirichlet energy one by one.

Lemma 3.1. E(PX) ≤ (1 − λ)² E(X), where λ is the smallest non-zero eigenvalue of Δ̃.
Proof. Let us denote the eigenvalues of Δ̃ by λ_1, λ_2, ..., λ_N, and the associated eigenvectors of unit length by v_1, ..., v_N. Writing a scalar field f in this eigenbasis, f = Σ_i c_i v_i, we have E(f) = Σ_i λ_i c_i², and since P v_i = (1 − λ_i) v_i, E(Pf) = Σ_i λ_i (1 − λ_i)² c_i² ≤ (1 − λ)² E(f). Extending the above argument from the scalar field to the vector field finishes the proof of E(PX) ≤ (1 − λ)² E(X).
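The statement can also be checked numerically. The sketch below computes the Dirichlet energy E(X) = tr(X^T Δ̃ X) on a random graph and compares E(PX) against (1 − λ)² E(X); the graph model, size, and feature dimension are arbitrary choices of ours, and the script only illustrates the bound on a sampled instance.

import numpy as np

rng = np.random.default_rng(0)

# Random graph stand-in (symmetric adjacency, no self-loops).
n = 50
A = np.triu((rng.random((n, n)) < 0.1).astype(float), 1)
A = A + A.T

A_tilde = A + np.eye(n)
d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
P = A_tilde * np.outer(d_inv_sqrt, d_inv_sqrt)       # augmented propagation matrix
L_aug = np.eye(n) - P                                # augmented normalized Laplacian

def dirichlet_energy(X):
    return np.trace(X.T @ L_aug @ X)

eigvals = np.linalg.eigvalsh(L_aug)
lam = min(v for v in eigvals if v > 1e-9)            # smallest non-zero eigenvalue

X = rng.normal(size=(n, 8))
lhs = dirichlet_energy(P @ X)
rhs = (1.0 - lam) ** 2 * dirichlet_energy(X)
print("E(PX) =", lhs, " (1 - lambda)^2 E(X) =", rhs)  # Lemma 3.1 predicts lhs <= rhs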
Proof. By definition, E(XW) = (1/2) Σ_{i,j} a_ij || x_i W / √(1 + d_i) − x_j W / √(1 + d_j) ||², where x_i denotes the i-th row of X. Since for each term || v W || ≤ ||W^T||_2 ||v||, we get E(XW) ≤ s E(X), where s is the square of the largest singular value of W. Remark: ||A||_2 = σ_max(A), where σ_max(A) represents the largest singular value of the matrix A. Our result in Lemma 3.2 is essentially the same as Lemma 2 of (Oono & Suzuki, 2019). Note that our proof can handle weight matrices not only of dimension d × d but also of dimension d × d′, while the paper (Oono & Suzuki, 2019) assumes the embedding dimension to be fixed across layers. See the detailed discussion in Section 4.
Remark:
The proof itself does not leverage the structure of the graph. In particular, only the fact that the Laplacian is a p.s.d. matrix is needed in the proof. See an alternative proof in the appendix. This also makes sense because W operates on the graph feature space and should be oblivious to the particular graph structure.
Proof. We first prove it holds for scalar field f and then extend it to vector field X.
Therefore, replacing f with σ(f) in each term, the inequality holds for ReLU and Leaky-ReLU. Extending the above argument to the vector field completes the proof.
Remark: For regular graphs, the above conclusion can be extended to more non-linearities such as ReLU, Leaky-ReLU, Tanh, and Sigmoid.
Remark: The proof hinges on the simple fact that, for ReLU and Leaky-ReLU, σ(ca) = cσ(a) for c ∈ R_+ and a ∈ R.
For other activation functions, as long as one can find a, b, c_1, c_2 with c_1 a = c_2 b but c_1 σ(a) ≠ c_2 σ(b) (examples are easy to find for Sigmoid, Tanh, etc., since there are no strong restrictions on a, b, c_1, c_2), we cannot guarantee E(σ(X)) ≤ E(X).
Combining the above three lemmas, denote the square of the maximum singular value of W^T_{lh} by s_{lh} and set s_l := Π_{h=1}^{H_l} s_{lh}. Also let λ̃ := (1 − λ)². With these parameters, we arrive at the main theorem.
Theorem 3.4. For any l ∈ N_+, we have E(f_l(X)) ≤ s_l λ̃ E(X). See the proof in Appendix A.
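Schematically, and as our own paraphrase of the argument deferred to the appendix, the theorem follows by chaining the three lemmas:

\[
E(f_l(X)) \le \Big(\prod_{h=1}^{H_l} s_{lh}\Big)\, E(PX) \le \Big(\prod_{h=1}^{H_l} s_{lh}\Big)\,(1-\lambda)^2\, E(X) = s_l\,\tilde{\lambda}\, E(X),
\]

where Lemma 3.3 removes each nonlinearity without increasing the energy, Lemma 3.2 accounts for each weight multiplication, and Lemma 3.1 handles the propagation by P.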
Our result shares great similarity with the paper (Oono & Suzuki, 2019). The bounds are similar, but our result handles more general cases. As noted in (Oono & Suzuki, 2019), the eigengap plays an important role. The analysis of the Erdos-Renyi graph G_{N,p} (or any other graphs that have large eigengaps) when log N / (Np) = o(1) in the paper (Oono & Suzuki, 2019) can also be directly applied to our case.
Key Differences
The key quantity used in the paper (Oono & Suzuki, 2019) is the distance d_M(X) of the embedding X to an invariant subspace M built from an orthonormal basis of the null space U of the normalized graph Laplacian Δ̃. The original definition of d_M(X) is given for the case of the same embedding dimension across layers; it needs to be modified to handle the case of varying dimensions. One way to achieve this is to define M = U ⊗ R^C and M′ = U ⊗ R^{C′}, respectively. Lemma 2 of the paper (Oono & Suzuki, 2019) can then be modified from d_M(XW) ≤ s d_M(X) (W ∈ R^{C×C}) to the analogous statement d_{M′}(XW) ≤ s d_M(X) for W ∈ R^{C×C′}, where s is the largest singular value of W.
As for the nonlinearity, (Oono & Suzuki, 2019) mentions that their analysis is limited to graph neural networks with the ReLU activation function because they implicitly use the property that ReLU is a projection onto the cone {X > 0} (see Appendix A, Lemma 3 in (Oono & Suzuki, 2019) for details). This fact enables the ReLU function to get along with the nonnegativity of eigenvectors associated with the largest eigenvalues (Perron-Frobenius theorem). Therefore, the authors mentioned that it may not be easy to extend their results to other activation functions such as the sigmoid function or Leaky ReLU.
In contrast, the proof of Lemma 3.3 becomes trivial once we write out the Dirichlet energy as the sum of multiple terms for each of which the effect of nonlinearity can be easily analyzed.
Experiments
To investigate how basic edge operations, namely removing edges and increasing edge weights, affect the Dirichlet energy and over-smoothing, we perform extensive experiments on both common benchmarks (Cora and CiteSeer) and synthetic graphs. See Appendix B for more details on the datasets.
In particular, given a graph, we compute its eigenvalues before and after randomly dropping, or increasing the weights of, a certain percentage (10%-90%) of edges; this is shown in the first/third column of each figure. In the second/fourth column, we generate three signals x, Px (P′x), P²x (P′²x), where x = Σ_{i}^{T} c_i v_i, with v_i the first T eigenvectors corresponding to the lowest T eigenvalues of the normalized Laplacian and c_i a uniform random number between 0 and 1. In other words, x is a mix of the lower eigenvectors. We then compute the Dirichlet energies of the three signals both for the original graph (E_0, E_1, E_2) and for the graph with edges removed/reweighted (E′_0, E′_1, E′_2). The same experiment is repeated 20 times and 120 data points are shown in the scatter plot.
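A simplified version of this experiment can be scripted as below; the Erdos-Renyi stand-in graph, the drop ratio, and the number T of low eigenvectors are illustrative choices of ours rather than the exact settings used in the paper.

import numpy as np

rng = np.random.default_rng(0)

def aug_laplacian(A):
    """Return the augmented normalized Laplacian and propagation matrix."""
    A_tilde = A + np.eye(A.shape[0])
    d = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    P = A_tilde * np.outer(d, d)
    return np.eye(A.shape[0]) - P, P

def dirichlet(x, L):
    return float(x @ L @ x)

# Erdos-Renyi stand-in G(200, 0.05); T lowest eigenvectors mixed into x.
n, p, T, drop = 200, 0.05, 20, 0.3
A = np.triu((rng.random((n, n)) < p).astype(float), 1)
A = A + A.T

L0, P0 = aug_laplacian(A)
w, V = np.linalg.eigh(L0)
x = V[:, :T] @ rng.random(T)                 # mix of the T lowest eigenvectors

# Randomly drop a fraction of the edges and rebuild the Laplacian.
edges = np.transpose(np.nonzero(np.triu(A, 1)))
keep = rng.random(len(edges)) > drop
A_drop = np.zeros_like(A)
for (i, j) in edges[keep]:
    A_drop[i, j] = A_drop[j, i] = 1.0
L1, P1 = aug_laplacian(A_drop)

# Energies of x, Px, P^2x on the original and edge-dropped graphs.
for name, L, P in [("original", L0, P0), ("dropped", L1, P1)]:
    print(name, [round(dirichlet(s, L), 3) for s in (x, P @ x, P @ P @ x)])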
We make the following observations.
• First, for nearly all graphs and ratios (except for some cases of Cora and CiteSeer), dropping edges increases the Dirichlet energy of x, Px (P′x), and P²x (P′²x). This coincides with the observation in DropEdge (Rong et al., 2019) that dropping edges helps relieve over-smoothing.
• Second, in most cases, the effect of increasing the weight of an edge (from 1 to 10000) and that of dropping edges are "dual" to each other, i.e., increasing the weights of a few edges to a very high value is similar to dropping a lot of edges in terms of the eigenvalues and the Dirichlet energy. Intuitively, we can think of increasing the weight of an edge (u, v) to infinity as contracting nodes u and v into a supernode. For a planar graph and its dual graph, edge deletion in one graph corresponds to contraction in the other graph and vice versa. We hypothesize that randomly increasing the weight of a few edges to a high value will also help to relieve over-smoothing. We leave the systematic verification as future work.
Conclusion
We provide an alternative proof of graph neural networks exponentially losing expressive power. While achieving the same bound as the paper (Oono & Suzuki, 2019), our simple proof also handles Leaky ReLU. We also empirically explore the effect of basic edge operations on the Dirichlet energy.
Some interesting future directions are: 1) The key challenge of analyzing the over-smoothing effect lies in the non-linearity. How to extend our strategy to more general graph learning settings, such as other nonlinearities, normalization strategies (Zhao & Akoglu, 2019), graphs with both node and edge features, and attention mechanisms (Veličković et al., 2017), remains largely open. 2) The assumption on the norm of the weight matrices of GNNs is crucial (and may also be too strong) in our proof. Understanding how learning plays a role in resisting the over-smoothing effect is interesting.
A. Missing Proof
To show that Lemma 3.2 does not use any particular graph structure, we present an alternative proof of Lemma 3.2 that simply uses a generic matrix inequality.
Note that σ_max denotes the largest eigenvalue.
B. Experiments
We perform experiments on both synthetic graphs and real graph benchmarks. The threshold T for the number of lower eigenvectors is set to 20 for the synthetic graphs. For Cora and Citeseer, it is set to 400 and 600, respectively (due to a large number of nearly zero eigenvalues). The code is available on GitHub at https://github.com/Chen-Cai-OSU/GNN-Over-Smoothing. The details of each graph are listed as follows: • Random graph G(200, 0.05).
• Random geometric graph on the plane. There are 200 nodes placed uniformly at random in the unit cube. Two nodes are joined by an edge if the distance between them is at most 0.2.
• Stochastic Block Model with 2 blocks. It consists of two blocks where each block has 100 nodes. The edge probability within the block is 0.1 and edge probability between blocks is 0.01.
• Stochastic Block Model with 4 blocks. It consists of four blocks where each block has 50 nodes. The edge probability within the block is 0.1, 0.2, 0.3, 0.4. The edge probability between blocks is 0.08.
• Barabasi-Albert Graph. A graph of n nodes is grown by attaching new nodes each with m edges that are preferentially attached to existing nodes with high degree. We set n, m to be 200 and 4.
• Cora is a citation graph where 2708 nodes are documents and 5278 edges are citation links.
• Citeseer is a citation graph where 3327 nodes are documents and 4552 edges are citation links. | 2020-06-25T01:01:30.828Z | 2020-06-23T00:00:00.000 | {
"year": 2020,
"sha1": "c6d550c3fcecf27b979be84c4cd444cc1c72bf47",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c6d550c3fcecf27b979be84c4cd444cc1c72bf47",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
14511854 | pes2o/s2orc | v3-fos-license | Perceived barriers to children’s active commuting to school: a systematic review of empirical, methodological and theoretical evidence
Active commuting to school (ACS) may increase children’s daily physical activity and help them maintain a healthy weight. Previous studies have identified various perceived barriers related to children’s ACS. However, it is not clear whether and how these studies were methodologically sound and theoretically grounded. The purpose of this review was to critically assess the current literature on perceived barriers to children’s ACS and provide recommendations for future studies. Empirically based literature on perceived barriers to ACS was systematically searched from six databases. A methodological quality scale (MQS) and a theory utilization quality scale (TQS) were created based on previously established instruments and tailored for the current review. Among the 39 studies that met the inclusion criteria, 19 (48.7%) reported statistically significant perceived barriers to children’s ACS. The methodological and theory utilization qualities of reviewed studies varied, with MQS scores ranging between 7 and 20 (Mean = 12.95, SD = 2.95) and TQS scores from 1 to 7 (Mean = 3.62, SD = 1.74). A detailed appraisal of the literature suggests several empirical, methodological, and theoretical recommendations for future studies on perceived barriers to ACS. Empirically, increasing the diversity of study regions and samples should be a high priority, particularly in Asian and European countries, and among rural residents; more prospective and intervention studies are needed to determine the causal mechanisms linking the perceived factors and ACS; future researchers should include policy-related barriers in their inquiries. Methodologically, the conceptualization of ACS should be standardized or at least well rationalized in future studies to ensure the comparability of results; researchers’ awareness needs to be increased for improving the methodological rigor of studies, especially in regard to appropriate statistical analysis techniques, control variable estimation, multicollinearity testing, and reliability and validity reporting. Theoretically, future researchers need to first ground their investigations in theoretical foundations; efforts should be devoted to making sure theories are used thoroughly and correctly; important theoretical constructs, in particular, need to be conceptualized and operationalized appropriately to ensure accurate measurement. By reviewing what has been achieved, this review offered insights for more sophisticated ACS studies in the future.
Introduction
Childhood obesity has become a global epidemic, with its increasing prevalence in both developed and developing countries [1][2][3]. Active commuting to school (ACS), defined as the use of active means such as walking or biking to and from school, may increase children's daily physical activity and help them maintain a healthy weight [4][5][6]. Despite the significant health implications of ACS, the rates of ACS have declined over the past few decades [7]. In the United States (U.S.), for example, the percentage of children who walked or biked to school declined from 47.7% in 1969 to 12.7% in 2009 [7]. Similarly, in Australia, the percentage of children aged 5-9 who walked to school decreased from 57.7% in 1971 to 25.5% in 2003 [8].
To reverse the declining trend of ACS, one of the first crucial steps is to identify barriers that prevented children from walking or biking. Research in this area has expanded in the past 10 years, and studies have identified various perceived barriers related to children's ACS [9][10][11]. Nevertheless, it is not clear whether and how these studies are methodologically sound and theoretically grounded. A rigorous assessment of existing literature is important because studies with poor designs, methodological flaws, or theoretical weaknesses could result in biased results and consequently render the subsequent interventions less effective.
In ACS research, perceived barriers can be defined as a person's estimated level of challenges related to personal, environmental, social, and policy obstacles to ACS [12]. As a social cognitive construct, perceived barriers have been widely used or incorporated in health behavior theories, including the Health Belief Model, Social Cognitive Theory, Theory of Planned Behavior, and Social Ecological Theory [13][14][15][16]. Previous research has suggested that, compared with objective factors, e.g., urban form, individuals' perceptions of the environment around them have a stronger and more direct relationship with children's active commuting behavior [17]. Given the theoretical and empirical importance of perceived barriers in ACS research, it is essential to ensure that this construct is considered properly.
Therefore, the purpose of this systematic literature review was to critically assess the current literature on perceived barriers to children's ACS. Specifically, we aimed to 1) examine research on perceived barriers to ACS, 2) identify different types and measures of perceived barriers reported by researchers, 3) assess the methodological quality of empirical studies on perceived barriers to ACS, and 4) evaluate the level of theory utilization in the studies, i.e., to what extent theory was used and how the construct of perceived barriers was conceptualized and operationalized. Empirical, methodological and theoretical recommendations for future studies will also be provided.
Search strategy
Following the PRISMA guidelines [18], we systematically searched for peer-reviewed articles related to perceived barriers to children's ACS in the following six databases: Academic Search Complete, Eric, Medline, EMBASE, CINAHL Plus with Full Text, and SportDis. We chose these databases because they are comprehensive and include multidisciplinary journals. Different combinations of the following search terms were used: child, school child, adolescent, teen, or youth; elementary school, middle school, junior school, intermediate school, or high school; commute, travel, journey, walk, bike, cycle, bicycle, skateboard, or transport; to school. Specific terms used in the search were obtained from reviews of literature and the librarians' and researchers' expertise, and the search was adapted to match the specific structure of each database. A supplemental search was also conducted by reviewing the reference lists of the identified articles to further identify any relevant articles missed in the key word searches. Internal and external duplicates among the databases were examined and excluded in the process of article retrieval. In this review, children refers generally to children, adolescents, and young people aged 4 to 19, and active commuting to school (ACS) is a generic term for both active commuting/transport to and from school.
Inclusion and exclusion criteria
To be eligible for inclusion in the review, the articles had to a) be published in a peer-reviewed English journal; b) include children (4-to 19-year-olds) and/or related adults (e.g., parent, teacher) as participants; c) be about active commuting, e.g., walking, biking, skateboarding, not passive commuting; d) have school as the origin or destination of active commuting; e) present empirical studies; f) use ACS as the outcome variable; and g) investigate perceived barriers to ACS, rather than objective barriers only. Further, we focused only on studies that used quantitative measures to examine perceived barriers for the present review to facilitate the process of synthesizing and comparing. A separate systematic review is in progress to analyze the findings of the qualitative studies. The date of the last search was February, 2013, and we limited the search to all studies published before that date.
Data extraction
Data from the reviewed articles were abstracted using Garrard's matrix method of literature review in health science [19]. Information extracted from each article included study characteristics (e.g., study area/setting, study design), participant characteristics (e.g., sample size, children's age/grades), research methods (e.g., independent/ dependent variables, data collection/analysis methods), and main findings (e.g., rates of ACS, identified perceived barriers to ACS). To ensure the credibility of data extraction, the first author and another researcher (both with research methods training) drew a sample of 16 articles (41%) and extracted the data independently. The researchers agreed on approximately 90% of the extracted data, indicating high inter-rater reliability.
Methodological quality assessment
The authors tailored a methodological quality scale (MQS) for the current review based on previously established instruments [20-26] and the characteristics of the reviewed studies. For example, school characteristics were not included in any previous instrument, but they are an important consideration in ACS research: findings from studies conducted in multiple locations are more generalizable than those obtained from single-location studies. All studies were assessed on the 11 methodological criteria listed in Table 1. Possible scores ranged from 4 to 24, with a higher number indicating greater methodological rigor. Each study's score was first rated by the first author and then reviewed by another researcher with a statistics background and training in research methodology. Disagreements were resolved by discussion until agreement was reached.
Theory utilization assessment
A theory utilization quality scale (TQS) was created based on previously developed instruments [27,28] and tailored for the current review. For example, previous instruments assessed the conceptualization of perceived barriers by two scales: 1 = Reported, mentioned, or described; 0 = No report, mention, or description [27,28]. In this review, we added another scale in between (i.e., contextually described, but within a broader category), considering that perceived barriers were embedded, rather than clearly stated, in broader perceived environmental and social characteristics in some ACS studies. The reviewed studies were evaluated following the criteria described in Table 2. We first assessed whether and to what extent the authors used theories in the studies. For example, studies that proposed a conceptual framework based on previous theories and clearly measured related constructs received the highest score. In contrast, studies that did not clearly identify a theory but inferred, or studies that claimed to use a theoretical framework to guide the overall study design but did not provide evidence for it received a lower score. By using this scoring scheme, we focused mainly on the description of how theory was used in a study, rather than assessing whether or not a particular theoretical framework was considered as appropriate for investigating perceived barriers to ACS. We also evaluated how the construct of perceived barriers was conceptualized and operationalized in the reviewed studies. According to the criteria described in Table 2, we gave a higher score to studies that provided a clear definition of perceived barriers or described contextually what they meant by perceived barriers in the case of ACS. In contrast, studies that did not define the term clearly received a lower score. Similarly, studies that reported how they operationalized perceived barriers and clearly described the measured items were scored higher, while studies that claimed they measured perceived barriers but did not describe the measured items were scored lower. The possible range of the theory utilization assessment scores was 0 to 7. To examine the reliability of code and the assessment by the first author, two additional researchers trained in health behavior theories scored a sample of 10 articles (26%) independently. The sample of articles was selected randomly by using Microsoft Excel's random sorting function. The raters agreed on 93% and 90% with the original code, respectively, indicating good inter-rater reliability. Discrepancies found were addressed by re-appraisals and discussions, or judgment by a fourth party, until consensus was reached.
After the assessments of both methodological quality and theory utilization quality were finished, a correlation test between the obtained MQS and TQS scores was conducted to establish the relationship between them. This study was considered exempt by the institutional review board at Texas A&M University.
Results
A total of 4,409 unique records were identified from six databases and additional manual searching (Figure 1). More than 4,300 articles were excluded after the abstract review, of which the majority were not about ACS (n = 3,537). After examining the full text of 71 articles, 23 were eliminated because they were not empirically based, did not use ACS as the outcome, or were not about perceived barriers. Nine of the remaining articles were further excluded as they were purely qualitative investigations. The final analysis consisted of 39 articles that met all inclusion criteria (Table 3). Most of the articles were written by researchers from health-related fields, with seven articles (17.9%) representing collaborative work of researchers across disciplines (e.g., public health and urban planning). All identified articles were published after 2004, with the numbers increasing almost annually. The studies were undertaken in 10 countries, i.e., the U.S. (n = 20, 51.3%), Australia (n = 10, 23.1%), Belgium (n = 2, 5.1%), Canada (n = 1, 2.6%), Switzerland (n = 1), Cyprus (n = 1), Portugal (n = 1), Ireland (n = 1), England (n = 1), and Brazil (n = 1). Regarding study settings, 15 (38.5%) were conducted in urban areas, 4 (10.3%) included participants from both rural and urban areas, one (2.6%) was undertaken in a rural area, and the remaining studies (n = 19, 48.7%) did not specify study settings or distinguish between urban or rural areas. Sample sizes of the reviewed studies varied from 74 to 12,613, and most studies were exploratory (n = 36, 92.3%) rather than hypothesis-driven (n = 3, 7.7%).
Active commuting to school
The definitions of ACS were not consistent across the studies. Most studies defined ACS as walking or biking to school usually (n = 32, 82.1%), while some defined it as walking or biking to school at least once a week (n = 3, 7.7%). Other definitions of ACS included walking or biking to school ever, walking or biking to school for the longest portion of the journey to school, and walking or biking to school 5 days a week. Similarly, the dependent variable, i.e., ACS, was measured differently across the studies. Most studies used a dichotomized dependent variable as active versus non-active (n = 24, 61.5%), or the frequency of ACS as a continuous variable (n = 10, 25.6%). Eight studies (20.5%) did not report the rates of ACS. For studies that measured walking, biking, and other modes of transport, such as skateboarding, together as the usual mode to/from school (n = 19, 48.7%), the rates of ACS ranged between 11.8% [54] and 77.8% [30]. For studies that considered/reported walking or biking separately as the usual mode to/from school (n = 12, 30.8%), the rates of walking were from 6.6% [61] to 61.3% [49] and the rates of biking were between 1% [32] and 51.8% [61]. Only two studies focused specifically on biking to school [34,60].

Table 2. Criteria for assessing theory utilization quality (TQS)

Theory utilization (whether and to what extent the authors used theory):
- Clear identification/operationalization of theory/constructs used: a conceptual framework was proposed based on a theory and constructs/variables were measured accordingly (3 points).
- Inferred theory or partial use of theory: a theory was not clearly identified, but three or more theoretical constructs of a theory were measured; or a theory was identified but only one or two constructs of the theory were measured (2 points).
- May be informed by theory/slight evidence of use of theory: the use of a theoretical framework was claimed to guide design, program, or measures, but was not evidenced; or a theory was not clearly identified, but one or two theoretical constructs of a theory were measured (1 point).
- No evidence of using theory (0 points).

Conceptualization of perceived barriers (what did the authors mean by "perceived barriers" in each article?):
- Defined or contextually described: a clear definition of "perceived barriers" was provided, or what "perceived barriers" mean in the case of active commuting to school was clearly described (2 points).
- Contextually described, but within a broader category: participants' perceived environmental characteristics that may influence children's ACS were described, which included both perceived facilitators and barriers (1 point).
- Not defined/described (0 points).

Operationalization of perceived barriers (did the authors describe/detail how "perceived barriers" were measured?):
- Clearly operationalized/reported: different items were used to measure "perceived barriers" and the items were clearly described (2 points).
- Somewhat/slightly operationalized: different items were claimed to be used to measure "perceived barriers", but the items were not described; or "perceived barriers" were claimed to be measured, but it is not clear what items were used (1 point).
- Not reported/described (0 points).
Perceived barriers to ACS
Fourteen studies (35.9%) did not find any statistically significant (hereafter, significant) perceived barriers to children's ACS in their analyses. For the other 25 studies, we further excluded four studies (10.3%) that reported perceived barriers based on descriptive or bivariate statistics [36,51,52,63], one study that measured a single item (i.e., perceived safety) [47], and one study that used a summary index (i.e., 11 items for parental concerns with the mean calculated) [41]. Among the remaining 19 studies (48.7%) that reported significant results, six studies included personal barriers, including parents' lack of time, ease of dropping the child off on the way to work, the child's heavy backpack, the child's preference to be driven to school, and walking as requiring too much planning ahead; 18 studies reported perceived physical environmental barriers, among which traffic safety and distance were most commonly cited; and 10 studies identified different types of perceived social environmental barriers to ACS, which were centered on neighborhood safety (Table 4).
Eleven of the 19 studies that identified significant predictors of ACS used/included children's surveys, and, unanimously, traffic safety was regarded as a barrier to ACS among children. Compared with children, parents were more concerned about neighborhood safety, e.g., crime, strangers, and stray dogs. In regard to children's characteristics, 12 of the 19 studies focused on elementary/primary school children, five sampled middle/high school adolescents only [34,35,49,57,58], and two recruited both elementary and middle school students [29,43]. For middle school students, the identified perceived barriers were mostly about physical environmental characteristics, including distance, traffic safety, bad weather, and lack of sidewalks; no personal barriers were reported for middle school students. In contrast, perceived barriers for elementary school children were more diverse, including various personal, social environmental, and physical environmental characteristics.

Table 3. Characteristics of studies on perceived barriers of children's active commuting to school (N = 39)

(2) Correlates of ACS: distance (−), male (+), Latino (+), from lower-income families (+), attending public school (+), and living in urban areas (+); parental supervision (−), and parent knowing little or nothing about adolescents' whereabouts after school (+).
Carver (2005), Australia. American Journal of Health Promotion. 345 parent-child pairs, 12-13 years. Socio-demographic characteristics, parental perceptions of neighborhood environment. (2) Predictors of ACS: for boys, no significant bivariate associations between perceptions of the neighborhood and boys' walking to/from school; for girls, having friends living in the neighborhood (+), lots of other boys/girls to "hang out" with (+), and parents' concerns about busy traffic (−).
D'Haese (2011), Belgium. International Journal of Behavioral Nutrition and Physical Activity. 696 participants, 6th grade. Distance, criterion distance (i.e., cumulative percentages of children commuting to school by bike, on foot, and in a passive way, per covered distance), and environmental perceptions. (1) Rate of ACS: 38.1% by bike, 21.1% walk.
Emond (2011), US. Journal of Transport Geography.

Hume (2007): (2) Perceived barriers of ACS for boys: number of accessible destinations in the neighborhood (+). (3) Perceived predictors of ACS for girls: having a neighborhood that was easy to walk/cycle around (+) and perceiving lots of graffiti (+).
Hume (2009), Australia. American Journal of Preventive Medicine. (2) ACS significantly increased between 2004 and 2006 among children and adolescents. (3) Predictors of ACS: children of parents who reported that the child had many friends in their areas (+), adolescents whose parents perceived insufficient traffic lights and pedestrian crossings in their neighborhood (−), adolescents of parents who were satisfied with the number of pedestrian crossings (+).
5-18 years old
Objective measures, including the neighborhood and individual walkability index, and subjective measures, including socio-demographic variables and perception of the local environment (e.g., residential density, street connectivity, and crime safety).
(1) Rate of ACS: 18.1% walked or biked 5 days a week, and 25.1% actively commuted at least once a week.
(2) Correlates of ACS: Parent concerns and neighborhood aesthetics were independently associated with ACS. Perceived access to local stores and biking or walking facilities accounted for some of the effect of walkability on ACS. (2) Safety concerns (traffic danger, stranger danger, and getting lost) were higher among drivers, but only significant in bivariate analyses. (2) Predictors of ACS: having enough time in the morning to walk to school (+) and parents feeling that it was safe for children to walk to school (+), and long distance from home to school (−).
(2) Positive associations between active commuting and physical activity.
Merom (2006), Australia. Health & Place. 812 children, 5-12 years. Socio-demographics, parents' perceptions about a safe environment, child's enjoyment of walking, perceived health benefits of ACS, child's level of independence, parents' modes of transport to work. (1) Rate of frequent ACS: 37%; rate of regular ACS: 22%.
Miller (2013): (1) Children were most active after and least active before and during school. (2) Weight was not related to activity. (3) Boys were more confident than girls, whereas parents felt more confident than children did about active transport.
Mota (2007), Portugal. Annals of Human Biology. 705 students, Grades 7-12. Socio-economic position, environmental assessment, including connectivity of the street network, infrastructure for walking and cycling, neighborhood safety, and social environment.
Nelson (2010), Ireland. Journal of Physical Activity and Health: the presence of cycle tracks (+), and the ease of walking/cycling to transit (+).
Price (2011): (2) Several participants expressed concerns about liability issues related to students' ACS. (3) Some reported that schools are not responsible for students' safety once students leave school grounds.
Ridgewell (2009) (2) The number of students using ACS did not significantly differ between schools with a restrictive policy and schools with no restrictive policy. Principals at schools with higher ACS rates were significantly more likely to report that students should consider ACS if residing within one mile, had significantly more enabling environments, and had significantly less restrictive environments. (2) Predictors of ACS: individual ("child prefer to be driven" (−), "no time in the mornings" (−); social ("worry child will take risks" (−), "no other children to walk with" (−), "no adults to walk with" (−), and environmental barriers ("too far to walk" (−), "no direct route" (−).
Positive association: "concern child may be injured in a road accident" and ACS (+).
Journal of the American Planning Association
Distance from school on the street network, five measures of perceived urban form: intersection density, dead-end density, route directness, major roads, and railroads, and measures of perceived convenience (e.g., desire to drop a child off on the way to work, backpack is too heavy) (2) Predictors of ACS: distance (−), intersection density (−), dead ends (−).
(3) Reported perceived barriers by frequency: ease of dropping child off on the way to work, the heaviness of the child's backpack, bad weather, dangerous traffic conditions, high-speed vehicles, lack of complete sidewalks.
Silva (2011), Brazil. Journal of Physical Activity and Health. 1672 students, 11 to 17 years. Socio-demographics, type of school attended, time spent, and perceived barriers.
Trapp (2011), Australia. International Journal of Behavioral Nutrition and Physical Activity. (2) Predictors of ACS: school neighborhood design (in boys) (+), parental confidence in their child's cycling ability (+), parental perceived convenience of driving (+), parental perceptions regarding neighborhood safety issues (i.e., whether the neighborhood is safe enough and the need to cross busy roads) (−), and child's preference to cycle (for both boys and girls) (+).
(2) Students living in different distance intervals are subject to different barriers. (3) Security and safety remain the primary factors of concern for parents to allow their children to ACS, especially for those living at short walkable distances. (4) School, parents', and students' attitudes, grade levels, and allowable grade level all had a significant impact on the students' walking/biking rates.
n = 1281, Grades 1-5. Personal factors, social factors, and parents' perception of the physical environment. (1) Walking was a typical mode for 28% and 34% of trips to and from school, respectively, and mostly accompanied by an adult.
n = 164, Grades 1-7. Socio-demographics, psychosocial factors, perceived environmental factors, children's level and enjoyment of physical activity, and perceived importance of physical activity. (1) Mean number of days walking to school in a week was 1.00 ± 1.62.
(2) Predictors of ACS: perceived importance of physical activity, parents' individual history of transport to school, distance, concern about traffic, and concerns about personal safety.
Methodological quality of reviewed studies
The methodological quality of reviewed studies varied, with the MQS scores ranging between 7 and 20 (Mean = 12.95, SD = 2.95). Likewise, most studies did not report data validity testing (n = 29, 74.4%). Only four studies (10.3%) reported validity testing based on their own data and six studies (15.4%) reported results from other studies. Among the studies that reported validity, four (10.3%) tested face validity, and four (10.3%) tested construct validity.
Regarding participant recruitment, 12 (30.8%) studies recruited parent/child pairs, and 27 (69.2%) recruited only children, parents, or other stakeholders. Two studies (5.1%) did not report any participant characteristics, and 11 studies (28.2%) did not present any information about the school characteristics. Among the studies that reported school characteristics, 26 had the participating schools at different locations, and two studies focused on a single school.
Sixteen studies clearly identified a theoretical model and measured part or all of the relevant constructs; four studies (10.3%) either inferred a theory or presented partial use of a theory; and two studies (5.1%) only showed some, but often weak, evidence of theory use. Among the 16 studies that clearly identified a theoretical framework, 14 studies used the Social Ecological Model; one used the Theory of Reasoned Action; and one developed a modified theoretical model based on Social Ecological Theory and Social Cognitive Theory [44].
As to the conceptualization of perceived barriers, most studies (n =26, 66.7%) did not provide a definition of perceived barriers. Only one study (2.6%) provided a clear definition of perceived barriers and 12 studies (30.8%) described perceived barriers but within a broader category, e.g., perceived environmental characteristics which included both perceived facilitators and barriers. In contrast, most studies clearly described how they operationalized perceived barriers (n =32, 82.1%); five studies (12.8%) slightly operationalized the construct, e.g., not indicating what items were used to measure perceived barriers; and two studies (5.1%) did not include any description on the operationalization method ( Table 6).
The correlation between MQS and TQS was statistically significant (r = .581, p < .001), indicating a positive relationship between the methodological quality and quality of theory utilization of the reviewed studies.
Discussion
The aim of this systematic literature review was to summarize and critically assess the current literature on perceived barriers to children's ACS. To our knowledge, this is the first systematic review evaluating methodological quality and theory utilization of empirical studies on perceptions of children's ACS. A detailed appraisal of the literature suggests several empirical, methodological, and theoretical issues.
Empirical issues
The results of our analysis revealed a need for more ACS studies globally. Most of the studies identified were conducted in the U.S. or Australia. There is a need for more studies to better understand the roles of perceived barriers to ACS in other areas, e.g., Asia and Europe. Although the international literature showed higher rates of ACS in several Asian countries, e.g., the Philippines and China, shifts to more passive commuting modes were anticipated in these countries with continued modernization and increasing car ownership [67,68]. Given that childhood obesity has become a global epidemic, promotion efforts for ACS should begin immediately in Asian countries. Although our findings indicated that compared with U.S. or Australia, the rates of ACS were generally higher in European countries (e.g., Switzerland, Portugal, Ireland, and England), most of the European studies recruited small samples from one area, which limited the representativeness of their findings. Therefore, more evidence from largescale empirical data on ACS within a European country is warranted, as well as studies conducted in more European countries. Considering the diverse environmental characteristics of European cities/countries and that individuals' health behavior can be influenced by characteristics of the geographical area where they live [69], there might be wide variations in perceived barriers to ACS across European countries/cities. With limited studies conducted in areas other than the U.S. and Australia, such comparisons are not meaningful, if not impossible. Future studies using well-established instruments tailored for specific populations are needed in regions other than those reported in this review. Also, there is a need for future research to consider/report walking or biking separately as the usual mode to/from school; around 70% of the reviewed studies examined perceived barriers to the two commuting modes together. To exacerbate the problem, most of the other 30% of studies that examined the two modes separately concentrated on walking, with little attention given to biking. Walking and biking are different behaviors, and, therefore, perceived predictors of biking are very likely to differ from those of walking [60]. More empirical knowledge about perceived barriers specifically to biking to school is required.
This review also highlighted a shortage of ACS studies regarding perceived barriers in rural settings. Among the 39 studies identified, only five studies clearly stated the inclusion of rural locations. The roles of environmental or social characteristics on ACS may vary across different community settings. In terms of rural/urban designation, distinctive natural and living environments of these two areas may illuminate different perceived barriers to ACS, and serve as ideal contexts for natural experiments to make such comparisons. However, few comparative studies examined such potential variations. Given that rural residents are less likely to meet physical activity recommendations compared with urban or suburban residents [70], more work is needed on ACS that specifically focuses on rural-urban variations. It is also worth noting that the criteria for rural-urban classification are different across countries. In England, for example, areas are defined as rural if they fall outside of settlements with more than 10,000 resident population, and six rural categories are classified, including town and fringe, village, and hamlet and isolated dwellings [71]. In comparison, urban areas must encompass at least 50,000 people in the U.S., and rural areas encompass all population, housing, and territory not included within an urban area [72]. Understandably, the inconsistent definitions of urban and rural areas across countries may result in a mismatch, i.e. resulting in a methodological issue influencing the association between perceived barriers and ACS, and consequently affect the accurate comparison between the two areas. Third, more prospective and intervention studies with perceived barriers as predictors of ACS changes are needed. Most of the reviewed studies were cross-sectional, which cannot infer cause-and-effect relationships. To influence policy changes and large-scale environmental interventions, evidence from intervention studies is crucial [73]. Further, prospective studies conducted at a minimum of three time points are recommended, because studies with two observation points are limited in drawing firm conclusions on the direction of the relationships among study variables [74]. It is possible that participants' perceptions of the environment might be influenced by the increased level of ACS at the second point, e.g., after an intervention was conducted [75].
In regard to perceived barriers identified by previous studies, our findings underscored the lack of inquiries into participants' perceptions on policy/regulatory barriers. Most research on participants' perceived barriers to ACS used a couple of established instruments that focused on factors at the personal, physical and social environment levels, thus leaving policy as an under-researched area. Policy issues can influence individuals' decision-making regarding ACS. For example, different countries or districts may have different school siting or school choice policies, which can influence their commuting distance and availability of viable travel modes [76,77]. Individual schools may also have opposing school bus policies that discourage ACS, e.g., grade/age minimums for ACS or policies requiring parents to designate their child as a walker or a rider [77,78]. Identification of participants' perceived policy barriers could inform possible policy changes in support of ACS, while neglect of these potential barriers may result in less effective interventions.
Methodological issues
Assessment of the methodological quality of the reviewed studies raised several methodological and analytical concerns. One major limitation was the lack of consistent definition for ACS. Great variation was observed in the proposed definitions and measurement of ACS. Although many studies defined ACS as walking or biking to school usually, researchers did not clarify what "usually" means, e.g., whether it's over 3 days a week or 4 days a week. Some studies defined ACS as walking or biking at least once a week. Moreover, when used as the dependent variable, ACS was measured categorically in some studies but continuously in others, e.g., as frequency of ACS or percentage of ACS children, which compromised the generalizability of identified perceived correlates. Although there's no "golden rule" for defining ACS, researchers should at least provide a valid rationale for the use of specific definitions and measurements of ACS. For example, health researchers who are more interested in the relationship between ACS and health outcome may prefer more detailed or rigorous measurements such as frequency and duration of ACS, which are more relevant for long-term health benefits [10]. In this case, dichotomizing ACS may be less appropriate.
Second, multiple studies applied univariate or bivariate statistical techniques and failed to justify their applications. When these techniques are used to analyze the association between multiple determinants and an outcome variable, biased or misleading results may be produced. To correctly assess the complicated relationships among the variables, we need more sophisticated methods which allow for modeling multiple variables and diverse pathways among them. Further, given that most ACS data are school-based or district-based, we recommend researchers resort to multilevel or hierarchical techniques that can effectively separate individual-level effects from cluster-level effects [79]. Advanced statistical techniques may not be necessary for all research questions, but researchers need to provide valid rationale for using simpler methods in multivariate cases. Otherwise, results should be interpreted with caution.
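As a minimal sketch of the kind of clustering-aware analysis recommended above, the snippet below fits a random-intercept mixed model with statsmodels; the data file and column names (acs_freq as a continuous ACS frequency outcome, barrier_score, distance_km, and school_id as the clustering unit) are hypothetical placeholders, and studies with a dichotomous ACS outcome would instead need a multilevel logistic model.

```python
# Minimal sketch (not from the reviewed studies): a random-intercept model that
# separates school-level from individual-level variation, assuming a data frame
# with hypothetical columns acs_freq (days/week of ACS), barrier_score,
# distance_km, and school_id.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("acs_survey.csv")  # hypothetical file

# Linear mixed model: fixed effects for perceived barriers and distance,
# random intercept for each school (the clustering unit).
model = smf.mixedlm("acs_freq ~ barrier_score + distance_km",
                    data=df, groups=df["school_id"])
result = model.fit()
print(result.summary())
```

The random intercept absorbs between-school differences, so the fixed-effect estimate for the perceived-barrier score reflects individual-level variation within schools rather than a mix of the two levels.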
Also, most studies that conducted correlation tests did not include or report the inclusion of control variables in their analyses. Leaving out important control variables can cause model specification bias and render the interpretation of results suspicious [80]. Lack of a theoretical basis may account for the lack of control variable(s) in data analysis, as the selection of control variables is mainly theory-driven. Although control variables can also be chosen based on the statistical tests, we recommend ACS researchers to utilize theory to more effectively conceptualize the multi-level constructs related to behavioral outcomes. For those who included control variables, socioeconomic factors and distance were the most common variables. Researchers may also be interested in how the association between perceived barriers and ACS is modified by other objective environmental characteristics such as neighborhood walkability and land use types. Previous research has demonstrated the relative influence of some urban form variables on the probability of a child walking or biking to school [44]. Although individuals' perceptions of the environment around them have been suggested as a stronger predictor of children's active commuting behavior than physical environment [17], including and testing objective measures as mediators can further strengthen existing evidence and provide empirical support for more cost-effective interventions. To achieve this goal, collaborations among scholars from various disciplines such as public health, urban planning, and transportation are encouraged.
Another concern was the lack of reporting of multicollinearity diagnostics in the studies. In the presence of multicollinearity, regression estimates are unstable. Multicollinearity can misleadingly inflate the standard errors of coefficients and make some variables statistically insignificant when they should otherwise be significant [81]. Moreover, when multicollinearity exists, the simultaneous analysis of interrelated constructs may yield spurious or confounded results whereby it is impossible to distinguish the individual effects. To minimize the risk of multicollinearity, researchers should avoid including predictors that are conceptually identical, regardless of the sample size. Other alternatives for dealing with multicollinearity include ridge regression, combining independent variables into a single index, or conducting factor analysis [81,82]. It is also possible that some researchers tested multicollinearity but did not report the diagnostics in their papers. To assure the audience of the studies' methodological rigor, we suggest that researchers report multicollinearity testing in their papers.
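A minimal sketch of how such diagnostics could be reported, using variance inflation factors (VIF) from statsmodels; the data file and predictor names are hypothetical placeholders.

```python
# Minimal sketch of a multicollinearity check with variance inflation factors;
# the file and predictor names are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("acs_survey.csv")  # hypothetical file
X = sm.add_constant(df[["barrier_score", "distance_km", "income"]])

vif = pd.DataFrame({
    "variable": X.columns[1:],
    "VIF": [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
})
print(vif)  # VIF values above roughly 5-10 are commonly taken as a warning sign
```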
The quality of the reviewed studies was further compromised by the authors' neglect of reliability and validity testing. Most studies either did not mention data reliability/validity or reported the test result based on previous studies' data. Reliability and validity testing is critical because measurement errors can directly affect the results and their interpretation [83]. Researchers can either evaluate the score reliability and validity using their own samples or rely on published sources [84]. However, reliability and validity evidence from established instruments is applicable only if researchers use the same instrument in the same form and the instrument has been validated in a population similar to their samples [85]. Published reliability/validity coefficients may not be generalizable to a particular sample under consideration [84]. Despite the importance of reporting reliability and validity testing, many journals do not include specific requirements for empirical studies to report psychometric properties of the instrument being used and scores being analyzed. To facilitate the publication of high quality research, we recommend that journals refine their editorial guidelines and require authors to report reliability and validity coefficients for the data being analyzed. Researchers' awareness regarding the roles of reliability and validity also need to be increased to ensure the correct interpretation of their results.
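As one illustration, internal-consistency reliability (Cronbach's alpha) for a perceived-barriers scale can be computed directly on the study's own responses rather than borrowed from earlier publications; the sketch below assumes a respondents-by-items matrix of Likert ratings and is not taken from any of the reviewed studies.

```python
# Minimal sketch of computing Cronbach's alpha on the study's own data;
# the demo matrix of Likert responses is hypothetical.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(0)
demo = rng.integers(1, 5, size=(200, 8))  # 200 respondents, 8 Likert items
print(round(cronbach_alpha(demo), 3))
```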
Theoretical issues
The level of theory utilization among the reviewed studies was low. Over half of the studies were not theoretically driven or used theories superficially. Theories provide a framework for identifying determinants of particular health behaviors, which constitutes a critical initial step in the development of successful interventions [86]. The lack of theoretical basis might account for the overarching number of exploratory studies among the reviewed studies, which typically assume only their direct effects on ACS without considering interaction among predictor variables. The lack of theory use posed an added concern regarding "kitchen sink" regressions in which any variables available were included. When selecting a variable, its theoretical relevance should be as important as, if not more important than, its statistical significance. The relatively low level of theory utilization suggests that health behavior studies need to advance further in sophistication of study designs [28]. To overcome this shortcoming, researchers need to raise their awareness of using theories, not only in funding application but also for manuscript development. Journals may also need to expand the word limits they placed on manuscript submissions to ensure researchers have enough space to elaborate on theory utilization [27,28,87]. Despite the importance of theory use, it was possible that some researchers did not use any theory because the research area was rather new with no earlier model as a basis; therefore, they considered their studies as exploratory, rather than hypothesis-driven.
Our findings also highlighted the common use of the Social Ecological Models (SEM). All except two of the reviewed studies that identified a theoretical framework used SEM. Our result was in line with findings from previous reviews of physical activity research that SEM has been the most commonly adopted theoretical framework [88,89]. SEM provides a comprehensive framework for understanding the multi-level determinants of health behaviors [15,90]. Recently, researchers have used SEM to support a new emphasis on environmental causes of behaviors [86,89]. While the consistent use of the SEM facilitated the process of synthesizing and comparing findings, the SEM lacks sufficient specificity regarding specific characteristics at each level. Consequently, other significant factors that may work with hypothesized factors at each level may be neglected. For example, perceived barriers as a personal level construct may be influenced by other social cognitive factors at the same level such as attitudes, self-efficacy, and intention; neglecting these constructs may result in an incomplete picture and consequently biased results. Unfortunately, these important social cognitive constructs were rarely investigated within the ACS context [11]; it might be time to put these factors back into equation.
Another weakness of the research was the divergence between conceptualization and operationalization of perceived barriers. Only one study clearly defined perceived barriers; most authors simply assumed that readers knew what "perceived barriers" meant. With this assumption, most of the studies skipped the conceptualization stage and directly operationalized perceived barriers by describing survey items that were used to measure the construct. When a construct is poorly conceptualized, it is very unlikely that the construct is properly operationalized. To make the situation even worse, most of the reviewed studies did not conduct a validity test. Consequently, the quality of construct measurement and the interpretation of results were questionable. For future ACS studies, improving the conceptualization and operationalization of investigated constructs should be a high priority.
Implications for practice
The statistically significant association between MQS and TQS of the reviewed studies not only confirmed the internal consistency of the instruments that we developed, but also had great implications for future research on perceived barriers to ACS. When researchers used theory to guide inquiry, they tended to utilize more sophisticated analytical techniques. Similarly, when researchers resorted to more advanced statistical methods in their data analyses, they were more likely to ground their research questions in theory. If the reciprocal relationship between theory use and data analysis holds true, then the low level of theory use and generally undesirable methodological quality of the reviewed studies raised an important practical question as to the reliability of their findings: Can policy makers trust the perceived barriers identified by the researchers and design ACS interventions accordingly? It appears the research in this field still holds room for improvement, and its quality could be considerably improved if researchers (1) pay more attention to the theoretical grounding of their inquiry, and (2) improve the methodological rigor of studies.
Limitations and strengths
This review is not without limitations. First, we limited our search to articles published in English, and therefore relevant literature published in other languages was excluded. Second, with the heterogeneity in the definition of ACS and the absence of standardized measurement tools of perceived barriers, inter-study comparisons must be considered with caution. Third, we chose to focus on perceived barriers in this review mostly because perceived barriers is an important construct in many health theories and represents one of the most commonly investigated constructs in ACS research. Future reviews are warranted to assess how and to what extent other theoretical constructs or measures were considered in the literature. For example, perceived facilitators is also an important construct in ACS research and does not necessarily mean the opposite of perceived barriers. Also, objective measures, though not well represented in commonly used theories, were widely investigated in ACS studies, and the methodological and theoretical issues we identified in this review might influence their effect on ACS as well. Furthermore, this review was limited by the relatively small sample of studies to evaluate trends in theory use over years and to compare studies by sub-groups or disciplines. Despite the limitations, the strengths of this review need to be recognized. First, it used an extensive search strategy to locate articles in six databases and rigorously screened articles through well-defined inclusion/exclusion criteria. Second, the instruments that we developed for assessing the methodological and theoretical qualities of existing ACS literature were based on well-established instruments and tailored for ACS studies. The instruments served well to capture existing discrepancies in the literature and provided detailed insight for future studies.
Conclusions
Following a rigorous assessment process, this systematic review has provided a detailed discussion of empirical, methodological, and theoretical issues in the current literature on active transport, with regard to perceptions of barriers preventing children from ACS. Based on our findings and in light of the limitations of this review, we have several empirical, methodological, and theoretical recommendations for advancing the quality of future studies on perceived barriers to ACS.
Empirically, increasing the diversity of study regions and samples should be a high priority, particularly in Asian and European countries, and among rural residents. More studies are also needed to examine walking and biking as separate active commuting behaviors. Regarding the relation between individual perceptions and ACS behavior, more prospective and intervention studies conducted at multiple time points are needed to determine the causal mechanism linking the perceived factors and ACS. Moreover, future researchers should also include policy-related barriers in their inquiries. Methodologically, the conceptualization of ACS should be standardized, or at least well rationalized, in future studies to ensure the comparability of results. Preferably, definitions of ACS need to reflect the frequency and magnitude of the behavior more accurately. Second, researchers' awareness needs to be increased for improving the methodological rigor of studies, especially in regard to appropriate statistical analysis techniques, control variable estimation, multicollinearity testing, and reliability and validity reporting. Theoretically, future researchers need to first ground their investigations in theoretical foundations. Further, efforts should be devoted to making sure theories are used thoroughly and correctly. Important theoretical constructs, in particular, also need to be conceptualized and operationalized appropriately to ensure accurate measurement. By reviewing what has been achieved, we hope this review offers insights for more sophisticated active transport studies in the future. | 2016-05-04T20:20:58.661Z | 2014-11-18T00:00:00.000 | {
"year": 2014,
"sha1": "6ec412899235fa45d34320f2c5c4dc508b66fb2a",
"oa_license": "CCBY",
"oa_url": "https://ijbnpa.biomedcentral.com/track/pdf/10.1186/s12966-014-0140-x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a25c3eaef9630585ba2ae8b714731da9347585b4",
"s2fieldsofstudy": [
"Education",
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
226237361 | pes2o/s2orc | v3-fos-license | Elliptic $q,t$ matrix models
The Gaussian matrix model is known to deform to the $q,t$-matrix model. We consider further deformation to the elliptic $q,t$ matrix model by properly deforming the Gaussian density as well as the Vandermonde factor. Properties of an associated basis of symmetric functions that provide the matrix model property $\sim {\rm char}$ in the deformed elliptic case are discussed.
Introduction
Eigenvalue matrix models are associated with many different issues in modern string theory. Basically, they appear in two different forms: • models in Miwa variables with logarithimic potential, like • and the standard matrix models of the Gaussian type in time variables [4], like Integrals (1) and (2) per se are directly related by Miwa transform p k = a x k a , but things become more involved after various deformations of logarithms, exponentials and integrals. The logarithmic models attract a lot of attention these days, because they are associated with both the AGT correspondence [1] and with CFT/Painlevé correspondence [2], where one applies peculiar techniques based on various Selberg and Kadell type formulas [3], while, in the case of ordinary time-variables, one has direct relation to KP/Toda integrability and Virasoro-like constraints, which get obscure after reduction to a few Miwa variables.
On the technical side, matrix models of these both types are associated with systems of symmetric polynomials. In particular, matrix models in Miwa variables are associated with the Jack polynomials [3], or their generalized versions [5]. There is also a q, t-deformation of these matrix model, which is associated with the Macdonald [6] and generalized Macdonald [7] polynomials. Moreover, a further elliptic extension of some of Selberg type formulas was also recently proposed [8].
As for the matrix models in time variables, a reference example of such a model is the Gaussian Hermitian matrix model. A connection of such a matrix model with a set of symmetric functions goes the following way: a defining feature of these matrix models is their superintegrability property [9,10], which claims that the average of a properly chosen symmetric function is proportional to ratios of symmetric functions on a proper loci: < char >∼ char. In the particular case of the ordinary N × N Gaussian Hermitian model, this symmetric function is an SL(N ) character, the Schur polynomial.
An immediate deformation of the Gaussian Hermitian model to the q, t matrix model was recently presented in [11]. It does not come as a surprise that the set of symmetric polynomials associated with such a deformation turns out to be the Macdonald polynomials. Technically, such an approach proved to be a very powerful tool to construct deformations of systems of symmetric polynomials. Hence, the next important step to be done is to construct an elliptic deformation of the q, t matrix model and an associated system of symmetric polynomials.

Indeed, since the Schur polynomials associated with the Gaussian Hermitian model are eigenfunctions of the Calogero Hamiltonians, while the Macdonald polynomials associated with the q, t matrix model are the eigenfunctions of the trigonometric Ruijsenaars Hamiltonians, one may expect that the symmetric functions associated with the elliptic q, t model are the eigenfunctions of the double elliptic Hamiltonians (or, rather, of the Hamiltonians dual to the elliptic Ruijsenaars Hamiltonians). Recently, we proposed [12,13] candidates for this role, and the subject is very topical. One of the problems is that, although there is a candidate for the role of the eigenfunctions, there are no Hamiltonians for them! Indeed, the naive Hamiltonians proposed in [14,15] have distinct eigenfunctions (see [16, sec.6.3]). It is still an open question if these two systems of functions can be related by a unitary transform.
In this paper, we will try to make the first step: to introduce a proper definition of the elliptic deformation of q, t matrix model. The problem of constructing a corresponding set of symmetric polynomials will be reported elsewhere (see first results in [17]).
Notation. We use the notation $S_R(z_i)$ for the Schur polynomials as polynomials of the symmetric variables $z_i$, and $S_R\{p_k\}$ for the same polynomials as functions of the power sums $p_k := \sum_i z_i^k$, and similarly for other symmetric polynomials. We define the q-Pochhammer symbol
$$(z;q)_n := \prod_{k=0}^{n-1}\left(1-zq^k\right), \qquad (z;q)_\infty := \prod_{k=0}^{\infty}\left(1-zq^k\right).$$
We also define

2 Standard matrix models
Hermitean Gaussian matrix model
The standard Hermitean Gaussian matrix model is defined by the partition function
$$Z_N\{t\} := \frac{1}{V_N}\int dH\, \exp\Big(-\frac{1}{2}\,{\rm Tr}\, H^2 + \sum_k t_k\, {\rm Tr}\, H^k\Big), \qquad (5)$$
where H is a Hermitean N × N matrix, dH is the Haar measure on the Hermitean matrices, $V_N$ is the volume of the unitary group U(N), and the integral is understood as a formal power series in the variables $t_k$. This partition function is a generating function of all gauge-invariant correlators $\big\langle \prod_m {\rm Tr}\, H^{k_m}\big\rangle$. One can also integrate over the angular variables in (5) in order to obtain the eigenvalue integral
$$Z_N\{t\} \sim \int \prod_i dz_i\, \Delta^2(z)\, \exp\Big(-\frac{1}{2}\sum_i z_i^2 + \sum_k t_k \sum_i z_i^k\Big), \qquad \Delta(z):=\prod_{i<j}(z_i-z_j), \qquad (7)$$
where $z_i$ are the eigenvalues of H. To deal with the latter integral, one can use the moments of the density $\rho_H(z)$. The gauge-invariant correlators (6) are correlators of symmetric functions of the matrix eigenvalues $z_i$. Hence, in order to completely solve the model, one needs to know all correlators of any complete set of symmetric functions. Here we notice that the basis of Schur functions $S_R(z_i)$ is the most convenient one: the Gaussian average of a Schur function is given by a ratio of Schur functions evaluated at special loci, formula (10). This is the formula that provides the complete solution of the model.
q, t-matrix model
The q, t-matrix model is defined [11] directly in terms of the eigenvalue integral (7), with correlators $\big\langle \prod_m \sum_i z_i^{k_m}\big\rangle_{(q,t)}$, where $t := q^{\beta}$, ξ is a parameter, the q-Pochhammer symbol is the one defined above, the integral is defined to be the Jackson integral, and the q-deformed density reproduces the Gaussian density $\rho(z) := \exp\big(-\tfrac{1}{2} z^2\big)$ in the limit $q \to 1$.
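A minimal numerical sketch of the Jackson integral that replaces the ordinary integral here, assuming the standard one-sided definition $\int_0^a f(x)\, d_q x = (1-q)\,a \sum_{n\geq 0} q^n f(a q^n)$; the test density is only the $q \to 1$ Gaussian limit $\rho(z) = e^{-z^2/2}$, not the full q,t-deformed density of the model.

```python
# Minimal numerical sketch of the Jackson integral used in place of the ordinary
# integral; the standard definition on [0, a] is assumed, and the test density is
# the q -> 1 Gaussian limit rho(z) = exp(-z^2/2), not the q,t-deformed density.
import math

def jackson_integral(f, a, q, terms=4000):
    """Approximate the Jackson q-integral of f over [0, a]."""
    return (1 - q) * a * sum(q**n * f(a * q**n) for n in range(terms))

rho = lambda z: math.exp(-z**2 / 2)

for q in (0.5, 0.9, 0.99):
    # symmetric integral over [-a, a] as a sum of two one-sided Jackson integrals
    val = jackson_integral(rho, 3.0, q) + jackson_integral(lambda z: rho(-z), 3.0, q)
    print(q, round(val, 4))

# As q -> 1 this approaches the ordinary integral of exp(-z^2/2) over [-3, 3],
# which is close to sqrt(2*pi) ~ 2.5066.
```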
Performing Jackson integral
Now let us see that this q-deformation of the density leads to the most natural q-deformation of the density moments.
We use the Andrews-Askey formula in the particular case of $\mu_1 = \mu_2 = \mu$. Its further specialization to $\mu = 0$ gives the normalization of the density, and expanding in $\mu^2$ then gives the moments of $\rho(z)$. Since ξ enters trivially as a scaling factor, from now on we put ξ = 1.
Matrix model averages
Thus, we could see that the q-deformation of the density just replaces the double factorials by the quantum double factorials. Now we can calculate the correlators in the q, t-model (11). The most natural basis of symmetric functions in this case is given by the Macdonald polynomials $M_R(z_i)$, and one gets a formula similar to (10). In particular, at N = 1, the only non-zero Macdonald polynomial is $M_{[r]}(z) = z^r$, and there is an identity (see (24)) which is quite non-trivial, because both Macdonald polynomials on the r.h.s. of this formula do depend on t, but the ratio does not! In order to prove (26), one can multiply both its sides by $\frac{(t;q)_k}{(q;q)_k}\,\lambda^r\, M_{[r]}\big\{\tfrac{1}{1-t^k}\big\}$ and use the Cauchy formula and (22) in order to see that (26) is equivalent to an identity which is just the Andrews-Askey integral (16) at $\mu_1 = 0$, $\mu_2 = \lambda$.
3 Elliptic q, t matrix models
The definition
Let us define the θ-function¹
$$\theta_w(z) := (z; w)_\infty\, (w/z; w)_\infty \qquad (30)$$
and the elliptic Γ-function. Now, in order to define an elliptic q, t matrix model, one has to specify two ingredients: (i) a proper elliptic generalization of the measure (26), and (ii) a proper elliptic generalization of the Vandermonde factor. In order to generalize (26), one can naturally define it from the generalization of (24), while the Vandermonde factor is given just by the elliptic Γ-function. Thus, finally, we define the elliptic q, t matrix model (compare with [19] at t = q, i.e. β = 1). Here the yet-to-be-defined integral $\int \ldots\, d_{\rm ell} z$ is technically fixed by formula (33).
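A minimal numerical sketch of the building blocks just defined, with the q-Pochhammer symbol truncated to a finite product; the truncation depth and sample values are arbitrary.

```python
# Minimal numerical sketch of the q-Pochhammer symbol (z; q)_infinity as a
# truncated product and the theta function theta_w(z) of eq. (30);
# truncation depth is arbitrary and requires |q| < 1.
def qpochhammer_inf(z, q, terms=200):
    """Truncated (z; q)_infinity = prod_{k>=0} (1 - z q^k)."""
    prod = 1.0
    for k in range(terms):
        prod *= (1 - z * q**k)
    return prod

def theta(z, w, terms=200):
    """theta_w(z) = (z; w)_inf * (w/z; w)_inf, following eq. (30)."""
    return qpochhammer_inf(z, w, terms) * qpochhammer_inf(w / z, w, terms)

w = 0.1
for z in (0.3, 0.7, 1.5):
    print(z, theta(z, w))

# A simple check: theta_w(z) vanishes at z = 1 and at z = w,
# since the corresponding Pochhammer factor contains (1 - 1) = 0.
print(abs(theta(w, w)) < 1e-12)
```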
Factorization property
There is now an important step to be done: one has to find a counterpart $E_R$ of the Macdonald polynomials that would provide the proper basis with property (25); corresponding deformations of the special loci also have to be realized. In particular, at N = 1, there should be an identity generalizing (26). One could expect that these polynomials are the elliptic GNS polynomials [12,13] (a generalization of the Noumi-Shiraishi polynomials from [20]), but this still has to be checked. Note that there are two sets of the elliptic GNS polynomials orthogonal to each other, and which one of them would emerge is an open question. Since, of these two sets, the conjugate polynomials create a ring consistent with the tensor product of representations of SL(N) (i.e. the corresponding generalized Littlewood-Richardson coefficients do not vanish iff they do not vanish for the corresponding Schur functions), they are the most plausible candidates. The first few examples of the elliptic GNS polynomials and of the conjugate ones are expressed through the combination
$$\zeta(q, t, w) := \frac{\theta_w(q^2)\,\theta_w(t)}{\theta_w(qt)\,\theta_w(q)}.$$
Note that (37) does not depend on t (see (33)), since all the dependence on t enters through the Vandermonde factor. Hence, one may expect that the factorization properties related to (26) would also persist at t = q, when the polynomials $E_R$ reduce to the Schur polynomials. In other words, one has to find an elliptic deformation of the loci that preserves the factorization property, in particular, for the Schur polynomials. It can actually be done, and will be reported elsewhere [17].

¹ It differs by a factor from the standard odd θ-function.
Conclusion
To conclude, we suggested two different definitions of the elliptic matrix model: an explicit eigenvalue integral (35) and an explicit prescription for the time-dependent character averages (36). Usually it takes quite some time to establish an exact relation/equivalence between two such formulations, and we do not try to do it in the present short note. Still, we hope that it will not take too long, since an elliptic model is badly needed in modern studies of AGT relations and DIM algebras. For the first steps in this direction, see [17]. | 2020-11-04T02:00:58.608Z | 2020-11-03T00:00:00.000 | {
"year": 2020,
"sha1": "622c0a1ed402639885bcb9e71793e9b65ac1f465",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.physletb.2021.136196",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "feafe0d493e47e2bc193c34ace51740b61361f4b",
"s2fieldsofstudy": [
"Physics",
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
252912392 | pes2o/s2orc | v3-fos-license | A cross-sectional study on assessing depression among hemodialysis patients
Depression is the most common psychiatric disorder experienced by hemodialysis patients. Failure to measure depression may diminish their well-being. The main objective of this study is to assess depression in hemodialysis patients. The primary goal is to find out the prevalence of depression among chronic kidney disease (CKD) patients. The secondary goal is to find the association between age, sex, and the severity of depression among patients with CKD. This cross-sectional study was performed at the Nephrology Department of King Fahad Hospital, Tabuk, from December 2021 to April 2022. Depression was assessed using a validated Arabic version of the Zung Self-Rating Depression Scale, a self-administered depression questionnaire. The mean age of subjects was 42.13 years (standard deviation = 15.65), most were in the 18–29 and 40–49 year age groups (n = 59, 24.58% each), and the majority were male (n = 128, 53.33%). The prevalence of depression among hemodialysis patients was 74.58%. The majority of patients had mild depression (n = 175, 72.92%). In our study, no significant difference in depression was found across age groups or genders among patients undergoing hemodialysis. Even though the prevalence of depression was high, most cases were in the mild category. Depression should be assessed frequently, and psychological counseling should be provided to improve the well-being of patients.
INTRODUCTION
Depression is the most common psychiatric disorder experienced by hemodialysis patients. [1] It is an established mental health issue in end-stage renal disease (ESRD) patients, leading to increased morbidity and mortality. [2] Various factors are involved in triggering depression in hemodialysis patients, including comorbidities, chronic pain, and disturbances in sleep. [3] Depression is also influenced by the treatment and by interactions with health-care professionals. Patients' psychological needs should be addressed to improve their well-being. [4] Failure to measure depression may diminish the well-being of the patients. [5] Even though depression is a major disorder, next only to hypertension, it is often left untreated in hemodialysis patients. [6] The prevalence of depression among dialysis patients ranges from 23% to 39%. [7] Depression is a significant illness that heavily affects psychosocial functioning and dramatically reduces a person's well-being. Depression is widespread in chronic kidney disease (CKD) patients. Mild depression is commonly found in CKD patients, and the patient's culture is strongly linked to depression. [8] Patients with CKD suffer from psychological distress from the beginning of their illness. Many patients learn how to manage this over time, but severe depression affects at least one of every four people with CKD at any stage. Furthermore, a large percentage of patients show subclinical depression symptoms, leading to a decrease in the patient's quality of life. [9] A recent study has shown a link between symptoms of depression and mortality and morbidity in dialysis patients. [10] ESRD patients have a significant rate of depression, which has been well documented. [11] Depression has a multifaceted etiology involving social, psychological, and biological causes in both healthy and chronically ill populations. The higher prevalence of depression among chronic disease populations is due to considerable and long-term changes in their psychology, environment, and pathology. Depression in dialysis patients is characterized by hopelessness and despair. [12] The routine of dialysis treatment, as well as variable degrees of health, may have an impact on feelings of despondency, which can lead to depressive episodes. Patients with ESRD suffer from both mental and physical losses. Treatment dependence, compounded by a slew of losses, can lead to delusions of grandeur. [11] The main objective of this study is to assess depression in patients undergoing hemodialysis. The primary goal is to find out the prevalence of depression among CKD patients. The secondary goal is to find the association between age, sex, and the severity of depression among CKD patients.
Study design and population
This cross-sectional study was performed at the Nephrology Department of King Fahad Hospital, Tabuk, from December 2021 to April 2022.
Inclusion criteria
Hemodialysis patients visiting the Nephrology Department of age ≥18 years of both sexes were included in the study.
Exclusion criteria
CKD patients treated by modalities other than hemodialysis and those who were diagnosed with psychiatric illness were excluded from the study.
Ethical concern
Ethical approval was obtained from the Tabuk Institutional Review Board (UT-077/022/116). The consent to participate was obtained from all the participants.
Data collection
Demographic details such as age, sex, education level, and marital status were collected from the patients. In addition, the comorbidities, duration of hemodialysis, and number of hemodialysis sessions per week were recorded. Depression was assessed using a validated Arabic version of the Zung Self-Rating Depression Scale. [13] It is a self-administered questionnaire with 20 questions assessing depression. Each question is scored on a scale of 1-4. Questions 1, 3, 4, 7, 8, 9, 10, 13, 15, and 19 are scored 1 for "rarely" and 4 for "always", whereas questions 2, 5, 6, 11, 12, 14, 16, 17, 18, and 20 are reverse-scored, with 4 for "rarely" and 1 for "always". The total score ranges from 20 to 80. A score of 49 or below is considered normal, 50-59 indicates mild depression, 60-69 moderate depression, and 70 and higher severe depression.
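To make the scoring procedure concrete, the following is a minimal illustrative sketch, not part of the original study (which administered a paper questionnaire), of how a Zung SDS total score and depression category could be computed. The item numbering and cutoffs follow the description above; the function and variable names are ours.

```python
# Illustrative sketch of Zung Self-Rating Depression Scale (SDS) scoring.
# `answers` maps item number (1-20) to a response of 1-4
# (1 = "rarely/none of the time" ... 4 = "most/all of the time").

REVERSE_ITEMS = {2, 5, 6, 11, 12, 14, 16, 17, 18, 20}  # reverse-scored items

def zung_total(answers: dict[int, int]) -> int:
    if set(answers) != set(range(1, 21)):
        raise ValueError("All 20 items must be answered")
    total = 0
    for item, response in answers.items():
        if not 1 <= response <= 4:
            raise ValueError(f"Item {item}: response must be 1-4")
        # Reverse-scored items receive 4 for "rarely" down to 1 for "always".
        total += (5 - response) if item in REVERSE_ITEMS else response
    return total  # possible range: 20-80

def zung_category(total: int) -> str:
    if total <= 49:
        return "normal"
    if total <= 59:
        return "mild depression"
    if total <= 69:
        return "moderate depression"
    return "severe depression"

# Example: answering "2" on every item gives a total of 50 -> mild depression.
example = {i: 2 for i in range(1, 21)}
print(zung_total(example), zung_category(zung_total(example)))
```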
Statistical analysis
IBM SPSS Statistics, version 21 (IBM Corp., Armonk, NY, USA) was used. Pearson's correlation was performed to assess the correlation between age and depression. Fisher's exact test was performed to assess the association between gender and depression. P < 0.05 was considered statistically significant.
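The analyses themselves were run in SPSS; purely as a hedged illustration, an equivalent computation could be scripted as below. The file name and column names are hypothetical placeholders, not the authors' actual dataset.

```python
# Illustrative re-implementation of the reported analyses (SPSS was used in the study).
# "depression.csv" and its columns (age, gender, zung_score) are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("depression.csv")

# Pearson's correlation between age and the Zung depression score.
r, p_corr = stats.pearsonr(df["age"], df["zung_score"])
print(f"Pearson r = {r:.3f}, p = {p_corr:.4f}")

# Fisher's exact test on a 2x2 table of gender vs. depression (present/absent),
# using the >= 50 cutoff described above to define depression.
depressed = df["zung_score"] >= 50
table = pd.crosstab(df["gender"], depressed)
odds_ratio, p_fisher = stats.fisher_exact(table.values)
print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p_fisher:.4f}")
```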
RESULTS
Depression was assessed in 240 hemodialysis patients. All patients undergoing dialysis in our study had Stage 5 kidney disease, or ESRD. The mean age of subjects was 42.13 years (standard deviation = 15.65); most were in the age groups of 18-29 and 40-49 years (n = 59; 24.58%), and the majority were male (n = 128; 53.33%) [Table 1].
The comorbidities of the patients are depicted in Figure 1. Diabetes and hypertension were the common comorbidities seen in the largest proportion of patients (40%), followed by hypertension alone (11.67%).
Depression was differentiated into normal, mild, moderate, and severe according to the Zung Self-Rating Depression Scale. In our study, the prevalence of depression among hemodialysis patients was 74.58%.
The category of depression based on age group and gender is shown in Table 2. Mild depression was most frequent in patients in the age group of 18-29 years (n = 50, 20.83%). Moderate depression was observed only in patients of the age group 30-39 years. Males had more mild depression (n = 96, 40%) compared to females (n = 79, 32.92%). Moderate depression was observed in females only.
Pearson's correlation was performed to find the association between age and depression. There was no correlation between age and depression (r = −0.084). Fisher's exact test was performed to assess the association between gender and depression. There was no association between gender and depression (P = 0.1038).
DISCUSSION
Depression is the most prevalent psychiatric illness in CKD patients. [11] Depression affects the patient's well-being, impairing their role in family and work. [14,15] In our study, the prevalence of depression among hemodialysis patients was 74.58%, regardless of severity. This result was not on par with other studies, which reported that the prevalence of depression among patients undergoing dialysis is in the range of 19%-60%. [16,17] More specifically, among hemodialysis patients the level of depression was 38.2%. [4] The depression prevalence was found to be between 40% and 55% in another study, [18] which is almost half of what we observed. Others have reported a high depression prevalence in CKD. [2,[19][20][21] The different assessment tools used for assessing depression and the characteristics of the populations may have contributed to this variation.
The depression levels among hemodialysis patients are close to those in patients with cancer. Even though CKD is not a fatal disease, it may affect the well-being of a patient to a great extent. [22] ESRD patients have more depression than patients with other diseases. [23] Depression among ESRD patients was found to be much higher than in other disease conditions. [24] In 2017, a Saudi Arabian study reported that the majority of patients with kidney diseases showed some level of depression. [25] In our study, we only found patients with mild (72.92%) and moderate (1.66%) depression levels.
In this study, more depressed patients were among the younger age groups of 18-29 years (20.83%), and there was a difference in the severity of depression among males and females. Males were more depressed than females. Moderate depression was found only in females and not in males. This is in accordance with other studies. [26][27][28] Other studies showed inconsistent results with our study, in which females were more depressed than males. [29] No association between depression and CKD stages was found in other studies. [30,31] A study reported an association between psychological aspects and the progression of CKD. [32] Studies have supported that Stage 4 or 5 CKD patients were more depressed than those with other stages of disease. [33,34] In our study, all the patients were of CKD Stage 5 or ESRD.
No significant difference was found for depression among different age groups or genders with ongoing hemodialysis, in our study. Other studies also reported that there was no association between age and sex of patients with depression among patients with CKD. [35,36] The economic costs play a great part causing depression in patients undergoing dialysis. [37] In Saudi Arabia, dialysis is provided free of cost in governmental hospitals. Hence, we could not perform a pharmacoeconomic analysis. The mild depression observed in almost all the patients may be due to their religious beliefs and the cost covered by the government.
Strengths and limitations
This study explored the prevalence of depression in Tabuk, which had not been assessed before.
The major limitation of this study is that it was a single-center study with convenience sampling, so the results may not be generalizable. The assessment of depression was based solely on self-report, and no clinical diagnosis was performed.
CONCLUSION
Depression is a common problem for dialysis patients, affecting them on both psychological and physical levels.
Even though the prevalence of depression was high, mostly they were in the mild category. Patients' religious beliefs may be attributed to this mild depression. Depression should be assessed frequently, and psychological counseling should be given to improve the patient's well-being.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
Contiguous Gene Syndromes and Hearing Loss: A Clinical Report of Xq21 Deletion and Comprehensive Literature Review
Given the crucial role of the personalized management and treatment of hearing loss (HL), etiological investigations are performed early on, and genetic analysis significantly contributes to the determination of most syndromic and nonsyndromic HL cases. Knowing hundreds of syndromic associations with HL, little comprehensive data about HL in genomic disorders due to microdeletion or microduplications of contiguous genes is available. Together with the description of a new patient with a novel 3.7 Mb deletion of the Xq21 critical locus, we propose an unreported literature review about clinical findings in patients and their family members with Xq21 deletion syndrome. We finally propose a comprehensive review of HL in contiguous gene syndromes in order to confirm the role of cytogenomic microarray analysis to investigate the etiology of unexplained HL.
Introduction
Hearing loss (HL), the most prevalent sensorial defect at birth and during childhood, may be due to an alteration of the mechanic conduction of hearing waves through the external and middle ear, to a cochlear or neural defect in the signal transduction, or both, as it happens in conductive HL, in sensorineural HL (SHL) or in mixed HL, respectively [1,2].In syndromic HL, the deficit may be associated with other medical findings, including malformations, anomalies, or pathologic alteration in any organ or system.Of note, approximately 20% of children presenting with HL as the only initial clinical feature will subsequently be diagnosed with syndromic HL [3,4].The implementation of universal newborn hearing screening and audiologic surveillance make an early diagnosis of HL possible, optimizing patients' management and allowing for prompt treatments [5].Subsequent to the identification of HL, a detailed clinical and genetic work-up follows in order to reach the etiological diagnosis and to offer personalized care and follow-up to the patients and their families.These points are crucial especially in cases in which concurrent disability is present, as that may affect the outcomes.
Genetic causes are identified in almost 80% of prelingual HL cases [3], and more than 600 different syndromic conditions are known [6].
SHL is mostly the consequence of pathogenic variants (SNVs, Single Nucleotide Variants or indels) in single genes, while Copy Number Variations (CNVs) are an underestimated cause of non-syndromic HL.Indeed, pathogenic deletions are expected to occur sporadically across at least 100 HL-causing genes [7].Current data show that sequence analyses of 100 HL genes together with the assessment of CNVs can identify a genetic cause in roughly one out of two patients with suspected hereditary HL, whereas ~20% of the patients with an etiopathogenetic diagnosis carry at least a causative CNV.However, only a few studies, mainly single-case or small cohort reports, have focused on contiguous gene syndromes exhibiting HL in association with congenital anomalies, neurodevelopmental disorders, and various syndromic conditions [8][9][10].
We report on a young boy with congenital, severe SHL associated with an inner ear malformation, who developed a neurodevelopmental disorder with autistic features, and choroideremia consistently with the detected novel hemizygous deletion of 3.7 Mb in Xq21.1q21.2.We therefore propose an extensive review of patients and families with Xq21 deletion syndrome and we suggest HL as a possible early-onset feature of contiguous gene syndromes.In addition, we propose to hold gross DNA rearrangements in consideration as a possible cause of HL.
Clinical Report
The clinical report summarizes information from a complete collection and revision of clinical and molecular documentation.Next Generation Sequencing (NGS) multi-gene panels were performed at R&I, and CGH arrays took place at the Genetic Laboratory of Perugia Hospital.Molecular analysis was performed in trio (proband and biological parents) from DNA extracted by peripheral blood.Multidisciplinary evaluations, including audiological, logopedic, neurological/NPI, cardiological assessments, inner ear imaging, abdominal US, brain MRI, and clinical follow-up, were performed at IRCCS Burlo Garofolo in Trieste.
Literature Review
(1) Xq21 deletion syndrome A collection of reported cases was selected from OMIM [11] and Pubmed [12], and data were extrapolated and compared.A summary of the main findings is presented.
(2) Contiguous gene syndromes with HL We researched and selected recent papers providing a comprehensive list of contiguous gene syndromes, microdeletion and microduplication syndrome in humans, and data merging.Searches of phenotype description of the selected syndromes took place on OMIM [11]; Orphanet [13]; Genereview [14]; and Pubmed [12] with a focus on the presence and characteristics of HL.A summary of the main findings is presented.
Clinical Report
The child was referred to our center at 3 years of age for a second opinion regarding a delay in obtaining positive results from speech therapy and psychomotricity.
Family History
The proband is an only child, born at term from a pregnancy obtained by heterologous intracytoplasmic sperm injection (ICSI), with oocytodonation, which had an unremarkable course.The father suffers from ischemic stroke consequences in essential thrombocythemia.There is no known family history of hearing deficits.
Clinical Presentation
The proband's newborn hearing screening testing with otoacoustic emission failed bilaterally ("REFER"), and the second level audiologic evaluation (including tympanometry, diagnostic distortion product otoacoustic emissions, and click-evoked auditory brainstem response (ABR) testing) revealed a bilateral SHL, with an estimated hearing threshold of 60-70 dB HL in the right and 70-80 dB HL in the left ear.At 4 months of age, the child was fitted with two hearing aids.As part of the clinical investigations indicated for permanent congenital HL, a temporal bone CT revealed a bilateral inner ear malformation characterized by anomalous fusion of the internal auditory canal and the basal cochlear turn, a cochlea with an absent modiolus, the presence of the interscalar septa, and bulbous dilatation of the fundus of the internal auditory canal as shown in Figure 1a.Incomplete partition type III (IP3) was classified according to the Sennaroglu's revised classification [15].Absence of the modiolus of the cochlea and bulbous dilatation of the fundus of the internal auditory canal was confirmed by MRI.Brain MRI was normal.The cardiological assessment, ECG, echocardiography, abdominal ultrasound, and ophthalmological examination performed at 2 years of age were all reported as unremarkable.The parents reported a motor delay with late achievement of milestones: he reached the sitting position without support at 11 months, crawled at 18 months, and achieved independent walking at 28 months, initially with a wide base and very unstable gait.Because of the impairment of global and gross motor skills and the presence of bilateral pes planum, he was treated with alternating orthotics and underwent physiotherapy and neuropsychomotricity programs.Sphincter continence was also significantly delayed.Speech and language development was severely compromised with delays in comprehension, very limited verbal expression, and several phonemic errors, despite early speech therapy and neuropsychotherapy.
Overall, the audiological evaluation carried out at 3 years and 2 months of age confirmed a severe bilateral SHL associated with inner ear malformation.Although hearing abilities were not easily assessed due to the presence of attention difficulties, the hearing aid fitting provided a satisfactory amplified hearing threshold (30 dB HL).In contrast, motor, communication, and language delays were significantly worse than expected, considering the early amplification and rehabilitation.The following neurological consultation documented the presence of stereotypies (e.g., rocking of the head and flickering with the hands), poor gaze engagement, impulsive-oppositional behavior, and limited attention.The diagnosis was of Mixed Specific Developmental Disorder and attention deficit disorder.At the neurological follow-up, a neurodevelopmental disorder with global psychomotor delay and autism spectrum disorder (ASD) was diagnosed.At 3 years and 2 months of age, weight and height were approximately in the 30th percentile with OFC about 53.5 cm (unreliable measurement due to poor cooperation) (98th pc).He showed dolichocephaly, a prominent forehead, bilateral epicanthus, periorbital fullness, long eyelashes, and a broad nasal tip with anterverted nostrils.Mild dorsal hypertrichosis and keratosis pilaris on the arms and thighs were also evident, together with flat feet.
Genetic Analysis
Following clinical genetic evaluation, the NGS panel analysis of genes associated with congenital HL suggested a deletion in the POU3F4 gene.Molecular karyotyping was then performed for confirmation: the CGH array revealed a hemizygous deletion of approximately 3.7 Mb in the chromosomal region Xq21.1q21.2, between the nucleotides 81464567 and 85234239 (release GRCh37), encompassing POU3F4, CYLC1, RPS6KA6, HDX, APOOL, SATL1, ZNF711, and POF1B and partially (exons 4-15) involving CHM.The deletion was inherited from the oocyte donor.
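As a purely illustrative aside (not part of the diagnostic workflow, which relied on a clinical CGH array pipeline), the size of the reported deletion and whether a given gene is fully or partially encompassed can be checked with a few lines of code. The gene coordinates below are hypothetical placeholders and would need to be taken from the GRCh37/hg19 annotation.

```python
# Sketch: classify genes as fully deleted, partially deleted, or unaffected
# relative to the reported Xq21 deletion (GRCh37 coordinates from the CGH array).
DEL_START, DEL_END = 81_464_567, 85_234_239
print(f"Deletion size: {(DEL_END - DEL_START) / 1e6:.2f} Mb")  # ~3.77 Mb

# Hypothetical placeholder coordinates (start, end) on chromosome X;
# real values must be looked up in the GRCh37 gene annotation.
genes = {
    "POU3F4": (82_750_000, 82_764_000),
    "CHM":    (85_116_000, 85_302_000),
}

def overlap_class(gene_start: int, gene_end: int) -> str:
    if gene_end < DEL_START or gene_start > DEL_END:
        return "unaffected"
    if gene_start >= DEL_START and gene_end <= DEL_END:
        return "fully deleted"
    return "partially deleted"

for name, (start, end) in genes.items():
    print(name, overlap_class(start, end))
```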
Ophthalmological Follow-Up
Following a reverse phenotyping approach, ophthalmological follow-up performed at 4 years of age showed a bilateral widespread pigment clumping in the middle retinal periphery.Snellen visual acuity was performed binocularly only, in accordance with age, and the kinetic binocular visual field resulted normal for age (visual acuity was 20/32) with a slightly hyperopic refraction, and with normal bilateral temporal extension (of 70 degrees).At the age of 5 years, visual acuity and the visual field were still normal.The electroretinogram (full-field ERG) showed a significant reduction in the scotopic component (30 micronvolts) and a modest reduction in the photopic one (60 micronvolts; a and b waves' latencies were 21 ms and 43 ms, respectively).Finally, at the age of 6, areas of chorioretinal atrophy were found at the fundus, bilaterally in the middle retinal periphery, as well documented by retinography, shown in Figure 1b.
Xq21 Deletion Syndrome: Literature Review
At least 29 probands (28 males and 1 female) belonging to 14 unrelated families of different ancestries have been reported with syndromic association due to Xq21 deletion before our patient.Except for a de novo deletion and a patient whose mother was not genotyped, all the Xq21 deletions were maternally inherited.
We revised the molecular and clinical findings of all the affected patients (30) and their carrier mothers, when available.Carrier females (24) were evaluated following the molecular diagnosis in their son(s).A well-described cohort of heterozygous females has been also considered [16].Clinical findings of the patients and the carrier females are summarized in Figure 2, which was drawn from the data reported in Supplementary Table S1.In the table, the breakpoints of the deletions are also shown.
All the reported probands had a major hearing issue: they all present bilateral sensorineural or mixed HL, of variable degree varying from moderate to profound, rarely asymmetric, and without documentation of progression according to the available followup.HL diagnosis was made during childhood and assumed to be "likely congenital" in some cases.Our proband is the only one in which HL is surely congenital, since the diagnosis was made after failing newborn hearing screening.All patients who underwent temporal bone imaging presented inner ear malformation, generally consistent with an IP3, as the here-reported patient.In addition, one patient underwent left-middle-ear exploration and stapedectomy, showing fixation of the stapes to the foot plate with a patent cochlear aqueduct and experienced marked perilymphatic gusher on stapedectomy, supporting DFNX2.Despite early hearing aid fitting and treatment, verbal comprehension and expression are significantly compromised, as we observed in our patient.Except for some patients, global developmental delay is a frequent concomitant feature with generally worse speech than motor development and a variable degree of intellectual disability, from moderate to profound.Hypotonia has also been reported in some patients (4).Behavior anomalies have been more rarely mentioned, with one patient presenting with psychotic behavioral and relational disorder and another showing severe social disability.In our patient, the comorbidity with attention deficit disorder and ASD probably contributed to poor verbal outcomes.Ophthalmologic signs and symptoms consistent with a diagnosis of choroideremia are present in a minority of children, and in most of the patients, it has been described as appearing after childhood.Facial phenotype, when reported (6), is not specific and clinically recognizable.Flat feet are present in our patient and in two other siblings (3) [17].Among other clinical reported features, asymptomatic renal pelvic dilatation, a single microematuria episode, vescicoureteral reflux grade V, brain anomaly with mega cisterna magna, cerebral cysticercosis, and vestibular problems have been reported once, whereas obesity, short stature, and strabismus are rare findings.In the cohort, a female proband exhibited bilateral HL, IP3, and developmental delay; no information about fundus oculi was provided [18,19].Heterozygous females are generally asymptomatic, with normal hearing and vision.However mild ophthalmological signs including choroidal atrophy and retinal pigmentary stippling have been documented, as well as bilateral sensorineural mild HL [16,18] (Figure 2).
Contiguous Gene Syndromes with HL: Literature Review
The phenotypes of 192 recurrent and non-recurrent microdeletion and microduplication syndromes listed by Wetzel and Darbro [20], and 99 microdeletion and microduplication syndromes reported and described by Nevado et colleagues [9], have been revised with special focus on the presence, prevalence, and characteristics of HL.The clinical description of diseases mainly relies on information available in the free databases OMIM [11], Orphanet [13], and Genereview [14] and in papers in Pubmed [12], which could not be comprehensive of the totality of contiguous gene syndromes with HL.In particular, very rare and sporadic microdeletion or microduplications presenting with HL have not been included in our review.
Starting from over 200 microdeletion/microduplication syndromes [9,20], we extrapolated 56 imbalance-sensitive genomic loci associated with more than 60 syndromes, which may include HL as a clinical finding.Conductive, sensorineural, and mixed HL may occur commonly, occasionally, or rarely in the contiguous gene syndrome.In Tables 1-3, we focus on HL characteristics and HL candidate or responsible genes, whereas we provide only a summary of the main clinical features (column "Phenotype") of the identified microdeletions/microduplication syndromes.Few cases of genomic regions sensitive to microduplications have been associated with HL (4), as displayed in Table 2.
Xq21 Deletion Syndrome: Literature Review
We revised 22 clinical histories spanning over forty years and described a novel proband [17,18,[51][52][53][54][55][56][57][58][59][60].By reporting the phenotype exhibited by males in a three-generation family, i.e., choroideremia, obesity, and congenital deafness, Ayazi and colleagues suggested the existence of a new X-linked syndrome [55].The pathomechanism underlying the condition present in this family and in a similar one, i.e., submicroscopic deletions in Xq21, was subsequently identified by Nussbaum RL [51].Despite the limits of available technologies, it was possible to map the critical region for choroideremia and HL between DXYS1 and DXS72 markers [51][52][53].The technological optimization for the deletion breakpoint determination in the first family was later published together with the family's clinical follow-up and the description of other similar patients [52,56].However, only with the advent of MLPA and microarray technologies (CGH or SNP) was it possible to define the breakpoints of the Xq21 deletions and reveal that its size varies from 5.2 [17] to 16 Mb [57].Our patient is the first example of Xq21 deletion identified through a multi-gene NGS panel after neonatal diagnosis of an apparently isolated HL.Excluding the patient described by Song et al. [57] affected by HL only as carrier of a 1-1.5 Mb microdeletion at about 90 kb upstream of POU3F4 (Table S1), with a size of 3.7 Mb, it is the smallest Xq21 deletion and the only among those characterized deleting part of CHM (exons 4-15), whereas POU3F4, CYLC1, RPS6KA6, HDX, APOOL, SATL1, ZNF711, and POF1B were fully encompassed.Indeed, the breakpoints of only 3 out of the 12 different Xq21 deletions (Table S1) were characterized through array technologies (CGH or SNP): genes involved in our patient were fully encompassed.
POU3F4 and ZNF711 haploinsufficiency is causative of the expression of audiological and neurodevelopmental phenotypes, respectively.Indeed, POU3F4 (OMIM #300039) is implicated in nonsyndromic X-linked deafness-2 (DFNX2, DFN3OMIM #304400), characterized by sensorineural or mixed HL in association with IP3, cochlear hypoplasia, and/or stapes fixation.In males, HL begins prelingually and progresses over time.Females with pathogenic variants in POU3F4 tend to be less severely affected, with rare postlingual, mild HL and very rarely with inner ear anomalies [16].DFNX2 is also caused by deletions and insertions located upstream of the gene, in a putative regulatory element region.POU3F4 in DFNX2 may be partially or completely deleted [61][62][63].
POU3F4 encodes a transcription factor that restricts the proliferation and lineage potential of neural stem cells [63].It plays a role in neurogenesis of the inner ear, mediating the inner radial bundle formation [64].In mice models for DFNX2, a dysfunction of spiral ligament fibrocytes in the lateral wall of the cochlea, leading to reduced endocochlear potential, underlies the sensorineural loss.Considering the pathogenic mechanism, some authors have recently proposed to develop a therapeutic approach in male Pou3f4 -/y mice based on gene transfer in cochlear spiral ligament fibrocytes mediated by an adeno-associated viral (AAV) vector with a strong tropism for the spiral ligament (AAV7).Complementary gene replacement before HL progression to profound deafness could represent an attractive strategy to prevent fibrocyte degeneration and to restore normal cochlear functions and properties, including a positive endocochlear potential [65].
ZNF711 (OMIM #314990) codes for the zinc-finger protein 711, ZNF711, the specific function of which is unknown.Some evidence suggests that it is crucial for brain development, probably acting as a transcription factor for genes required for neuronal development [66].Nonsense, frameshift, and missense variants have been reported in ZNF711-related nonsyndromic intellectual developmental disorder, X-linked 97 (OMIM #300803), characterized by generally mild motor and severe speech developmental delay, mild-to-moderate intellectual disability in males, and autistic features in some of them.Despite some mild phenotypic features having been described, there are no distinctive facial dysmorphologies that would allow clinical recognition [67].Female carriers do not show manifestations.
Although HDX and RPS6KA6 are not OMIM disease genes, they are candidates for intellectual disability.For instance, two alterations (one duplication and one translocation) involving the gene and associated with X-linked intellectual disability and premature ovarian failure, respectively, have been reported in HGMD ® [75].RPS6KA6 encodes for a constitutively active kinase that belongs to the same family of a protein whose loss of function causes Coffin-Lowry syndrome, a syndromic intellectual disability with hypotonia.Moreover, at least two missense variants in RPS6KA6 have been reported in association with psychomotor retardation, ASD and HL [75].
Although some authors suggested an age-dependent penetrance for choroideremia [17], in our patient, an early onset of the phenotype has been documented.As reported for females carriers of POU3F4 pathogenic variants [16], Xq21 deletion carrier females can also exhibit mild ophthalmological signs including choroidal atrophy and retinal pigmentary stippling, as well as bilateral mild SHL and rarely inner ear anomalies.Moreover, even a female may exhibit the full spectrum of the Xq21 deletion syndrome, that includes neurodevelopmental delay (Figure 2) [16,18].Furthermore, we underline that the identification of heterozygous females is crucial for reproductive counseling as well as for clinical evaluation and follow-up.
The phenotype of patients with contiguous gene syndrome may be more complex than the sum of specific signs and symptoms related to single genes.In particular, the co-occurrence of double sensorial deficit and neurodevelopmental disorder significantly impacts on prognosis, outcomes, and management choices in children with Xq21 deletion.Interestingly, neurodevelopmental issues have been frequently observed in children with POU3F4-related IP3, even in patients who are carriers of pathogenic variants [18].
Considerations on Etiologic Diagnosis of HL
Early identification of HL and consequent etiology determination impact the personalization of the patient's follow-up, prognosis and outcome prediction, and in certain cases also the choice of treatments and therapeutic options.In syndromic HL, a multidisciplinary follow-up is required, according to the phenotype.
Following the diagnosis of POU3F4-related HL, imaging is mandatory in search of inner ear anomalies [61].Cochlear implant surgery in IP3-III may be difficult due to the leakage of cerebrospinal fluid into the middle ear cavity when opening the round window, or performing the cochleostomy, with a prolonged "gusher" [76,77].This increases the risk of further post-surgical cerebrospinal fluid leaks which can also lead to rhinorrhea, meningitis, and intracranial infections.A second surgical risk concerns the possibility of incorrect positioning of the electrodes in the internal acoustic canal due to the absence of the modiolus and the widening of the bottom of the internal acoustic canal [77].Therefore, the choice of cochlear implant surgery requires careful counseling with the family and the sharing of therapeutic rehabilitation choices [78].
The identification of the causative genomic Xq21 deletion allowed for the indication of our patient specific follow-up and the early diagnosis of choroideremia.Choroideremia results from the progressive, centripetal loss of photoreceptors and choriocapillaris, secondary to the degeneration of the retinal pigment epithelium (RPE) [79].Affected individuals present in late childhood or early teenage years with nyctalopia and progressive peripheral visual loss.Typically, by the fourth decade, the macula and fovea also degenerate, resulting in advanced sight loss [80,81].In the first phases of this dystrophy, peripheral pigmentary changes may characterize the retina of affected patients [82].At a later time, distinct regions of chorioretinal atrophy are usually visible.Of note, these degenerative alterations usually start at the equator and progressively, in a centripetal direction, involve the posterior pole and the peripapillary region [83].These retinal changes can lead to the appearance of multiple scotomas in the peripheral visual field [84].The full-field ERG may have reduced amplitude, initially in the scotopic component only, or be extinct [83,85].Children under 7 years have rarely been diagnosed [82,84,85].In our case, considering the initial alterations of the fundus as early as 4 years, the presence of areas of chorioretinal atrophy at 6 years, and an altered ERG at 5 years, which presented an initial involvement of the retinal cones, the ophthalmologic involvement is to be considered very serious with probable poor outcome and early visual loss.Close audiologic evaluations are obviously scheduled for the patient, also considering the possibility of hearing impairment progression.Therapeutic options have to consider the increased risk of complete loss of hearing in the case of middle ear surgery due to the identification of a specific inner ear anomaly.
In addition, the etiological diagnosis allows for the definition of the recurrent risk and the discussion of reproductive options in family counseling.In each pregnancy, the apparently healthy mother of a male proband has a 50% chance of transmitting the CNV loss; consequently, males who inherit the deletion will be affected, whereas females may be healthy or symptomatic.Furthermore, carrier mothers should undergo hearing and vision evaluations and follow-up.In the here-reported patient, challenges in genetic counseling emerged in both the donor and recipients of the oocyte, facing new ethic, legislative, and clinical needs.
HL in Contiguous Gene Syndromes
Unraveling the etiology of syndromic HL means we must consider the involvement of a single gene with a pleiotropic effect, or of two genes in double diagnoses as well as contiguous gene syndromes.Contiguous gene syndromes include genomic disorders with reciprocally deleted or duplicated chromosomal regions and microdeletion/microduplication syndromes.In these last ones, CNVs may randomly involve any genomic region, causing variable clinical presentations [86,87].Of note, few contiguous gene syndromes may exhibit isolated HL, such as in female carriers of Xq21 deletion, in the frequent autosomal recessive 15q15.3deletion syndrome in females, or in the very rare 9q21.11duplication syndrome.We then focused on the pathomechanism or susceptibility factors for HL.Anatomical features, such as craniofacial abnormalities and ear malformations, immune deficit, and susceptibility to frequent ear infections, may contribute to conductive or mixed HL (Tables 1-3).Inner ear malformation has been reported not only in Xq21 microdeletion syndrome, in which HL and IP3 are common and specific findings are related to POU3F4 haploinsufficiency, but also in 22q11.2microdeletion syndrome, also known as DiGeorge syndrome, characterized by incomplete penetrance and an extremely variable phenotype (Table 3).
As shown in Tables 1-3, in nearly half of the contiguous gene syndromes, there are HL candidate genes.Some of them ( 13) are known to be causative of monogenic hereditary HL according to the deafness database [38].We can therefore hypothesize that they may be involved in causing HL in the contiguous gene syndromes, where they are deleted or duplicated.For the remaining half of contiguous gene syndromes, the etiological mechanism of HL is unknown.We can only infer that dosage-sensitive genes mapped into the rearrangements might be candidates for HL.COCH is associated both with autosomal dominant nonsyndromic HL with variable penetrance of vestibular malfunction (DFNA9), Meniere's disease, and possibly glaucoma and with autosomal recessive nonsyndromic HL.SERPINB6, MYO15A, and OTOA have been associated with autosomal recessive nonsyndromic HL; thus, a possible second-hit event could be hypothesized in patients with the relative deletion syndromes exhibiting HL.USH1C may be associated both with autosomal recessive nonsyndromic HL and with Usher syndrome.Bilallelic deletions in 15q15.3involving STRC/CATSPER are responsible for autosomal recessive nonsyndromic HL in females and autosomal recessive HL with asthenoteratozoospermia in males.HL may also be due to the presence of a pathogenic SNV in STRC in trans with the deletion.Finally, similarly to the disease mechanism related to POU3F4, COL4A5 causes Alport syndrome both in case of a pathogenetic SNV or complete gene deletion.
The Role of Chromosomal Microarray in HL
Recently, the European Network for Genetic Hearing Impairment published recommendations for the evaluation of prelingual HL, underling the indication of chromosome assessment by a CGH array, investigating different genes located contiguously on a chromosome segment, in case of a polymalformative syndrome, or a set of clinical signs associated with deafness that does not evoke a known diagnosis [88].With mostly genetic HL explained by monogenic conditions, isolated chromosomal microarray testing in an individual with apparent nonsyndromic HL has a low diagnostic yield, as stated in the dedicated GeneReviews page on the genetics of HL [89].Congenital HL may actually be isolated in several cases, due to environmental causes such as CMV infection or due to a genetic cause, including the most frequently mutated GJB2 gene or large numbers of other genes [3].Since there are hundreds of syndromes presenting with congenital HL and characterized by additional signs and symptoms manifesting after birth, during childhood or later, caution should be posed in the definition of "nonsyndromic HL" at birth, in early infancy, and during childhood, especially before a definite etiologic diagnosis.With this in mind, the choice of comprehensive genetic analysis is nowadays preferred to HL multi-gene panels.In the starting era of neonatal genomic screening, exome sequencing has been proposed as an efficient first-tier analysis to screen for monogenic causes of congenital HL [90].The promising benefits of early etiologic diagnosis by NGS strategies are arising, together with challenges in the interpretation of molecular data, related to the possibility of a partial phenotype at birth, variant of uncertain significance, and limited genotype-phenotype correlations.Whenever comprehensive NGS tests turn out to be nonconclusive, genomic disease mechanisms should be considered.
In contiguous gene syndromes, HL may be caused by the presence of dosage-sensitive gene(s) mapped in the CNV or gene disruption at the CNV breakpoint.Although chromosomal microarray testing in an individual with HL has a low diagnostic yield, we identified about 50 loci associated with almost 60 contiguous gene syndromes that could exhibit HL.As discussed in the previous paragraph, in some of them, we already know or can hypothesize the gene(s) involved in or that can cause HL.
Congenital HL identified by newborn hearing screening may be the first clinical feature in those microdeletion or microduplication syndromes exhibiting minor congenital anomalies, usually undetectable by prenatal imaging, such as inner ear malformation, neurodevelopmental disorder, and other sensorial defects.Indeed, only in a minority of the reviewed microdeletion or microduplication syndromes, such as Wolf-Hirschhorn syndrome (4p16.3DS, Table 1), Cri-du Chat syndrome (5p15 DS, Table 1), and microdeletion or microduplications associated with split hand/foot malformation, do patients exhibit at birth a recognizable pattern of signs/multiple congenital anomalies.This is the subgroup that could be diagnosed in a prenatal setting [91].
According to the existing recommendations, the evidence, and this review, we point out that we should consider contiguous gene syndromes and perform CMA or locusspecific MLPA analysis in patients exhibiting HL whenever a deletion is suspected at NGS or only a monoallelic variant has been identified in patients in which a recessive syndrome is supposed, in evocative syndromic associations, and in undiagnosed patients with likely syndromic HL.Finally, we believe that this review may help geneticists, audiologists, and clinicians to counsel about HL in contiguous gene syndromes, especially in newborns.
Figure 1. This figure shows the main clinical features of the patient. (a) Computed tomography of temporal bones (axial plane) showing bulbous dilatation at the distal ends of internal auditory canals; the interscalar septa of the cochlea are present, but the modiolus is absent (red arrows). (b) Fundus oculi showing choroidal degeneration and diffuse retinal pigmented epithelium dystrophy.
Figure 2. This figure shows the frequency of clinical findings in affected probands (a) and in carrier female family members (b). SNHL: sensorineural hearing loss; HL: hearing loss; IEM: inner ear malformation; DD: developmental delay; ID: intellectual disability; Behavior D: behavior disorder; LD: learning disability.
Post-traumatic stress in Crohn's disease and its association with disease activity
Objective Violence, accidents and natural disasters are known to cause post-traumatic stress, which is typically accompanied by fear, suffering and impaired quality of life. Similar to chronic diseases, such events preoccupy the patient over longer periods. We hypothesised that post-traumatic stress could also be caused by Crohn's disease (CD), and that CD specific post-traumatic stress could be associated with an increased risk of disease exacerbation. Methods A cohort of CD patients was observed over 18 months in various types of locations providing gastroenterological treatment in Switzerland. The cohort included 597 consecutively recruited adults. At inclusion, CD specific post-traumatic stress was assessed using the Post-traumatic Diagnostic Scale (range 0–51 points). During follow-up, clinical aggravation was assessed by combining important outcome measures. Patients with post-traumatic stress levels suggestive of a post-traumatic stress disorder (≥ 15 points) were compared with patients with lower post-traumatic stress levels as well as with patients without post-traumatic stress. Also, the continuous relation between post-traumatic stress severity and risk of disease exacerbation was assessed. Results The 88 (19.1%) patients scoring ≥15 points had 4.3 times higher odds of exacerbation (95% CI 2.6 to 7.2) than the 372 (80.9%) patients scoring <15 points, and 13.0 times higher odds (95% CI 3.6 to 46.2) than the 45 (9.8%) patients scoring 0 points. The odds of exacerbation increased by 2.2 (95% CI 1.6 to 2.8) per standard deviation of post-traumatic stress. Conclusions CD specific post-traumatic stress is frequent and seems to be associated with exacerbation of CD. Thus gastroenterologists may want to ask about symptoms of post-traumatic stress and, where relevant, offer appropriate management according to current knowledge.
Introduction
Crohn's disease (CD) is an incurable inflammatory disease potentially located in the entire gastrointestinal tract with particularly high prevalence rates in Northern Europe, North America and the UK. 1 The pathological mechanisms are still elusive and the disease course is difficult to predict. The treatment is often exhausting, expensive and burdensome. 2 Post-traumatic stress is an adverse reaction to traumatic experiences which decreases daily functioning and quality of life and makes the patients suffer. 3 Patients typically re-experience the traumatic event (eg, in dreams or thoughts), avoid reminders to the event and endure symptoms of hyperarousal (eg, irritability and sleeping difficulties). 4 Post-traumatic stress is traditionally thought to be caused by violence, accidents and natural disasters. However, research in medical fields other than CD (eg, cancer, human immunodeficiency virus infection) suggests that diseases (together with the necessary diagnostic and therapeutic procedures) can also be perceived as traumatic events by a considerable number of patients. 5 6 To date, CD has not been investigated as a potential cause of post-traumatic stress. This is probably due to the fact that avoidance (such as not talking about the burdensome experience of having CD) is a key feature of post-traumatic stress. However, CD may preoccupy patients over longer periods similar to 'traditional' traumatic events, and we suspect a high prevalence of post-traumatic stress among CD patients.
In addition, post-traumatic stress is associated with lasting alterations in the hypothalamic pituitary adrenal axis, the autonomic nervous system and the immune system, and thus may affect the course of CD. 7 If caused by myocardial infarction, post-traumatic stress is associated with poor cardiovascular prognosis. 8 Examining the level of post-traumatic stress caused by CD and its relation to the clinical course of CD seems timely and important given that post-traumatic stress impairs wellbeing and can effectively be treated. 9 We hypothesised that patients with CD show high levels of post-traumatic stress specific to CD and, moreover, that CD specific post-traumatic stress is associated with an increased risk of exacerbation of CD over an observation period of 18 months. Because ≥15 points on the Post-traumatic Diagnostic Scale (PDS) is suggestive of a post-traumatic stress disorder (PTSD), our second hypothesis was that patients meeting this cut-off would have a higher risk of adverse events than those scoring below 15 points. 10
Study design and patients
The ethics committees of all study sites approved the study protocol. Between July 2006 and February 2008, patients were consecutively recruited from virtually all places providing gastroenterological treatment in Switzerland (www.ibdcohort.ch). These included the study centres (ie, University Hospitals of Basel, Bern, Geneva, Lausanne, St Gallen and Zurich), regional hospitals and private practices. Recruitment was performed as follows: the treating gastroenterologists provided their patients with information about the study. When a patient had given written informed consent, gastroenterologists and study nurses completed the enrolment sheet asking about medical history, clinical status and laboratory parameters. More precisely, baseline data included gender, age, disease duration, previous hospitalisations, medication, height, weight, smoking status and the CD activity index. 11 At the same time, patients received the PDS with the instruction to complete it and return it directly to the data centre (given that gastroenterologists assessed exacerbation, they had to be blind for post-traumatic stress). To facilitate generalisation of findings to the general population of CD patients treated in Switzerland, the only eligibility criterion was confirmation of a diagnosis of CD based on the criteria of Lennard-Jones 12 ; the latter allows confirmation of a diagnosis of CD by radiological, endoscopic or histological findings, or by surgery.
Subsequent to enrolment and collection of baseline information, patients were followed over an observation period of 18 months. They were advised to inform their treating gastroenterologists whenever they experienced worsening or amelioration of symptoms, in which case an appointment was scheduled to clarify whether fluctuation of symptoms was caused by fluctuation of CD activity or by some other clinical state (eg, infections, gallstones). Regular meetings among gastroenterologists ensured the use of matched diagnostic and therapeutic criteria, as issued by the European Panel on the Appropriateness of CD Treatment. 13 In addition, the authors informed the data centre in case of inconclusive or missing data in order to improve the enrolment sheet accordingly.
The analysis was performed on an a priori defined sample of 468 patients to assure accurate control for potential confounders without overfitting statistical models. A sample of 367 patients has a power of 90% to detect an OR of adverse events of 1.5 for an increase of 1 SD in post-traumatic stress at a significance level of α<0.05 (two tailed). We computed this power using Power and Precision software for an overall event rate of 25% over 18 months. 14 However, we oversampled the calculated sample size by 40% and enrolled 597 patients (367 corresponds to 60% of 597) to account for projected loss in response rate, dropouts and missing values.
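The authors computed power with the commercial Power and Precision package. As a hedged, minimal alternative sketch (not the authors' actual calculation), the same question can be approximated by Monte Carlo simulation of a logistic model with an OR of 1.5 per SD of post-traumatic stress and an overall event rate of about 25%; the parameterization below is our assumption for illustration.

```python
# Simulation-based approximation of the power calculation described above.
# Assumptions (ours): standardised PDS scores ~ N(0, 1), OR of 1.5 per SD,
# ~25% overall event rate, n = 367, alpha = 0.05 (two tailed).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, n_sim, alpha = 367, 2000, 0.05
beta = np.log(1.5)               # log-OR per SD of post-traumatic stress
intercept = np.log(0.25 / 0.75)  # roughly a 25% baseline event rate

significant = 0
for _ in range(n_sim):
    x = rng.standard_normal(n)
    p = 1 / (1 + np.exp(-(intercept + beta * x)))
    y = rng.binomial(1, p)
    model = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
    if model.pvalues[1] < alpha:
        significant += 1

print(f"Estimated power: {significant / n_sim:.2f}")  # expected to be near 0.90
```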
Measures
To assess post-traumatic stress we used the German and French version of the PDS, 15 which is a self-rated questionnaire with 17 items. We replaced the term 'event' by 'intestinal disease' in the PDS questionnaire to ensure measurement of disease specific post-traumatic stress. Each of the three main characteristics, namely re-experiencing CD ("Did you have bad dreams or nightmares about the intestinal disease?", five items), avoidance of cues related to CD ("Did you try not to think about, talk about or have feelings about the intestinal disease?", seven items) and hyperarousal (five items), is represented by a separate symptom cluster. All items are rated on a 4 point Likert Scale (0-3; total score 0-51) with a higher score indicating a higher level of post-traumatic stress. Validity measures in our study included a variance of item means of 0.39 and a variance of item variances of 0.01 (ie, good weighting) and a Cronbach's α of 0.84 (ie, good overall reliability). If no more than one item per symptom cluster was missing, replacing it by the mean of the non-missing items of the respective cluster did not affect the validity (personal communication with Stieglitz 15 in 2009).
Patients scoring ≥15 points are likely to have fully developed PTSD, in which case a structured clinical interview is recommended. 4 A structured clinical interview is mandatory to diagnose full PTSD. Our measure of interest was not PTSD. Instead, we examined post-traumatic stress along a continuum of severity (ie, already at levels far below those seen in fully developed PTSD). This approach has several advantages 16 17 : post-traumatic stress can easily be measured in larger samples by means of the PDS; post-traumatic stress can be detected in patients who may benefit from therapeutic interventions despite not having full PTSD; and ongoing revision of the diagnostic criteria of PTSD cannot affect the validity of the results (http://pn.psychiatryonline.org/content/45/10/8.full; last accessed 14 June 2010). We were primarily interested in the total PDS score but also in re-experience, avoidance and hyperarousal as separate scores.
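To illustrate how a PDS total score could be assembled from the three symptom clusters, including the single-missing-item replacement described above, a minimal sketch follows. The item-to-cluster mapping and the data structure are illustrative assumptions of ours, not the study's actual scoring procedure.

```python
# Sketch of PDS total scoring with cluster-level handling of single missing items
# (one missing item per cluster is replaced by the mean of that cluster's other items).
# The grouping of items into clusters here is an illustrative assumption.
from statistics import mean

CLUSTERS = {
    "re-experiencing": [1, 2, 3, 4, 5],
    "avoidance":       [6, 7, 8, 9, 10, 11, 12],
    "hyperarousal":    [13, 14, 15, 16, 17],
}

def pds_total(responses):
    """responses maps item number (1-17) to a rating of 0-3; None marks a
    missing item. Returns None if any cluster has more than one missing item."""
    total = 0.0
    for items in CLUSTERS.values():
        values = [responses.get(i) for i in items]
        missing = values.count(None)
        if missing > 1:
            return None  # questionnaire not usable for analysis
        observed = [v for v in values if v is not None]
        total += sum(observed) + missing * mean(observed)
    return total  # range 0-51 when complete

score = pds_total({i: 1 for i in range(1, 18)})
print(score, "(PDS >= 15: structured interview recommended)" if score is not None and score >= 15 else "")
```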
Measures of exacerbation of CD were flares, extraintestinal manifestations, complications and non-response to therapy. These were selected because they have a high clinical impact, force patients to seek gastroenterological healthcare and, in combination, allow for a complete evaluation of the disease course. They were combined into one binary outcome which is referred to as adverse events (present/absent), as previously described. 18 In brief, flares were defined as an increase of 100 points or more from baseline on the CD activity index. Any need for an increase in medication was considered a treatment failure. Extraintestinal manifestations were included if they occurred during the observation period. Complications included strictures, fistulae, malignancies or any other need for surgery.
Clinical experience has shown that disease duration, strictures and fistulae requiring hospitalisations, need for medication (ie, 5-aminosalicylates, sulfasalazin, steroids, immunosuppressors, antitumour necrosis factor α agents, antibiotics), low body mass index, baseline disease activity and smoking are predictive of the disease course. 18 For this reason, we controlled not only for those factors but also for gender and age because these are the most important demographic parameters.
Data analysis
We analysed the data using SPSS 15 for Windows (Chicago, Illinois, USA), excluding the few cases with missing data.
Firstly, we described the total sample. We compared the groups with and without adverse events using standard methods. For mean and percentage differences, we computed asymptotic 95% CI and two sided p values. We used the t test for normally distributed metric variables (age, body mass index) and the Mann-Whitney U test for extremely skewed metric variables (diagnosis duration, hospitalisation days related to CD) or nonmetric quantitative variables (scores). For categorical variables, we compared each category separately using Fisher's exact test.
Secondly, we divided the sample into three groups according to post-traumatic stress levels. Group 1 included all patients with a PDS score of 0, group 2 included all patients scoring <15 (hence this group also included patients in group 1) and group 3 comprised patients scoring ≥15 points. We compared group 3 with groups 1 and 2 in terms of relative risks and ORs of adverse events.
Thirdly, we performed four binary logistic regression models: a model for total post-traumatic stress (primary score), a model for re-experiencing symptoms, a model for avoidance symptoms and a model for hyperarousal symptoms (secondary scores). Each model contained the respective symptom score plus the above mentioned potential confounders as independent variables and adverse events (yes/no) as the outcome.
Fourthly, we performed 16 stratified binary logistic regressions by combining each symptom score with each measure of exacerbation (eg, re-experience and flares) in a different analysis without including potential confounders.
For the combination of total PDS score and adverse events, we set the level of significance at 0.05 and computed 95% CI for all ORs. For all other combinations, we set the level of significance at 0.01 to adjust for multiple comparisons and computed 99% CI for the ORs.
Patient characteristics
The flowchart (figure 1) shows the selection process of the 468 analysed patients. Sixty-nine patients (14.7%) experienced flares, 26 (5.6%) did not respond to therapy, 28 (6.0%) suffered from complications and nine (1.9%) had extraintestinal manifestations. Taking into account the fact that 20 patients experienced more than one category of adverse events (eg, first flares and later on complications), the total number of patients with adverse events was 106 (22.6%). The characteristics of the whole sample and the differences between the groups with and without adverse events are shown in table 1.
Figure 1. Flowchart of patients selected for analysis, illustrating the number of patients at each stage of the study. Most patients were excluded because they did not return the questionnaires at baseline. We completed 11 re-experience scores, 23 avoidance scores and 15 arousal scores by the mean of the non-missing items. Two Post-traumatic Diagnostic Scales could not be used for analysis because all three subscales (re-experience, avoidance and arousal) had more than one missing value.
Three patients (6.7%) experienced adverse events in group 1, 64 (17.2%) in group 2 and 42 (47.7%) in group 3. The odds of experiencing adverse events were thus 0.07 for group 1, 0.21 for group 2 and 0.91 for group 3. These findings corresponded to an OR of 4.3 (95% CI 2.6 to 7.2; p<0.001) between groups 2 and 3 and of 13.0 (95% CI 3.6 to 46.2; p<0.001) between groups 1 and 3. After controlling for potential confounders, the odds of experiencing adverse events showed a 2.16-fold increase (95% CI 1.65 to 2.84; p<0.001) for an increase of 1 SD of post-traumatic stress, which was the primary measure. In addition, it showed a 1.72-fold increase for an increase of 1 SD of re-experiencing symptoms, a 1.77-fold increase for an increase of 1 SD of avoidance symptoms and a 2.07-fold increase for an increase of 1 SD of hyperarousal (table 2).
The results for the separate disease measures are summarised in table 3 and largely concur with the results for the combined measure.
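As a small illustrative check (ours, not the authors' analysis code), the unadjusted group odds and ORs quoted above follow directly from the event counts and group sizes; the sketch below reproduces that arithmetic, with the group sizes taken from the abstract (45, 372 and 88 patients).

```python
# Reproducing the unadjusted odds and odds ratios from the reported group counts.
# Group sizes are those reported for PDS = 0, PDS < 15 and PDS >= 15 points.
groups = {
    "group 1 (PDS = 0)":   (3, 45),
    "group 2 (PDS < 15)":  (64, 372),
    "group 3 (PDS >= 15)": (42, 88),
}

def odds(events: int, n: int) -> float:
    return events / (n - events)

for name, (events, n) in groups.items():
    print(f"{name}: risk {events / n:.1%}, odds {odds(events, n):.2f}")

or_3_vs_2 = odds(42, 88) / odds(64, 372)
or_3_vs_1 = odds(42, 88) / odds(3, 45)
print(f"OR group 3 vs group 2: {or_3_vs_2:.1f}")  # ~4.4, reported as 4.3
print(f"OR group 3 vs group 1: {or_3_vs_1:.1f}")  # ~12.8, reported as 13.0
```

The small differences from the published figures reflect rounding of the group odds in the text; the adjusted estimates in table 2 additionally account for the listed confounders.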
Discussion
CD can be a great burden for patients. 2 Fatigue, general weakness and hardly foreseeable incidents such as diarrhoea may impair the patient's activities. Treatment is often costly and side effects potentially severe. 19 Ileostomies, facial bloating and feelings of dirtiness may reduce self-esteem. Severe complications such as perforation and peritonitis may even cause fear of dying. Studies show that patients with CD perceive that their psychological needs are incompletely met. 2 We found that psychological reactions to CD compare well with reactions to more sudden events, such as violence, accidents and natural disasters. Only 9.8% of our patients were free of CD specific post-traumatic stress symptoms (group 1) compared with 86.9% of a sample representing the entire Swiss population (OR 61.6). 20 Nineteen per cent of our sample (group 3) scored ≥15 points while only 2% of the referred sample had subthreshold PTSD (ie, traumatic event with higher levels of post-traumatic stress without the duration and degree of impairment of a full PTSD 21 ) and none had full PTSD (OR of subthreshold PTSD 11.5).
Distinguishing between subthreshold and full PTSD is important because the therapeutic approaches may be different. Clinical experience has shown that patients with subthreshold PTSD may profit from psychological coaching. Such coaching focuses on improvement of coping capacities, is widely used and results in reduced symptom severity and associated fear. 9 Full PTSD requires so-called trauma focused therapies to reduce the degree of mental and physical impairment. 22 In studies on postmyocardial infarction patients, the proportion of patients scoring ≥15 PDS points was the same as in our sample, and in half of those patients clinical PTSD according to DSM-IV criteria was confirmed by a structured interview. 23 24 In anticipation of evidence specific to CD, we recommend treating CD specific post-traumatic stress in the same way as posttraumatic stress is usually treated.
We also found that the odds of disease exacerbation were four times higher in patients meeting the criterion for a structured clinical interview than in patients scoring below 15 PDS points and even 13 times higher than in patients scoring 0 points. An increase of 1 SD of post-traumatic stress was associated with more than twofold higher odds of subsequent disease deterioration. Isolated symptom clusters of post-traumatic stress showed similarly strong associations. Evidence for causality between CD specific post-traumatic stress and subsequent exacerbation according to the Bradford-Hill criteria included the temporal relationship, strength of the association and pathological findings. 25 The strength of the association (OR/SD 2.16 (95% CI 1.65 to 2.84)) was similar to that of azathioprine (1.0-2.5 mg/kg per day) over placebo in preventing disease exacerbation over 1 year (OR 2.27 (95% CI 1.75 to 2.94)). 26 However, a response relationship between post-traumatic stress and subsequent disease exacerbation has never been examined. A systematic review identified eight studies investigating psychological interventions in CD but none included the treatment of post-traumatic stress. 27 Post-traumatic stress may contribute to exacerbation by triggering gastrointestinal inflammation. 28 Elevated levels of proinflammatory biomarkers such as C reactive protein and tumour necrosis factor α, which play an important role in CD, 29 have been observed in different studies on post-traumatic stress. Given that prior adverse events (or at least their perception as a trauma) predict post-traumatic stress and probably also future adverse events, they could have confounded the association between post-traumatic stress and subsequent adverse events. However, this seems unlikely because we controlled for disease duration, baseline disease activity and prior hospitalisations (a clinically important marker of prior adverse events). In any case, the high prevalence of CD specific post-traumatic stress remains the more important and more robust finding.
In binary logistic regression, an increasing ratio of the number of variables to the number of cases belonging to one of the two possible outcome categories (yes/no) increases the risk of overfitting. We decided to select our number of variables in order to keep an acceptable control of both overfitting and confounding. Cohort studies regularly face the problem of missing values. In the present cohort study, however, the 80.2% of patients who returned the questionnaires lacked a relatively small number of data points (tables 1 and 2). We believe that our sample, being representative of Switzerland, may be representative of other western countries. Interestingly, patients treated in tertiary centres were more likely to participate in the study. We also cannot exclude the possibility that patients with post-traumatic stress were less likely to participate in the study because they avoided being questioned about their disease. This study is the first to report a point prevalence of CD specific post-traumatic stress, which is high. Clinicians may wish to be aware that a considerable number of patients perceive CD as psychologically traumatic to the extent that the disease preoccupies them in thoughts and dreams and thus causes irritability and sleeping difficulties associated with fear. In most cases patients avoid talking about cues which remind them of having the disease, including intake of medication, which may affect adherence to therapy. 2 Such behaviour may unwillingly be encouraged by the usual shortness of consultation time and unfamiliarity of clinical gastroenterologists in dealing with the psychological needs of their patients. Nonetheless, it is evident that post-traumatic stress causes considerable suffering. Therefore, gastroenterologists should be aware of the impact of CD on psychological wellbeing and the fact that this relationship is probably bidirectional. This knowledge may open the door to psychological interventions. Gastroenterologists may wish to ask about re-experiencing, avoidance and hyperarousal and, depending on the intensity of symptoms, may be advised to refer patients for psychological counselling or for a structured clinical interview to assess for PTSD. In the case of full PTSD, trauma focused psychotherapy should be considered, which should be performed by a specialist to avoid possible re-traumatisation.
Values are ORs for 1 SD (99% CI) and corresponding p values. Sensitivity analyses stratified by the three symptom clusters (re-experiencing, avoidance and hyperarousal) of the Post-traumatic Diagnostic Scale and by the four items of the combined outcome (flares, non-response to therapy, complications and extraintestinal manifestations). For these analyses we did not include additional covariates to avoid overfitted models.
What is already known about this subject
▶ Patients with Crohn's disease (CD) report that their psychological needs lack attention from the medical community and that genuine information on psychological topics is difficult to obtain.
▶ Post-traumatic stress, which is traditionally thought to be caused by manmade traumas and natural disasters, is typically accompanied by fear, suffering and reduced quality of life.
▶ As shown by years of worldwide clinical experience and by high level evidence, the treatment of post-traumatic stress is efficient in reducing fear and suffering.
How might it impact on clinical practice in the foreseeable future?
▶ The gastroenterologists may want to ask about symptoms of post-traumatic stress and, where relevant, refer the patients for a structured clinical interview in order to search for a post-traumatic stress disorder (PTSD).
▶ As long as there is no specific evidence for the treatment of post-traumatic stress caused by CD, it is reasonable to treat it in the same way as post-traumatic stress caused by other traumatic events.
▶ If the amount of post-traumatic stress is uncomfortable for the patient but does not reach the duration and impairment of a PTSD, the patient may be offered psychological counselling in order to improve coping with the disease. In the case of a full PTSD, a trauma focused approach by a mental health specialist might be necessary.
▶ Future studies may examine whether treatment of post-traumatic stress protects from disease deterioration. Although this remains to be proven, other outcome measures (eg, improvement of quality of life, reduction of suffering) are sufficient reasons for treatment.
What are the new findings?
▶ Post-traumatic stress is frequently caused by CD.
▶ We observed a strong association between the amount of post-traumatic stress and subsequent disease deterioration.
Open Access This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 3.0) license, which permits others to distribute, remix, adapt, build upon this work noncommercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/bync/3.0/ | 2016-05-12T22:15:10.714Z | 2010-12-01T00:00:00.000 | {
"year": 2010,
"sha1": "77c45d776bab4b8bfb35a51bfcf182ab6573638b",
"oa_license": "CCBYNC",
"oa_url": "https://fg.bmj.com/content/flgastro/2/1/2.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "77c45d776bab4b8bfb35a51bfcf182ab6573638b",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267784845 | pes2o/s2orc | v3-fos-license | Research on the Impact of Inflation Reduction Act on the New Energy Vehicle Market in China and America
: This study aims to explore in depth the impact of the Inflation Reduction Act on the new energy vehicle market in China and the United States. As a key solution to address environmental pollution and energy sustainability challenges, the development of new energy vehicles is influenced by both government policies and market factors. Inflation is a widely concerned issue that may have multiple impacts on the automotive market, but its specific effects have not been thoroughly studied. In this study, we first reviewed the current situation and trends of the new energy vehicle market, with a focus on analyzing the comparison between the Chinese and American markets. Subsequently, we delved into the background, main content, and legislative motivations of the Inflation Reduction Act. Through a comprehensive examination of market data and policy text analysis, we have conducted a detailed study of market changes before and after the implementation of the bill, with a particular focus on the trends in sales volume and price stability. Finally, we have drawn some preliminary conclusions that the inflation reduction bill may have complex impacts on the new energy vehicle market in China and the United States, including sales volume, price stability, corporate strategy, and technological innovation.
Introduction
With the increasing importance of global climate change and energy sustainability, new energy vehicles, as a clean and environmentally friendly means of transportation, are gradually emerging. In China, the United States and other countries, governments and automobile manufacturers have invested substantial resources to promote the development and adoption of new energy vehicles. However, at the same time, inflation and its impact on economic and market stability have gradually attracted widespread attention [1]. Inflation refers to the phenomenon in which the value of money declines as it circulates, leading to a general increase in prices. This phenomenon may influence the automobile market in many ways, including consumers' decisions to buy a car, fluctuations in automobile prices, and enterprise cost management. In order to cope with the instability that inflation brings to the market, the government may formulate and implement an inflation reduction bill to control the inflation rate and maintain economic stability [2]. In this context, this study aims to explore the impact of the Inflation Reduction Act on the new energy vehicle market in China and the United States. We will analyze the changing trends in the new energy vehicle market before and after the implementation of the bill, focusing on changes in sales volume and price stability [3]. In addition, we will study competition and cooperation in the new energy vehicle market, including enterprise competition strategies and technological innovation and development [4]. In this research, we will use professional data collection and analysis methods, combined with detailed market data and policy text analysis, to fully understand the impact of the bill on the market. At the same time, we will draw on relevant literature and international experience to deepen our understanding of the inflation reduction bill and how it affects the new energy vehicle market. The importance of this research lies not only in assessing the impact on the new energy vehicle market, but also in providing useful insights on how to deal with inflation for government decision makers, corporate strategists and researchers [5][6]. Through an in-depth understanding of the impact of the bill, we can better guide policy formulation and market development, promote the sustainable growth of the new energy automobile industry, and maintain economic stability. In a word, the purpose of this study is to examine the impact of the Inflation Reduction Act on the new energy vehicle market in China and the United States, so as to reveal the dynamics and trends of the new energy vehicle market. Through in-depth analysis, we hope to provide useful information for promoting the sustainable development of the new energy automobile industry, provide reference for the management of inflation, and offer useful insights for future policy and strategic decisions [7].
The Impact of Inflation Reduction Act on the New Energy Vehicle Market in China and the United States
Market conditions before the implementation of the bill
Before the implementation of the bill, both the new energy vehicle markets in China and the United States experienced rapid growth. The sales volume of new energy vehicles increased year by year, partly due to government incentive policies, including tax reductions, subsidies, and car purchase discounts [8]. In China, for example, sales of new energy vehicles have been on the rise, partly because of a series of measures taken by the government to encourage the development of new energy vehicles, such as free licence plates, charging infrastructure construction, and car purchase subsidies. The price of new energy vehicles has been gradually decreasing, driven by market competition and technological progress. For the market situation before the implementation of the bill, Figure 1 shows the trend in sales of new energy vehicles. This growth may be attributed to various factors, including support from national policies, growth in market demand, technological progress, and increased environmental awareness. In addition, the market environment and consumer preferences of different countries may also have an impact on sales growth [9]. Before the implementation of the bill, the high cost of new energy vehicles had always been a major obstacle for consumers. However, with the improvement of battery technology, the expansion of production scale, and intensifying competition, the price of new energy vehicles has become relatively stable and gradually more attractive [10].
Market changes after the implementation of the bill
After the implementation of the bill, both the new energy vehicle markets in China and the United States underwent a series of changes. On the one hand, sales of new energy vehicles may be affected in the short term, as the bill may lead to consumer concerns about car prices and purchase costs. However, this may also stimulate consumer interest in more environmentally friendly modes of transportation, prompting them to consider new energy vehicles. One of the main objectives of the bill is to reduce the instability of inflation in the automotive market. For the market changes after the implementation of the bill, Figure 2 shows the trend in sales of new energy vehicles, which continue to grow, although at a slower pace, from 120,000 vehicles in 2021 to 160,000 vehicles in 2025. Continuing the comparative analysis shows that, compared to the United States, China is expanding more actively in the new energy vehicle market. This instability may lead to fluctuations in car prices, thereby affecting consumer purchasing decisions. Therefore, it is crucial to study the stability of new energy vehicle prices after the implementation of the inflation reduction bill. By analyzing the correlation between price data and inflation rates, we can better understand the impact of the bill on the market.
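As a rough arithmetic sketch, the implied average annual growth rate can be computed from the two sales figures quoted above for Figure 2 (120,000 vehicles in 2021 to 160,000 in 2025); these are only the numbers cited in the text, not an independent data source.

# Implied compound annual growth rate from the figures quoted in the text.
start, end, years = 120_000, 160_000, 2025 - 2021
cagr = (end / start) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # about 7.5% per year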
In-depth exploration of potential influencing factors
The implementation of the inflation reduction bill may have an impact on various links in the new energy vehicle industry chain. For example, it may affect raw material prices, component supply chains, and production costs. Research should analyze these changes in depth to evaluate the impact of the bill on the entire industry. Consumers' car purchase decisions are influenced by various factors, including price, tax policies, and environmental awareness. By investigating and analyzing changes in consumer behavior and attitudes, we can better understand the impact of the bill on consumer decision-making. The inflation reduction bill may affect consumer confidence and demand for cars. Inflation is often accompanied by rising prices, which may lead some consumers to reduce their car purchase expenses or seek cheaper alternatives. Therefore, research needs to examine more deeply the impact of the bill on consumer car purchasing decisions, including the emphasis consumers place on environmental protection.
One of the core characteristics of new energy vehicles is the use of clean energy sources, such as electricity. The implementation of the inflation reduction bill may have an impact on energy policies, including the development and utilization of renewable energy. This has a potentially significant impact on the new energy vehicle market, as more renewable energy may reduce battery charging costs and increase the attractiveness of new energy vehicles. The new energy vehicle market has been constantly innovating, including continuous progress in battery technology, electric drive technology, and autonomous driving technology. The inflation reduction bill may have an impact on technological innovation and cost reduction, as the government may encourage research and development to improve energy efficiency and reduce costs. This may have a profound impact on the competitiveness and attractiveness of the new energy vehicle market. The trade relationship between the new energy vehicle markets of China and the United States is also a potential factor. The inflation reduction bill may affect import and export policies, which may have an impact on the operations of multinational manufacturers and suppliers. It is crucial to conduct in-depth research on the impact of international trade factors on market stability and development. After the implementation of the bill, consumer purchasing behavior may change, and consumers may pay more attention to environmental factors or price stability.
Competition and Cooperation Between China and the United States in the New Energy Vehicle Market
Enterprise competitive strategy
In the Sino-US new energy vehicle market, there are many major competitors, including Tesla, NIO, XPeng, LI and other companies. These enterprises adopt different competitive strategies, including product differentiation, pricing strategy, and market positioning. As the world's leading manufacturer of electric vehicles, Tesla has always taken high-end electric vehicles as its main competitive strategy, while Chinese enterprises such as NIO and XPeng pay more attention to the mid- to high-end market in their positioning. In order to cope with fierce competition, some new energy automobile enterprises choose cooperation and alliances. Such cooperation can cover technology sharing, charging infrastructure construction, and supply chain cooperation. For example, some new energy automobile enterprises in China have established strategic cooperative relationships with battery suppliers to ensure a stable battery supply. In addition, multinational automakers have also formed joint ventures with their partners in China to launch new energy vehicles in the Chinese market.
Innovation and technological development
Technological innovation plays a key role in the new energy vehicle market. New energy automobile manufacturers have continuously introduced new battery technologies, driver assistance systems and automatic driving technologies to improve vehicle performance and safety. Especially in the field of battery technology, continuous R&D (Research and Development) and innovation help improve battery capacity, charging speed and lifetime, and further promote the market popularization of electric vehicles. There is a trend of technical cooperation and knowledge transfer in the new energy vehicle market between China and the United States. Chinese and American companies may cooperate in the fields of battery technology, electric drive technology and autonomous driving technology. This kind of cooperation can accelerate the development and application of technology, improve automobile performance and reduce costs, thus enhancing the competitiveness of enterprises. In addition, the government may also encourage technical cooperation to promote the development of the new energy automobile industry. The government may provide financial support, intellectual property protection and R&D cooperation opportunities to promote technological exchange between enterprises.
Conclusions
In this study, we conducted an in-depth analysis of the impact of the Inflation Reduction Act on the new energy vehicle market in China and the United States. Through examining market conditions, price trends, competition, and cooperation, we have come to the following conclusions. Firstly, before the implementation of the bill, the new energy vehicle markets in China and the United States experienced rapid growth. The sales volume of new energy vehicles continued to rise, thanks to government incentive policies such as car purchase subsidies and tax reductions. Secondly, after the implementation of the bill, the market underwent a series of changes. The sales volume of new energy vehicles may be affected in the short term, as the bill has raised concerns among consumers about car prices and purchase costs. However, the bill may also stimulate consumer interest in environmentally friendly transportation. In terms of competition and cooperation, there are multiple competitors in the new energy vehicle market between China and the United States, each adopting different competitive strategies. Meanwhile, some companies choose to cooperate and form alliances to jointly address market challenges. Technological innovation plays a crucial role in market competition, and new energy vehicle manufacturers continuously introduce new technologies to improve vehicle performance and safety. Over time, we anticipate that the market will continue to evolve and will require joint efforts from governments, businesses, and research institutions to address new challenges and opportunities, in order to promote the sustainable development of the new energy vehicle industry.
Figure 1. Trends in sales of new energy vehicles before the implementation of the bill
Figure 2. Trends in sales of new energy vehicles after the implementation of the bill | 2024-02-23T16:04:16.909Z | 2024-01-23T00:00:00.000 | {
"year": 2024,
"sha1": "3d55638bb3e5598c7e19e76b4da1d1163c0116df",
"oa_license": "CCBYNC",
"oa_url": "https://drpress.org/ojs/index.php/fbem/article/download/16700/16212",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "fb7af06fee08c74006fa2812fc13b98a099cc31d",
"s2fieldsofstudy": [
"Economics",
"Environmental Science"
],
"extfieldsofstudy": []
} |
95963028 | pes2o/s2orc | v3-fos-license | Power-dependent speciation of volatile organic compounds in aircraft exhaust
• At low power, scaling hydrocarbons to ethene reduces variability amongst engines.
• At high power, there is a difference in composition with aromatics dominating.
• At low power, thermal cracking dominates.
• At high power, aromatic species are formed via the HACA process in the exhaust.
• In downwind samples, HAPs make up a large portion of measured emissions (27–42%).
Introduction
The emission of volatile organic compounds (VOCs) into the atmosphere adversely affects the environment including production of ozone and increased levels of greenhouse gases and hazardous air pollutants (HAPs). The primary anthropogenic source of VOC emissions is the combustion of fossil fuels and related evaporation. Aircraft usage is a minor consumer of fossil fuels, accounting for only 3% of total usage. However, total jet fuel usage in the United States is projected to increase by 130% between 2009 and 2030 (Federal Aviation Agency (FAA), 2010). In addition, aircraft emissions are unique in that they directly affect the atmosphere both at ground level (aircraft taxi, idling, take-off and landing) and at cruise altitudes (upper troposphere and lower stratosphere).
The complete combustion of jet fuel results in the formation of carbon dioxide (CO2) as the only carbonaceous species. However, because of combustion inefficiencies, other carbon-containing compounds are emitted both as gaseous and particulate species. To quantify the impact of these emissions, the International Civil Aviation Organization (ICAO) requires manufacturers to measure and document carbon monoxide (CO), unburned hydrocarbons, and smoke number emissions from all engines used in civil aircraft at power settings equivalent to idle (7% of rated thrust), airport approach (30%), climb-out (85%) and take-off (100%). While these "certification tests" yield estimates of total carbonaceous species emissions, they provide no information on hydrocarbon speciation and thus no insight into the reactivity or hazardous nature of the exhaust.
The sum of NMHCs in aircraft exhaust is highly dependent on engine power with the highest emission at power settings typical of ground idle and decreasing as power increases. The organic fraction of the exhaust is composed primarily of short-chain unsaturated hydrocarbons (Anderson et al., 2006) and aldehydes (Spicer et al., 1994). In particular, the predominant species emitted are ethene and formaldehyde (HCHO) at a ratio of 1:1 (Herndon et al., 2009). This is in strong contrast to the composition of unburned jet fuel which is composed primarily of C11–C14 hydrocarbons (Spicer et al., 1994). The short-chained species found in the exhaust are produced from fuel cracking (alkenes) and partial oxidation (forming formaldehyde) during combustion (Warnatz et al., 2006). In a recent study looking at exhaust plumes from multiple aircraft while on an active runway, Herndon et al. (2009) showed that scaling the near-idle power emission indices (EIs) to tracers of fuel cracking (formaldehyde or ethene) eliminated much of the variability between engine types. However, some evidence points to variation in the hydrocarbon speciation profile with changes in engine power setting, with higher engine power settings producing a larger fraction of aromatics and alkanes (Anderson et al., 2006). The formation of aromatics is an intermediate step in the formation of soot (Warnatz et al., 2006) which also increases as aircraft engine power increases. The soot formation process proceeds via hydrogen abstraction/carbon addition (HACA) which first forms aromatic species, then polycyclic aromatic hydrocarbons (PAHs), and finally soot (Bauer and Jeffers, 1988; Wang and Frenklach, 1994; McEnally and Pfefferle, 1997).
The power-dependent speciation of emitted VOCs was a focus of the third Aircraft Particle Emission Experiment (APEX-3) at the NASA Glenn Research Center in Cleveland, Ohio during November 2005 (Wey et al., 2006). This project focused on the quantification and speciation of both particulate and gaseous emissions from different engines. Results from the particulate sampling are given in Timko et al. (2010b) and Kinsey et al. (2011), while gas-phase measurements of nitrogen oxides and formaldehyde are found in Timko et al. (2010a). The current analysis reports emissions of NMHCs directly behind the aircraft engine (non-oxygenated species only) and in exhaust plumes downwind from the engines (including carbonyls).
Sample collection
During APEX-3, aircraft were parked and chocked on a side portion of the tarmac at Cleveland Hopkins International Airport in Cleveland, Ohio. The thrust of the engines was then varied from conditions typical of idle (4% of full rated thrust) to take-off (93%). Samples were collected using an inlet probe positioned one meter behind the engine exit plane. The sample was immediately diluted with dry nitrogen, resulting in a variable dilution ratio between 4:1 and 90:1. Dry nitrogen dilution was necessary for concurrent aerosol measurement (dilution reduces aerosol processing in the sampling lines and allows for aerosol concentrations within the detection limits of the instruments; Wong et al., 2011). A pump drew the exhaust through stainless steel tubing to a mobile lab where samples were collected in two-liter stainless steel canisters. In total, 20 direct exhaust samples were collected from three engine types at powers between 4 and 30% (Table 1).
Plume samples were collected by the U.S. Environmental Protection Agency's (EPA's) National Risk Management Research Laboratory using its Diesel Emissions Aerosol Laboratory (Kinsey et al., 2006). The EPA extracted a continuous sample from the engine exhaust plume 30 m behind the exit plane. To obtain sufficient sample for analysis, time-integrated sampling of the plumes was performed for each engine encompassing multiple engine thrust settings as described by Kinsey (2009). Samples were collected for analysis of NMHCs (in evacuated stainless steel canisters) and carbonyls (using 2,4-dinitrophenylhydrazine [DNPH] impregnated silica gel cartridges equipped with potassium iodide ozone scrubbers). During transport and in the field, the silica gel cartridges were stored at approximately −20 °C. Background mixing ratios were also sampled by an independent sample extraction and analysis system. Additional details on the EPA sampling campaign are provided in the final EPA report for the three APEX campaigns (Kinsey, 2009).
a Samples collected behind the right and left engines of the same aircraft.
b Two Boeing 757s with RB211 engines were sampled for the direct exhaust and one for the downwind sample.
Sample analysis
After the experiment, the direct exhaust samples were analyzed at the University of California, Irvine (UCI) using gas chromatography for the measurement of CO2, NMHCs and halogenated hydrocarbons. However, only CO2 and NMHC data are presented here because emission rates of halocarbon species were found to be small. Carbon dioxide was separated by GC and quantified by a thermal conductivity detector. Non-methane hydrocarbons and halocarbons were measured using a three-GC system featuring a quadrupole mass selective detector (MSD), two electron capture detectors (ECDs) and three flame ionization detectors (FIDs; Colman et al., 2001). In brief, an aliquot of each sample was preconcentrated in a stainless steel loop containing glass beads cooled with liquid nitrogen. The loop was heated to re-volatilize the sample, which was then flushed by a helium carrier, split into six streams, and output to different column–detector combinations. The NMHCs were detected using the FIDs and the MSD. Quantification of the compounds was performed by the analysis of whole air standards after every eight samples. The limit of detection (LOD) was 3 pptv for all compounds. Samples were pressurized and analyzed within a month after collection, both factors which have been shown to increase sample integrity (Apel et al., 2003).
Analysis of the 30-m downwind VOC canister samples was performed using similar GC-MSD and GC-FID systems. The approach is a combination of EPA Methods TO-15 (EPA, 1999a) and CB-4 (EPA, 1986), used to resolve air toxics and hydrocarbon species. Carbonyl samples collected on the DNPH silica gel cartridges were extracted and analyzed using a High Pressure Liquid Chromatograph (HPLC) equipped with an ultraviolet (UV) detector. Sample analysis was performed using a modification of EPA Method TO-11A (EPA, 1999b) to incorporate additional compounds.
Data analysis procedures
The dilution factors for the direct exhaust samples were determined by measuring CO2 in the raw exhaust (by non-dispersive infrared gas analyzers) and in the diluted samples (by the GC system described above). The background mixing ratio of each compound was subtracted from the dilution-corrected mixing ratio to determine the enhancement in compound X measured in the exhaust (ΔX). Background measurements for the VOCs were made by the EPA during periods with no aircraft activity.
The enhancement in the exhaust was then used to determine the emission indices (EIs) for the individual NMHC species. This was accomplished by ratioing the enhancement of a compound to the enhancement in CO2, which is the most abundant species emitted by aircraft engines and has an emission index of 3160 g CO2 kg−1 fuel based on complete combustion of the JP-8 fuel used in the tests (Knighton et al., 2007). Using this value, the EI for a compound (EI_X) can be found by EI_X = EI_CO2 × (ΔX/ΔCO2) × (M_X/M_CO2), where ΔX and ΔCO2 must be in the same units (pptv or ppbv), and M_X and M_CO2 are the molar masses (g mol−1) of compound X and CO2, respectively. Enhancements were deemed significant if they were greater than 100 pptv which is larger than the expected variations in background mixing ratios during the tests. At typical ΔCO2 values measured (2500–20,000 ppmv), this corresponds to EIs of approximately 100 mg ethene kg−1 fuel and 30 mg benzene kg−1 fuel. Because exhaust mixing ratios are much larger than background values, background variation is believed to be a minor cause of uncertainty. Instead, uncertainties in EIs are dominated by uncertainties in the dilution factor which can be attributed to variations in sample flow and pressure during sample collection. These uncertainties are estimated to be on the order of 20% (Anderson et al., 2006). Emission indices vary with ambient temperature and pressure and were corrected to 15 °C and 1 atm according to the Boeing fuel flow methodology (DuBois and Paynter, 2006). This correction was less than 1.5% for all samples but was performed to facilitate comparison with other experiments.
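The following is a minimal sketch of the emission-index calculation described above. The value EI_CO2 = 3160 g kg−1 fuel is taken from the text; the example enhancements and the function name are illustrative assumptions, not values from the paper.

# Minimal sketch of the emission-index calculation EI_X = EI_CO2*(dX/dCO2)*(M_X/M_CO2).
EI_CO2 = 3160.0   # g CO2 per kg fuel (complete combustion of JP-8, from the text)
M_CO2 = 44.01     # g mol^-1

def emission_index(delta_x_ppbv, delta_co2_ppbv, molar_mass_x):
    """EI_X in g per kg fuel from background-corrected enhancements (same units)."""
    return EI_CO2 * (delta_x_ppbv / delta_co2_ppbv) * (molar_mass_x / M_CO2)

# Hypothetical example: 500 ppbv ethene enhancement against 10,000 ppmv CO2.
delta_ethene_ppbv = 500.0
delta_co2_ppbv = 10_000 * 1000.0   # 10,000 ppmv expressed in ppbv
ei_ethene = emission_index(delta_ethene_ppbv, delta_co2_ppbv, molar_mass_x=28.05)
print(f"EI(ethene) = {ei_ethene * 1000:.0f} mg per kg fuel")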
Results – direct exhaust samples
Power-dependent emission indices for the NMHCs measured in the direct exhaust and ICAO certification values for the AE3007, PW4158 and RB211 engines are given in Table 2. Halogenated compounds had small background enhancements and are therefore not reported. The total measured NMHC is lower than ICAO estimates of unburned hydrocarbons at idle except for PW4158. This is likely due to the fact that the ICAO measurements include oxygenated species not measured in the direct exhaust during APEX-3. The reason why the measured PW4158 NMHC emissions are greater than the ICAO certification (even though the compounds included in the field measurement should be only a subset of those included in the ICAO measurement) is unclear. However, in addition to this quantitative difference, the speciation profile of the PW4158 was found to differ from the other engines (see discussion below).
Consistent with the ICAO databank, emissions were highest at idle conditions and decreased dramatically with increasing power (Fig. 1). All engines were tested near 7% (AE3007 at 8%, others at 7%) since it is used as the idle conditions for ICAO certification. However, in practice aircraft are known to idle at lower powers with considerable variability (Herndon et al., 2009). Thus, the RB211 engine exhaust was also sampled at 4% power. Emission indices for the 4% power samples were significantly greater than measured at 7% (a factor of 3.4 for total NMHC and up to an order of magnitude for individual species), suggesting emissions at low power are very sensitive to the actual power setting while at idle. This is also seen in runway studies of idling aircraft (Herndon et al., 2009) and is especially important because of the large amount of time that aircraft spend at idle in comparison to take-off conditions. While fuel flow rates are higher at take-off, the increased EIs seen at idle and longer periods of time at these power conditions cause aircraft idling and taxiing to be the major source of VOC emissions at ground level (similar to calculations made for NO2 emissions in Wood et al., 2008).
As the power increases, not only do the emission indices decrease but the NMHC speciation changes. At low power, alkenes are the predominant species accounting for over 50% of the measured NMHC (Fig. 2). However as the power increases, the fractions of alkenes and alkynes both decrease and the fraction of aromatics increases. This suggests a shift in NMHC production processes (discussed below).
In all of the idle samples (4–8% thrust), ethene is the most abundant NMHC measured in the exhaust. A scatter plot of total NMHCs versus ethene shows a good correlation with nearly identical slopes for the AE3007 and RB211 engines (Fig. 3). The PW4158 engine has 18% higher emissions of NMHCs relative to ethene; however these higher values are not significantly different due to the limited sample size at low power (only two PW4158 samples were collected at 7% power). Table 2 also includes formaldehyde data from Timko et al. (2010a) for the same engines measured by a tunable laser differential absorption spectrometer (TILDAS). The ratios of ethene-to-formaldehyde at idle ranged from 0.60 to 0.75 which are lower than previous measurements (0.78–1.26; Herndon et al., 2009). However, these lower than expected ratios could be the result of sampling artifacts from the sampling probe. De la Rosa Blanco et al. (2011) found that hydrocarbons can be oxidized in the probes to form CO. If ethene is preferentially oxidized (or is only partially oxidized to form formaldehyde) the ethene-to-formaldehyde ratio will be lower than in true exhaust.
Scaling emission indices to ethene (as suggested by Herndon et al., 2009) reduced the variability among engines. However, the linear fits can be largely biased by the samples with the highest emission indices (samples collected at low power). The y-intercepts in Fig. 3 are non-zero suggesting a change in the speciation and source with an increase in power (decrease in ethene). Applying the same fitting technique with the different hydrocarbon classes (alkenes, alkynes and aromatics) also gave good correlations (shown for RB211 in Fig. 4). However, the alkenes and alkynes have a near-zero y-intercept while the aromatics have a y-intercept closer to 20 mg kg−1 fuel. This is also seen in Fig. 5 where propene, benzene and toluene emission indices for the RB211 engine are compared to idle measurements from previous studies (Herndon et al., 2009 and references therein) and measurements during the NASA EXCAVATE campaign (Experiment to Characterize Aircraft Volatile Aerosol and Trace-species Emissions; Anderson et al., 2006). During EXCAVATE, VOC emissions were characterized for a RB211-535-E4 engine (which is an earlier version of the RB211 engine studied during APEX-3) at idle to 61% power. Propene values at all power settings lie along the line defined by the low power data from multiple tests, whereas the aromatics (benzene and toluene) deviate at high power. The deviation from idle conditions increases as power setting increases (i.e., as the power increases aromatic values show more deviation from the low power data shown in the shaded area). In addition, the benzene-to-toluene ratio changes with power. For the AE3007 engine, the ratio is reduced from 2.2 at low power to 0.9 at high power. The decreasing benzene-to-toluene ratio with increasing power is related to the production of larger, more complex hydrocarbons via the HACA process as power increases.
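A minimal sketch of the type of scaling regression described above follows; it uses made-up emission indices, not the measured data, and simply fits a species EI against the ethene EI to inspect the intercept, which the text uses to separate cracking-dominated species (near-zero intercept) from aromatics (non-zero intercept).

# Minimal sketch (hypothetical numbers) of regressing a species EI against ethene EI.
import numpy as np

ei_ethene = np.array([900.0, 400.0, 150.0, 60.0])    # hypothetical EIs, mg per kg fuel
ei_aromatics = np.array([260.0, 130.0, 65.0, 40.0])  # hypothetical EIs, mg per kg fuel

slope, intercept = np.polyfit(ei_ethene, ei_aromatics, 1)
print(f"slope = {slope:.3f}, intercept = {intercept:.1f} mg per kg fuel")
# A clearly non-zero intercept would suggest a source of aromatics that does not
# scale with ethene (i.e. not fuel cracking), as argued in the text.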
The correlation of the alkanes with ethene is more difficult to interpret (Fig. 6). The correlation between ethane and ethene is good, but the correlation between larger alkanes and ethene is poor. This suggests that ethane might have a dominant combustion source but the larger alkanes have an additional, non-combustion source such as unburned fuel.
Results – EPA plume samples
The samples collected downwind from the aircraft represent a naturally diluted sample of the exhaust. Emission indices were determined using the method described above (Table 3), but the plume samples were collected over a prolonged period encompassing multiple engine powers and thus represent an average emission sample. Time-weighted powers were between 32 and 41%. However, the non-oxygenated NMHC emissions (Fig. 7) were closer to levels measured in the direct exhaust at 7 and 15%. These high values are possibly due to transient emissions such as engine start-up and power changes; Timko et al. (2010b) found that emissions of CO at idle were two times greater at engine start-up than when the engine had already been operated at higher power settings.
The ethene-to-formaldehyde ratio also varied among the samples. This ratio was shown by Herndon et al. (2009) to be fairly stable between 0.76 and 1.26 at idle conditions. Ratios measured in the plumes varied between 0.3 and 1.5. Likewise, Spicer et al. (1994) found a ratio of 0.14 at 80% power for a CFM56 engine (values were near detection limits). This suggests that this ratio is power dependent, most likely due to a shift in combustion processes.
Discussion – combustion processes
As mentioned previously, a shift is seen in the NMHC speciation (towards production of aromatics) as engine power increases. This is likely the result of increased engine temperatures and pressures as engine power increases (Wey et al., 2006) causing a shift in combustion processes (Schulz et al., 1999). Fuel cracking is believed to be the primary source of short-chain alkenes and dominates at idle power, while formation of larger compounds (e.g., toluene) can occur at the higher power settings (higher temperatures). The variance in NMHC emissions with power can be found by examining the NMHC-to-ethene ratio. The ratio at idle (r_idle; using 7–8% power for ease of comparison between engines) gives the amount of emissions expected if the combustion processes are independent of power. If ethene is assumed to be produced strictly via homolytic fission processes associated with fuel cracking, then the amount of NMHC emissions expected from cracking at higher powers is given by: NMHC_cracking = Ethene_high power × r_idle. The excess NMHC is therefore presumed to be from other sources: NMHC_other sources = NMHC_high power − NMHC_cracking. Table 4 shows the fraction of NMHC emissions due to fuel cracking as a function of power. At idle, cracking is the only source (by definition), while at higher powers other sources become important (including growth of larger compounds via the HACA process).
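A minimal numerical sketch of the cracking-attribution bookkeeping defined above is given below; the emission indices used are hypothetical placeholders, not the values in Table 4.

# Minimal sketch of attributing NMHC to cracking vs other sources (hypothetical EIs).
ei_ethene_idle, ei_nmhc_idle = 900.0, 1800.0   # hypothetical, mg per kg fuel, at idle
ei_ethene_hp, ei_nmhc_hp = 60.0, 180.0         # hypothetical, at higher power

r_idle = ei_nmhc_idle / ei_ethene_idle         # NMHC-to-ethene ratio at idle
nmhc_cracking = ei_ethene_hp * r_idle          # NMHC expected from cracking alone
nmhc_other = ei_nmhc_hp - nmhc_cracking        # attributed to other sources
print(f"fraction from cracking: {nmhc_cracking / ei_nmhc_hp:.2f}")
print(f"fraction from other sources: {nmhc_other / ei_nmhc_hp:.2f}")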
This shift in combustion processes with power affects the particulate emissions of the engine. Production of aromatic species via the HACA process can act as an intermediate in the formation of PAHs and soot. However, PAH (Kinsey et al., 2011) and soot (Timko et al., 2010b) emissions follow a 'u'-shaped trend with a decrease in emissions from idle to a minimum at intermediate powers, and then an increase to maximum emissions at 100% power. Thus the formation of PAHs and soot at low power is from aromatics produced by fuel cracking and unburned fuel, while their formation at high power is from aromatics produced via the HACA process.
Discussion – ozone formation potential
The shift in hydrocarbon composition as the power changes also affects the reactivity of the exhaust. To analyze this shift, the Maximum Incremental Reactivity (MIR) scale for ozone production is used (Carter, 1994; using updated values from Carter, 2010). The MIR scale relates the amount of ozone (O3) produced from an incremental increase in the amount of a specific hydrocarbon added to the urban atmosphere. This allows for a scale of ozone production from individual hydrocarbons. As ozone production is highly dependent on the reaction rate of the hydrocarbon with hydroxyl radical, MIR values are also dependent on the hydroxyl reactivity of the hydrocarbon. Therefore, of the compounds measured, alkenes have the highest MIR values (ranging from 6 to 15 g O3 (g VOC)−1) while alkanes (0.3–1.8 g O3 (g VOC)−1) and CO (0.06 g O3 (g VOC)−1) have the lowest. Alkynes and aromatics have intermediate MIR values. While this method cannot directly be used to determine ozone production in the concentrated engine plume (due to high NOx mixing ratios) relative MIR values give a relative estimate of ozone production potential downwind.
For the direct exhaust samples, an average MIR value can be determined based on the hydrocarbon mixture: MIR_avg = Σ_i (EI_i × MIR_i) / Σ_i EI_i, where EI_i and MIR_i are for the individual NMHCs measured in the exhaust. The average MIR value for the exhaust decreases with power due to a decrease in the proportion of alkenes (with high MIR values) and a relative increase in aromatics and alkanes. For the AE3007 engine, the average MIR value decreases from 5.9 g O3 (g VOC)−1 at 7% to 4.6 g O3 (g VOC)−1 at 30%, giving a decrease in ozone productivity of 22% (similar decreases were seen for the other engines; 7.4 to 5.8 g O3 (g VOC)−1 for the PW308 and 5.8 to 3.4 for the RB211 engine). Surprisingly, despite the low MIR value for CO it can produce a large amount of ozone due to its high EI in the exhaust. At 30% power, more ozone can be produced from CO oxidation than from oxidation of hydrocarbons (Fig. 8). Conversely at low power, production from alkenes dominates.
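A minimal sketch of the emission-weighted reactivity estimate is shown below. The emission indices and MIR values are hypothetical placeholders; the actual values come from Carter (2010) and Table 2 of the paper.

# Minimal sketch of the emission-weighted average MIR (hypothetical inputs).
species = {
    # name: (EI in mg per kg fuel, MIR in g O3 per g VOC) -- illustrative numbers only
    "ethene":  (900.0, 9.0),
    "propene": (250.0, 11.7),
    "benzene": (120.0, 0.7),
    "toluene": (60.0, 4.0),
}
total_ei = sum(ei for ei, _ in species.values())
avg_mir = sum(ei * mir for ei, mir in species.values()) / total_ei
ozone_potential = {name: ei * mir for name, (ei, mir) in species.items()}  # mg O3 per kg fuel
print(f"average MIR = {avg_mir:.1f} g O3 per g VOC")
print(ozone_potential)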
Discussion – HAPs emission
Additionally, aircraft exhaust includes a number of compounds the United States classifies as HAPs (EPA, 2008). These compounds are identified in Tables 2 and 3. In the plume samples, HAPs made up 27–42% of the measured emissions with formaldehyde dominating. The contribution of HAPs in the direct exhaust is lower (8–28% with an increase at higher power) because oxygenated species were not measured. Due to differences in measurement techniques and engine power settings, the direct and plume exhaust HAPs levels cannot be compared quantitatively. Further research is needed to determine the production and gas-particle partitioning of HAPs that can occur as the exhaust ages and cools.
Conclusions
The current research shows that the speciation of hydrocarbon emissions from aircraft is power dependent. Low engine powers (representative of idle conditions) produce a larger amount of total NMHCs composed primarily of alkenes and alkynes. As power increases, total emissions decrease and the speciation shifts towards the production of aromatics. The increasing importance of aromatics observed as power is increased is likely the result of a shift from cracking (forming predominantly alkenes) to growth of larger compounds (formation of aromatics). This shift causes a decrease in the reactivity of the emissions and therefore a decrease in the ozone formation potential.
At ground level, aircraft operated in the idle and taxi mode are the major source of emissions due to a combination of the higher emission indices and longer periods of time spent at these power conditions (even taking into account increased fuel flow rates at higher engine powers). These emissions also are the most likely to cause formation of ground level ozone. At altitude the shift in speciation causes a larger relative emission of aromatics (some of which are HAPs). Because of the colder temperatures at cruise altitudes, these HAPs will have longer atmospheric lifetimes than those emitted at ground level.
Fig. 1. Total NMHC EIs in the direct exhaust as a function of power for the three engines studied.
Fig. 5. Emission indices for the RB211 engine (crosses) and EXCAVATE dataset (circles; from Anderson et al., 2006). The data is colored by engine power setting. The shaded area represents the ratios reported by Herndon et al. (2009) at idle conditions. These ratios are for high ethene EIs but have been extended to all ethene values.
Table 2
Emission indices (mg kg−1 of fuel) for VOCs measured in the direct exhaust samples.
Timko et al. (2010a). a Ethylbenzene and 3 isomers of ethyltoluene. b ICAO emission indices for total unburned hydrocarbons (includes oxygenates). c From Timko et al. (2010a); AE3007 and RB211 values are averages of two engines. d Shaded compounds are classified as HAPs; Fraction = Total HAPs/Total NMHCs. e Uncertainties for the lowest power are the 1 sigma standard deviation.
Table 3
Emission indices (mg kg−1 of fuel) for VOCs measured in the plume samples.
a Shaded compounds are classified as HAPs. b Other than n-Hexane. c Total HAPs/(NMHC + Oxygenated).
Table 4
Fraction of NMHC from cracking and other sources (including pyrolysis).
Fig. 8. Percentage of ozone formation potential for each hydrocarbon category for the AE3007 engine at 7% and 30% power. | 2019-04-05T03:38:08.843Z | 2012-12-01T00:00:00.000 | {
"year": 2012,
"sha1": "0c9379f04da23c9f0f960fa66e1d37df0910b3bf",
"oa_license": "CCBY",
"oa_url": "https://escholarship.org/content/qt5mc232m9/qt5mc232m9.pdf?t=q4i9lv",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "dfb5317dea31159a8fa9a47fe09e19f703424111",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
119516918 | pes2o/s2orc | v3-fos-license | Quantum theory: kinematics, linearity and no-signaling condition
We show that the linearity of the evolution in Quantum Mechanics follows from the definition of the kinematics. The same result is obtained for an arbitrary theory with a state space that includes mixtures of different preparations. Next, we formulate the non-signaling theorem and show that the theorem poses no additional restriction on Quantum Mechanics provided the kinematics is given. We also discuss the validity of the postulate for the case of more general theories.
I. INTRODUCTION
It was more than seventy years ago when Quantum Mechanics became widely accepted and established as one of the fundamental theories of Nature. Despite its success there are still several questions which, at least for a certain part of the physical community, have not been answered satisfactorily yet. The main reason is that there is much space between the abstract mathematical elements of the theory and the real objects prepared and measured in our laboratory. The rigorous mathematical formulation of the theory was given in [1] and can be summarized in a few postulates. There have always been numerous attempts to derive the postulates of Quantum Mechanics from some other more "fundamental" or at least physically well motivated postulates. One example is the attempt to derive the linearity of the evolution from the no-signaling postulate [2]. Such attempts usually raise some discussion [3,4] and a casual non-expert reader may be confused by the language used by experts. In this paper we will try to clarify the relation between the linearity and the no-signaling condition in Quantum Mechanics as well as present more general results.
To begin with, let us briefly summarize the most frequently used representation of quantum objects, i.e. the Hilbert space formulation of quantum theory. In order to avoid certain mathematical complications we will work with finite-dimensional Hilbert spaces. However, our discussion remains valid in the case of infinite ones too. In this framework states and observables are associated with specific linear operators acting on a given Hilbert space [5]. Each outcome λ_k of the measurement M is associated with one of these operators and the probability for measuring the corresponding outcome is given by the trace rule p_k = Tr(̺ F_k) (1.2), provided that the system was prepared in the state ̺. In the case of projective measurements these operators possess the property F_j = F_j† = F_j², i.e. they are projective operators. For a general measurement M these operators must be positive, i.e. F_k = F_k† and F_k ≥ 0, and sum up to the identity operator, i.e. Σ_k F_k = 1. Let us denote by P(H) the set of all positive elements of B(H). Usually we use the concept of an operator measure defined on Borel sets B(R) of real numbers R (associated with the outcomes of the measurements). Any mapping F_M : B(R) → P(H) satisfying the properties of a measure, i.e. σ-additivity on disjoint Borel sets together with the normalization F_M(R) = 1, represents a generalized quantum measurement (a positive operator valued measure).
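The following is a minimal numerical sketch of the trace rule p_k = Tr(ρ F_k) for a single qubit, using a simple two-outcome projective measurement; the state and measurement chosen here are illustrative only and are not taken from the paper.

# Minimal NumPy sketch of the trace rule p_k = Tr(rho F_k) for a qubit.
import numpy as np

ket0 = np.array([[1.0], [0.0]])
ket1 = np.array([[0.0], [1.0]])
F = [ket0 @ ket0.conj().T, ket1 @ ket1.conj().T]   # projective effects F_k

# Check the measurement conditions: positivity (obvious here) and sum to identity.
assert np.allclose(sum(F), np.eye(2))

psi = (ket0 + ket1) / np.sqrt(2)                   # |+> state
rho = psi @ psi.conj().T                           # density operator
probs = [np.real(np.trace(rho @ Fk)) for Fk in F]  # trace rule, Eq. (1.2)
print(probs)                                       # [0.5, 0.5]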
II. KINEMATICS
When we face the problem of building some new physical theory our first step is to introduce basic objects representing our physical world – the kinematics. This can be accomplished by defining two sets S and M. The first one represents the states and the second one is associated with the measurements. Each element M ∈ M induces a probabilistic measure P_M, where P_M(A, ρ) is the probability that the outcome of the measurement M lies in the Borel set A ∈ B(R), provided that the system was prepared in the state ρ ∈ S. Here we have restricted ourselves to B(R) as all measurement outcomes are always mapped onto the set of real numbers R. An outcome of a measurement is always a real number like position or number of counts etc. This probabilistic rule is the only thing we can prove by performing our experiments. Moreover, from the mathematical point of view, this simple rule gives us limitations on both sets in a way that their mutual compatibility is guaranteed. For instance, if the set of states S is given then for any pair of states the set of all measurements M must provide a possibility for us to distinguish the two states. These two sets are usually called the kinematics of the theory. Now we need to associate mathematical objects with the abstract elements of these sets, i.e. find some mathematical realization. In quantum theory there is a very convenient choice using the concept of the Hilbert space. We would like to stress here that this may not be the only choice, but at the same time it cannot be done in an arbitrary space. The space should be rich enough and possess all features of the theory, like for example interference, uncertainty relations, etc. Therefore, let us associate a Hilbert space H with a given quantum system. The unit elements of H (denoted by Dirac's ket symbol |ψ⟩), i.e. vectors |ψ⟩ ∈ H for which ⟨ψ|ψ⟩ = 1, are elements of S and represent states of the quantum system. It is well known [5] that the set of quantum mechanical states is much larger than the set composed from vector states only. One of the ways how to introduce density matrices (mathematical objects representing generalized quantum states) is by mixing up different pure states (their preparations) together. This is how the density matrices were discovered. "Mixing" can be mathematically described by convex combinations not of the vectors, but rather of the operators representing the vector states, i.e. of projectors. More generally, mixtures can be viewed just like probability distributions defined on the set of unit vectors without any reference to operators. Let us denote such a set of distributions by D(H).
Note 1: From the operational point of view states correspond to our preparation procedures. There are many preparations that lead us to the same state. Moreover, mixing different preparations is a preparation again. The question is whether such mixtures prepare some new states, or not. In Quantum Mechanics some of these distributions are equivalent and are represented by a single density matrix. This fact is due to the given set of observables, which does not allow us to distinguish among different preparations of the same density operator. ✸ The second way how to obtain a density operator as an object representing a state uses the notion of a composite quantum system with the Hilbert space given by the tensor product H_A ⊗ H_B, where the symbols A, B denote two different physical systems. If the whole system is described by a vector state |ψ⟩ ∈ H_A ⊗ H_B then there is no guarantee that the subsystems are described by vector states as well. After performing the partial trace operation (discarding the second system) we obtain that the resulting operators possess all features of density operators. This gives us new possibilities for how to prepare density matrices. We shall call these matrices reduced density operators and the set of all density matrices will be denoted as S(H) in accordance with Eq. (1.1). (The resemblance in our notation between S and S(H) is not accidental and will be revealed later.)
A quite natural question arises: what happens if we mix together two preparations of two density matrices? Like in the case with vector states, we can associate mixtures of density matrices with the probability distributions defined on the set of states S(H). Let us denote by π = {p_j, ̺_j} such a (discrete) probability measure on the set of density operators S(H) and let K(H) be the set of all such distributions π of density matrices. As a result we obtain a new set of quantum states K(H), which from the point of view of a mathematical description contains the former set D(H). The members of S(H), the reduced density operators, now form the extremal points of K(H). They are not created by mixing different preparations, but by discarding the second system. Next we should find a tool that enables us to differentiate among these states. If we find such a tool (like it was found in the case of generalizing vector states) then we can build a new theory with a new set of states. We shall call any such theory "Extended Quantum Mechanics (EQM)" according to Ref. [6] where an interesting solution to this problem has been presented.
The set of states K(H) (as well as D(H)) is endowed with a convex structure. That is, any mixture of two probability distributions (elements of K(H)) is again a probability distribution defined on S(H). Instead of expressing a mixing of preparations by π = {p_j, ̺_j} we use the natural convex structure and write any element π ∈ K(H) as π = Σ_j p_j ̺_j, where ̺_j is now a point in the state space K(H) representing the state {p_j = 1, ̺_j}. In the formulation of the kinematics of Quantum Mechanics any mixture of density matrices, i.e. an element of K(H), is associated with a certain element of the set S(H). That is, instead of probability distributions defined on the set S(H) we can speak only about the set S(H) with its natural convex structure. For the time being we postpone the definition of the set of all possible measurements M for different types of state spaces. The rationale is that in Quantum Mechanics every observable is related to the generator of a one-parametric semi-group that is a dynamical evolution. The same can be done even in the case of a more general theory; see for instance Ref. [6].
III. DYNAMICS
Under quantum dynamics two concepts can be understood: firstly, some rule for how a quantum system evolves with time, and secondly, some set of transformations of quantum states (different from measurements) without any explicit reference to time. Of course, any state transformation takes some time, but we will not consider its duration here. This second notion determines the whole set of possible objects within which the first rule (with time) can draw a line recording the time dependence. At each time the state is transformed according to a map that belongs to the allowed set of state transformations. Let us now formalize these ideas. In general, the allowed dynamical maps form a set which contains any transformation Λ : S → S_Λ of the quantum states.
There are as yet no restrictions, such as the shape of the target set S_Λ or the linearity of Λ. In what follows, by dynamics of quantum theory we will understand the evolution without any reference to time. That is, we will investigate the general properties that the dynamics should satisfy. Now we discuss the issue of linearity [7] of Quantum Mechanics. To obtain the linearity of the evolution Λ one has to consider the following "mixing procedure". Let us assume that there is no possibility of distinguishing between two different preparations (decompositions) of any density operator by performing all possible measurements M(H) [8]. A preparator might have this information, but in quantum theory (with the state space S(H)) we believe it is useless. Let us allow a nonlinear evolution Λ and let the preparator prepare a single particle (in a state ψ_j ∈ S(H)) and evolve it according to this evolution. Such a procedure is experimentally acceptable. Let us consider two different preparations of states ψ_j and φ_k with probabilities p_j, q_k such that ∑_j p_j ψ_j = ∑_k q_k φ_k, where the equality means that the density operators are the same. Applying the (non-linear) evolution Λ, the outgoing states need not represent the same density matrix, i.e. it may happen that ∑_j p_j Λ(ψ_j) ≠ ∑_k q_k Λ(φ_k).
It means that the preparator is able to differentiate between the two mixtures, but then he must have been able to differentiate between them from the beginning [9]. Thus we should include observables that enable him to do so. In order to preserve the kinematics (the set S(H) and the set M(H)), the non-linear evolution must be forbidden. Otherwise, the above procedure enables us to distinguish preparations resulting in the same density operator ̺.
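The procedure just described can be made concrete in a toy computation (a sketch only; the non-linear map Λ(̺) = K̺K†/Tr(K̺K†) is a hypothetical choice used purely to illustrate the argument): two preparations of the same density operator remain indistinguishable under an affine evolution, but become distinguishable when a non-affine map is applied member by member.

import numpy as np

ket0 = np.array([1.0, 0.0]); ket1 = np.array([0.0, 1.0])
ketp = (ket0 + ket1) / np.sqrt(2); ketm = (ket0 - ket1) / np.sqrt(2)
proj = lambda v: np.outer(v, v.conj())
mix = lambda ens: sum(p * r for p, r in ens)

# two different preparations of one and the same density operator (the maximally mixed state)
ens1 = [(0.5, proj(ket0)), (0.5, proj(ket1))]
ens2 = [(0.5, proj(ketp)), (0.5, proj(ketm))]
print(np.allclose(mix(ens1), mix(ens2)))                  # True: the same element of S(H)

# an affine (here unitary) evolution cannot separate the two preparations
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
lin = lambda r: H @ r @ H.conj().T
print(np.allclose(mix([(p, lin(r)) for p, r in ens1]),
                  mix([(p, lin(r)) for p, r in ens2])))   # True

# a hypothetical non-linear map Lam(rho) = K rho K^dag / Tr(K rho K^dag) separates them
K = np.diag([1.0, 0.5])
Lam = lambda r: K @ r @ K.conj().T / np.trace(K @ r @ K.conj().T)
print(np.allclose(mix([(p, Lam(r)) for p, r in ens1]),
                  mix([(p, Lam(r)) for p, r in ens2])))   # False: the mixtures now differ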
Let us apply the above consideration to the Extended Quantum Mechanics with the larger set of states K(H), specifically to the elements of K(H). If we assume that this set represents all the possible quantum states, then the evolution defined as a mapping Λ : K(H) → K_Λ ⊂ K(H) must be "linear" (i.e. affine). To show this it is enough to repeat the previous discussion, only with the set K(H) instead of S(H). As a result we obtain that the evolution Λ is linear in the following sense: Λ(∑_j p_j π_j) = ∑_j p_j Λ(π_j), where π_j ∈ K(H), p_j ≥ 0 and ∑_j p_j = 1. The linearity implies that the evolution is completely determined by its action on extremal states (Dirac distributions on S(H)) associated with the members of the set S(H). Let us recall that S(H) is not a subset of K(H); they are different sets with different elements. Therefore, the question of the linearity of Λ on the set S(H) is ill defined, because in general the map Λ can transform extremal states of K(H) into mixtures in K(H). But if we assume that the evolution Λ maps extremal points into extremal points, then the definition (restriction) of Λ : S(H) → S(H) is possible and the linearity can be discussed.
Note 3: This result is nothing else but our comprehension of the notion of probability. Probabilities arise naturally in our description due to the preparation of mixtures. Notice that the set K(H) is the set of all probability distributions defined on S(H). It means that any element π ∈ K(H) can be written as {p_j, ̺_j}, where ̺_j ∈ S(H) and {p_j} is the probability distribution. Here, the elements π are understood as statistical ensembles (mixings of preparations): the state ̺_j is prepared with the probability p_j. Consequently, it must hold that Λ({p_j, ̺_j}) = {p_j, Λ(̺_j)}, since each participant of the ensemble, a system prepared in one of the states ̺_j, evolves independently of the other participants. ✸ So far we have considered only the particular cases of Quantum Mechanics and Extended Quantum Mechanics, but the above arguments can be used for any physical theory with specified kinematics. Therefore, we can formulate the following theorem. Theorem: Consider a set of states S and a set of measurements M (compatible with S), i.e. the kinematics of the theory is given. If the set S is endowed with the "convex structure", then the evolution must be linear.
In fact, any non-linear evolution leads either to a contradiction or to a new kinematics. Let us suppose that the space S is endowed with the convex structure and that an evolution Λ is non-linear. The convex structure of the space S is a consequence of the possibility of mixing preparations. In other words, for a set of elements π_j ∈ S the mixture {p_j, π_j} is an element π = ∑_j p_j π_j of the space S (see the end of the section Kinematics). The non-linearity of the evolution Λ implies that there exists at least one pair of sets of states {̺_j} ⊂ S and {ξ_i} ⊂ S with ∑_j p_j ̺_j = ∑_i q_i ξ_i such that ∑_j p_j Λ(̺_j) ≠ ∑_i q_i Λ(ξ_i). The two states ∑_j p_j ̺_j and ∑_i q_i ξ_i represent two mixing preparations, so that Λ(∑_j p_j ̺_j) = ∑_j p_j Λ(̺_j) and Λ(∑_i q_i ξ_i) = ∑_i q_i Λ(ξ_i). What is more, they represent the same point and thus transform into a single point Λ(∑_j p_j ̺_j) = Λ(∑_i q_i ξ_i), which is in contradiction with the assumption that the map Λ is non-linear, as it yields ∑_j p_j Λ(̺_j) = ∑_i q_i Λ(ξ_i). In the case when S is not convex (like for instance S = {unit vectors in H}), we can create a new set of states (D(H)), which must either be compatible with the set of observables M, or we also need to change the set of measurements M in order to preserve the mutual compatibility. In the case of Quantum Mechanics we found a solution where S = S(H) and M = M(H). In particular, we can represent pure states |ψ⟩ as one-dimensional projections P_ψ and define probability distributions on these projections. The linear structure of the space of linear operators B(H) enables us to associate these distributions {p_j, P_ψj} with the linear operators ∑_j p_j P_ψj ∈ B(H). Let us recall that such an assignment is a many-to-one map. As a result of the identification of all distributions represented by the same operator, we obtain the set of density operators S(H), which is compatible with the set of observables M(H), i.e. all POVMs.
Example 1: Next we will use the theorem to show in what sense the evolution in Classical Mechanics is linear, too. The phase space Ω plays a role analogous to the Hilbert space H (in the case of Quantum Mechanics), or to the space S(H) (in the case of Extended Quantum Mechanics). It means that the elements of the phase space Ω are extremal points (denoted by δ_ω) of the set of all classical states P(Ω), i.e. probability distributions on Ω. The same arguments as before lead to the linearity (on P(Ω)) in the following sense: any (discrete) probability distribution π(ω) = ∑_k π_k δ_{ω_k} ↔ {π_k, ω_k} must evolve with Λ : P(Ω) → P_Λ ⊂ P(Ω) according to the rule Λ(∑_k π_k δ_{ω_k}) = ∑_k π_k Λ(δ_{ω_k}). Again, in the sense of P(Ω) the evolution Λ is linear, but the transformation of the points in Ω (i.e. Λ : Ω → Ω) need not be linear. (Unlike in Quantum Mechanics, if one starts with the phase space Ω, then the new set of classical states equals the set of all probability distributions P(Ω).) ✸
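A small numerical sketch of Example 1 (the non-linear point map f is an arbitrary choice made only for illustration): the pushforward of distributions on Ω is affine even though f itself is not linear.

import numpy as np

# a non-linear point transformation of the phase space Omega = R
f = lambda w: w + 0.3 * np.sin(w)
print(np.isclose(f(2.0), 2 * f(1.0)))        # False: f is not linear on Omega

# discrete distributions pi = {pi_k, w_k} on Omega and their pushforward under f
def push(weights, points):
    return weights, f(points)

# two preparations and their mixture with weight t
w1, p1 = np.array([0.0, 1.0, 2.5]), np.array([0.2, 0.5, 0.3])
w2, p2 = np.array([0.5, 3.0]),      np.array([0.6, 0.4])
t = 0.7
w_mix = np.concatenate([w1, w2])
p_mix = np.concatenate([t * p1, (1 - t) * p2])

# expectation of an arbitrary classical observable g in a distribution
g = lambda x: x ** 2
mean = lambda p, w: np.dot(p, g(w))

# Lambda(t*pi1 + (1-t)*pi2) and t*Lambda(pi1) + (1-t)*Lambda(pi2) give the same statistics
lhs = mean(*push(p_mix, w_mix))
rhs = t * mean(*push(p1, w1)) + (1 - t) * mean(*push(p2, w2))
print(np.isclose(lhs, rhs))                  # True: the evolution is affine on P(Omega)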
IV. MEASUREMENTS
As has already been mentioned above, the kinematics of the theory can be viewed as a set of states S and a set of measurements M, where the two sets have to be mutually "compatible". Our notion of "measurements" corresponds to "measurable quantities", or "observables". Hence, they do not contain any description of the dynamics of the corresponding physical process of measurement. In the case of Quantum Mechanics the set S is the set of all density operators S(H), while M is the set of all positive operator valued measures M(H), and these two sets are "compatible". The set M contains enough elements so that we are able to differentiate between any two elements of the set S. Moreover, if we take any representation of a given state (the set S has a certain structure; see above), then no measurement can differentiate between any such representations. On the other hand, if we take the set S and ask what all the possible measurements are, i.e. all possible probability measures defined on the set S, then we find that the sought set is M.
In order to retain the larger set of states K(H) of Extended Quantum Mechanics, but still use only density operators (i.e. elements of S(H)) for our description of states, we must be able to differentiate between two different types of density matrices: genuine mixtures and elementary mixtures (for more details see section Kinematics or Ref. [6]). The elementary mixtures represent reduced density operators and genuine mixtures are associated with statistical mixtures of these reduced density operators. However, in order to discriminate between an elementary mixture (reduced density operator) ̺ and a genuine mixture {λ j , ω j } (with ω j ∈ S(H) being elementary mixtures) associated with the same density operator ̺ = j λ j ω j (decomposition is fixed), we have to introduce an observable M that is non-linear (see Eq. (2.1)) for at least one A ∈ B(R). Consequently, if we decide to deal with the set of density operators S(H), then the set M has to include non-linear observables. Let us remind the reader that quantum mechanical observables, self adjoint operators, are in Quantum Mechanics identified with the generators of a dynamical evolution. Once we allow non-linear evolution then we have to include observables that are non-linear and vice versa. But let us stress here that the evolution of genuine mixtures (elements of K(H)) is always linear while elementary mixtures (elements of S(H)) can evolve according to nonlinear maps.
V. NO-SIGNALING CONDITION
The impossibility of transmitting information faster than the propagation of light seems to be an interesting problem in the context of quantum theory. Undoubtedly, in quantum theory there is no dynamical restriction on how fast particles can move. However, no-signaling does not deal with the dynamical properties of the theory, but rather with kinematic ones, namely with the possibility of using the projection postulate of quantum measurements for information transfer [10]. Naturally, only a theory with the projection postulate (or, more generally, any postulate of a similar type) can be questioned. Therefore, the following discussion will be focused only on two specific theories: Quantum Mechanics and Extended Quantum Mechanics (for definitions see the sections Kinematics and Measurements).
Let us consider a projective measurement M represented by the set of projective operators {F_k} (for more details see the Introduction and Ref. [5]) and let a quantum system be prepared in a state ̺. The projection postulate states: after performing a projective measurement M (in nondemolition experiments) resulting in the observation of the value λ_k, the system is described by the state ̺_k = F_k ̺ F_k / Tr(̺ F_k) (Eq. (5.1)). The postulate is an independent postulate of Quantum Mechanics and has been introduced on account of the following: when we repeat the same measurement (on the same object) right after the first one, then the results of the two consecutive measurements are always the same. After many repetitions of the same measurement on systems all prepared in the state ̺ (i.e. following the same preparation process), the final ensemble will be described by a mixture of the states ̺_k associated with the different outcomes, i.e. ̺_f = {p_k, ̺_k} = ∑_k p_k ̺_k with p_k = Tr ̺F_k. As a result we have that measurements (without postselection) prepare systems in mixtures. That is, for anybody who does not have access to the observed values, the outcome of the measurement is described by this mixture ̺_f. Moreover, another measurement M (with corre- Example 2: Let us consider a pair of two-dimensional quantum systems (qubits), denoted as A and B, prepared in the state |ψ⟩ = (|0⟩_A ⊗ |1⟩_B + |1⟩_A ⊗ |0⟩_B)/√2. To demonstrate the projection postulate we will consider a specific measurement M represented by the two operators F_0 = 1_A ⊗ |0⟩_B⟨0| and F_1 = 1_A ⊗ |1⟩_B⟨1|. It is easy to see that the measurement M is actually a projective measurement acting on the system B only, with the two projectors |0⟩_B⟨0| and |1⟩_B⟨1|. After obtaining the first outcome the bipartite system is, according to Eq. (5.1), in the state ̺_0 = |1⟩_A⟨1| ⊗ |0⟩_B⟨0|, and when the second outcome is measured the system is in the state ̺_1 = |0⟩_A⟨0| ⊗ |1⟩_B⟨1|. Thus, by measuring the bipartite system we prepare a mixture of the states ̺_0 and ̺_1 with equal probabilities, as Tr|ψ⟩⟨ψ|F_0 = Tr|ψ⟩⟨ψ|F_1 = 1/2. What is more interesting is the fact that by measuring the subsystem B the state of the subsystem A changes as well. For instance, if the eigenvalue λ_0 is measured, then the state of the bipartite system is ̺_0 and the state of the subsystem A is |1⟩_A⟨1|, which is different from its original state Tr_B|ψ⟩⟨ψ| = (1/2) 1_A. The fact that a measurement performed on the system B can change the description of the state of the system A, no matter how far from each other they are, is certainly a peculiar property. What is more, this change is considered to be instantaneous. Therefore, it is correct to ask whether such a property could not be used for signaling, i.e. for transmission of information at speeds larger than the speed of light.
In general, consider a bipartite system where A and B are the corresponding parts and let the system be in a state ̺. Any projective measurement M performed on the system B can be represented by operators F_k = 1_A ⊗ P_k, where the P_k are operators (projectors) acting on the system B only. After measuring M, the system A is (according to Eq. (5.1)) in one of the states Tr_B ̺_k with probability p_k. Due to the fact that the only thing we can predict are the probabilities of the individual outcomes of a given measurement, but not which of them is observed in a single event, it follows that by measuring the system B we prepare the system A in the mixture ̺_A^M = {p_k, Tr_B ̺_k}. (Let us note here that the observer possessing the system A does not know the results of the measurements performed on the system B.) For different measurements M the mixture ̺_A^M can be different [11]. However, if we express ̺_A^M as a density operator and use Eq. (5.1), then it is easy to see that ∑_k p_k Tr_B ̺_k = Tr_B ̺ = ̺_A. It means that the resulting state of the system A is described by the original density operator ̺_A; only its decomposition is different.
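This can be checked in a short numerical sketch (the entangled state and the two measurement bases below are chosen only for illustration): different measurements on B prepare different ensembles for A, yet all of them average to the same reduced density operator Tr_B ̺.

import numpy as np

ket0 = np.array([1.0, 0.0]); ket1 = np.array([0.0, 1.0])
proj = lambda v: np.outer(v, v.conj())

# entangled two-qubit state |psi> = (|0>_A|1>_B + |1>_A|0>_B)/sqrt(2)
psi = (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def trace_B(r):
    return np.einsum('ikjk->ij', r.reshape(2, 2, 2, 2))

def ensemble_for_A(rho, B_basis):
    """Projective measurement of B in the given basis; returns the ensemble {p_k, Tr_B rho_k}."""
    out = []
    for b in B_basis:
        F = np.kron(np.eye(2), proj(b))          # F_k = 1_A (x) P_k
        p = np.trace(rho @ F).real
        rho_k = F @ rho @ F / p                  # projection postulate, Eq. (5.1)
        out.append((p, trace_B(rho_k)))
    return out

z_basis = [ket0, ket1]
x_basis = [(ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)]

ens_z = ensemble_for_A(rho, z_basis)             # A is left in |1> or |0>
ens_x = ensemble_for_A(rho, x_basis)             # A is left in |+> or |-> : a different ensemble

avg = lambda ens: sum(p * r for p, r in ens)
print(np.allclose(avg(ens_z), trace_B(rho)))     # True
print(np.allclose(avg(ens_x), trace_B(rho)))     # True: the same density operator rho_A = Id/2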
As we have shown, by using different measurements on the first part of a bipartite system we can prepare different realizations (mixtures) of a given density operator of the second part. It means that the only information we can "signal" using this procedure is the information on the particular realization of a given density operator. But the kinematics of Quantum Mechanics is such that two different realizations of a given density operator represent the same state (see the section Kinematics), and there is no measurement that the owner of the second system could use to distinguish the two preparations. In other words, within the standard quantum state space S(H) two different realizations of ̺_A (two different statistical mixtures, or a statistical mixture and the reduced density operator) represent the same point. Therefore, no-signaling holds and follows from the kinematic properties of the set S(H).
In the context of the Extended Quantum Mechanics with the state space K(H) the situation is different. The projection postulate corresponds to the projection onto a Dirac distribution δ ψ = {1, P ψ }, where P ψ is a projector associated with the vector state |ψ ∈ H. If we apply this postulate onto bi-partite systems, then we are able to prepare two different elements of K(H) from a spatially distant place in the Universe (see Eq. (5.2) and discussion above). Let us remind the reader that in the theory with the state space K(H) two different decompositions of a given density operator represent different points in K(H). Therefore, the situation is different from that in Quantum Mechanics. Due to the experimental possibility to discriminate two decompositions of a given density operator, our information transfer (based on this property) will be as fast as we are able to distinguish the two prepared states (the projection is considered to be instantaneous). We can always place the second system far enough from the first one to violate the second principle of relativity, i.e. we will be able to signal at a speed greater than the speed of light.
In conclusion, in the Extended Quantum Theory with the projection postulate the no-signaling condition does not hold [12]. On the other hand, in Quantum Mechanics no-signaling holds and follows from the kinematics of Quantum Mechanics. We have also shown (see section Dynamics) that the linearity follows from the kinematic properties, too. Let us stress here that this result is independent of the no-signaling condition, which, in the particular case of the kinematics of Quantum Mechanics, is therefore redundant. Finally, even though Extended Quantum Mechanics together with the projection postulate is not compatible with the no-signaling condition, we cannot exclude all non-linear theories (for instance those not using the projection postulate), inasmuch as we have not considered the most general case. | 2019-04-14T03:18:57.267Z | 2002-11-22T00:00:00.000 | {
"year": 2002,
"sha1": "5cfc23dbb0f2813290cac719f0a1100f26018d8f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "5cfc23dbb0f2813290cac719f0a1100f26018d8f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
15278826 | pes2o/s2orc | v3-fos-license | Continuity of the radius of convergence of differential equations on $p$-adic analytic curves
This paper deals with connections on $p$-adic analytic curves, in the sense of Berkovich. The curves must be compact but the connections are allowed to have a finite number of meromorphic singularities on them. For any choice of a semistable formal model of the curve, we define an intrinsic notion of normalized radius of convergence as a function on the curve, with values in $(0,1]$. For a sufficiently refined choice of the semistable model, we prove continuity and logarithmic concavity of that function. We characterize \emph{Robba connections}, that is connections whose sheaf of solutions is constant on any open disk contained in the curve.
0 Introduction
0.1 Radius of convergence of p-adic differential equations
It is well-known that any system of linear differential equations with complex analytic coefficients on a complex open disk admits a full set of solutions convergent on the whole disk. It is also well known that the same fact does not hold over a non-archimedean field k that contains the field of p-adic numbers. For example, solutions of the equation dy/dT = y around a k-rational point only converge on a disk of radius |p|^{1/(p-1)} < 1 around that point. The question is actually of interest over any non-archimedean field k of characteristic zero (we do not exclude the case of a trivially valued k). What is the behaviour of the radius of convergence of a system of differential equations with k-analytic coefficients as a function on points of the k-analytic affine line? Of course, one has to give a precise meaning to this question, since points of k-analytic spaces are not necessarily k-rational. We recall in fact [5, 1.2.2] that to any point x of a k-analytic space X one associates a completely valued extension field H(x) of k, called the residue field at x; the point x is k-rational (resp. rigid) iff H(x) = k (resp. [H(x) : k] is finite); X(k) (resp. X_0) denotes the subset of k-rational (resp. rigid) points of X. On a smooth X, k-rational points admit a fundamental system of open neighborhoods which are open polydisks, but this is not the case for general points, not even using the étale topology [10, Corollary 2.3.3].
Let X be a relatively compact analytic domain in the k-analytic affine line A^1 = A^1_k, and suppose we are given a system Σ of the form (0.1.0.1) with coefficients analytic on X. For a general point x ∈ X we consider the canonical point x′ of X_{H(x)} := X ⊗_k H(x) above x, and we define R(x) as the number R(x′) for the system of equations on X_{H(x)} induced by Σ. A more precise formulation of the above question is as follows. What is the behavior of the function X → R_{>0}, x → R(x)? For example, is it continuous? The latter question is not trivial, since a precise formula for the radius of convergence involves the infimum limit of an infinite number of continuous real-valued functions on X (cf. §3).
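For orientation, we recall here, as a sketch only (the precise statement used in this paper is the one discussed in §3), the classical Dwork-Robba expression for the radius of convergence at a point x: if the matrices G_s are defined by G_1 = G and G_{s+1} = G_s' + G_s G, so that any solution vector y of (0.1.0.1) satisfies y^{(s)} = G_s y, then
\[
R(x)\;=\;\liminf_{s\to\infty}\Bigl(\frac{|G_s(x)|}{|s!|}\Bigr)^{-1/s},
\]
where |G_s(x)| denotes the maximum of the absolute values at x of the entries of G_s, and where one may further truncate by the distance of x from the complement of X, as in the expression inf(r, R(t_{0,r})) recalled below. It is this infimum limit over infinitely many continuous functions of x that makes continuity a non-trivial issue.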
A problem of the above type was considered for the first time in the p-adic case in a paper by Christol and Dwork (in a slightly different setting). Namely, let X be an affinoid domain in A 1 that contains the open annulus Then there is a continuous embedding [r 1 , r 2 ] ֒→ X, r → t 0,r , where t 0,r is the maximal point of the closed disk of radius r with center at zero. Christol and Dwork proved that, for any system of differential equations Σ on X, the function [r 1 , r 2 ] → R >0 , r → inf(r, R(t 0,r )) is continuous [19,Théorème 2.5].
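As a toy illustration (a sketch, not taken from the paper), consider the single equation y' = y: here G_s ≡ 1, so the Dwork-Robba expression reduces to |s!|^{1/s}, and Legendre's formula v_p(s!) = ∑_i ⌊s/p^i⌋ gives R(t_{0,r}) = |p|^{1/(p-1)} for every r, so that r → inf(r, R(t_{0,r})) is visibly continuous in this case. The limit can be checked numerically:

from fractions import Fraction

def v_p_factorial(s, p):
    """p-adic valuation of s! by Legendre's formula: sum over i of floor(s / p**i)."""
    v, q = 0, p
    while q <= s:
        v += s // q
        q *= p
    return v

p = 5
# for y' = y the term (|G_s|/|s!|)^(-1/s) equals |s!|^(1/s) = p^(-v_p(s!)/s),
# and v_p(s!)/s tends to 1/(p-1), so the radius is |p|^(1/(p-1)) = p^(-1/(p-1))
for s in (10, 100, 1000, 10000):
    print(s, float(Fraction(v_p_factorial(s, p), s)))   # approaches 1/(p-1) = 0.25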
The following example, which we learned from Christol, should convince the reader that the variation of the radius of convergence presents, in general, some non obvious features.
In the region |t| = p^δ we do not get from (0.1.1.4) a complete description of R(t). The transfer theorem ([23, IV.5] or (5.1.2) below) implies that R(t) ≥ p^β, and the main result of this paper, i.e. continuity of t → R(t), together with proposition 6.0.9, imply R(t) = p^β in almost all residue classes D = D(c, (p^δ)^−) with |c| = p^δ. This is because, by formula 6.0.9.1, if R(x) > p^β in D, then in fact R(x) ≥ p^{β+δ}/|x − c| all over D. Then, if x ∈ D, R(x) < p^{β+ε}, for ε > 0, can only take place in A_{D,ε} := {x ∈ D | |x − c| ≥ p^{δ−ε}}. But a union ∪_D A_{D,ε} contains a neighborhood of t_{0,δ} in D(c, (p^δ)^+) only if D runs over a finite set of classes.
By a closer examination of (0.1.1.7) Christol shows that there are p − 1 exceptional open disks of radius p δ , centered in the zeros c of the polynomial −1 − b + pc p−1 = 0, where R(t) is strictly bigger than p β . Notice that |c| = p δ for such zeros. In such an exceptional class, one obtains a priori (1−δ(p−2)) < |t − c| < p δ .
We point out that (0.1.1.10) p^β < p^{(1−δ(p−2))/2} < p^{1/p}, so that the value of R(t) in the exceptional classes is bigger than the value at their boundary, as expected. Now, for |t − c| = p The previous calculations give a complete description of the function R(t, Σ) on A^1. The conclusion is summarized in Fig. 4, where the graph Γ ⊂ A^1 has the property that, for any t ∈ A^1(k), R(t, Σ) is constant in any open disk in A^1 \ Γ containing t. Such a Γ is said to control the system Σ. In a subsequent paper [4] we will show that every connection on a curve is controlled by a finite graph in a much stronger sense.
A particular case of our main result states that for any relatively compact analytic domain X in A 1 the function x → R(x) is continuous on the whole space X. Our result in fact establishes the property of continuity for more general analytic curves. (Recall that in comparison with the complex analytic case a smooth k-analytic space is not necessarily locally isomorphic to an affine space.) To formulate the result, we first define a so-called "normalized" radius of convergence R(x), which is related to the function R(x) via a simple formula but does not depend on the embedding of the analytic domain X into A 1 . After that we explain how the function R(x) can be defined in a more general setting on onedimensional smooth 1 affinoids, which are not necessarily analytic subdomains of A 1 , and even in arbitrary dimensions. Let X be an affinoid domain in A 1 k , and assume at first that X is strict and that k is non-trivially valued and algebraically closed. Recall that the set of all points of X which have no open neighborhood isomorphic to an open disk, is a closed subset of X, called the analytic skeleton of X and denoted by S(X) [5, §4]. The complement X \ S(X) is a disjoint union of open sets D y (X), where y ∈ D y (X) is a k-rational point, isomorphic to the standard open unit disk D k (0, 1 − ) = {x ∈ A 1 k | |T (x)| < 1 } , via a normalized coordinate T y : D y (X) ∼ − − → D k (0, 1 − ) such that T y (y) = 0. Furthermore, each of the above open disks has a unique boundary point at S(X), and so there is a canonical continuous retraction map τ X : X → S(X). The topological space S(X) carries a canonical structure of a finite polygon S(X) = (S(X), V(X), E(X)) whose set of vertices V(X) consists of the points which have no neighborhood isomorphic to an open annulus: they are all of type (2) in the sense of [5, 1.4.4]. The set of open edges E(X) of S(X) is formed by the connected components of S(X) \ V(X); an open edge E = E(u, v) connects precisely two vertices u, v and E ∪ {u, v} is a closed subset of X, canonically homeomorphic to the closed interval [r, 1] for a well defined r ∈ |k| ∩ (0, 1), with u → 1, v → r. All points of the Shilov boundary Γ(X) of X are among the vertices of S(X). For x ∈ S(X), the set τ −1 X (x) \ {x} is a disjoint union of maximal open disks D y (X) with boundary point x in X, all of the same radius r(x) with respect to the standard coordinate in A 1 . Then, r(x) is the radius of the point x ∈ A 1 defined on p. 78 of [5] and the function r : If k ′ is an algebraically closed non-archimedean field over k, then S(X ⊗k ′ ) ∼ − − → S(X) under the natural projection. In the general case (with k possibly trivially valued), we define S(X) as the image in X of S(X ⊗k ′ ), for any non-trivially valued algebraically closed non-archimedean extension field k ′ /k such that X ⊗k ′ is strict, and similarly for open edges and vertices. We have a canonical continuous retraction τ X : X → S(X) induced by the similar retraction on X ⊗k ′ . We call the triple S(X) = (S(X), V(X), E(X)), a generalized subpolygon of X.
Suppose we are given a system of differential equations on the affinoid domain X ⊂ A 1 with analytic coefficients, i.e. a free O X -Module of finite rank F with a connection ∇. If k is algebraically closed, X is strict, and x ∈ X(k) is a k-rational point of X, we define the normalized radius of convergence R(x) = R X (x, (F , ∇)) as the radius of convergence of F ∇ around x in the corresponding maximal open disk neighborhood D x (X) ∼ = D(0, 1 − ). If k, X and x are arbitrary, we consider an algebraically closed non-archimedean field k ′ over k such that X ⊗ k k ′ is strict and so large that H (x) admits an isometric k-embedding ϕ in k ′ . Then, there is a canonical point x ′ ∈ X ⊗ k k ′ and we set R(x) = R(x ′ ), a quantity independent of the choice of k ′ and of ϕ. The function R : X → R >0 does not depend on the embedding of X in A 1 , either, and one has R(x) = R(x) · r(τ X (x)) for all x ∈ X. The function r(τ X (x)) = δ P 1 (x, X) was called the diameter of X at x (with respect to the embedding X ֒→ A 1 ֒→ P 1 = ( P 1 ) η ) in [2]; its continuity (at least under the present assumptions) was proven in loc.cit. [3.3]. By definition, the function R(x) is preserved by extensions of the ground field, and so it suffices to study its behavior in the case when X is strictly k-affinoid.
We now notice that, independently of the characteristic, if k is algebraically closed and non-trivially valued, a strictly k-affinoid, as well as a projective, rig-smooth curve X is the generic fiber X η of a strictly semistable 2 formal scheme X over k • . We say that X is a strictly semistable model of X.
We will assume, for simplicity, that k is non-trivially valued and algebraically closed. To any semistable formal scheme X of relative dimension one over k • , one associates (cf. [5,Chap. IV], modulo the equivalence between such formal schemes and formal coverings of smooth compact curves with semistable reduction), a triple of the previous type S(X) = (S(X), V(X), E(X)) supported on a closed subset S(X) ⊂ X := X η , called the skeleton of X. Unless X is strictly semistable (1.1.4), the subpolygon S(X) may have loops, i.e. open edges with a single boundary vertex in X.
For X and Y semistable models of X, we set X ≤ Y if there exists a morphism Y → X inducing the identity on the generic fiber (identified with X for both). Then, if X ≤ Y, S(X) is a "subpolygon" of S(Y), in the sense that S(X) ⊂ S(Y), V(X) ⊂ V(Y), and an open edge of S(X) is a union of vertices and open edges of S(Y). There is a natural continuous retraction τ X,Y : S(Y) → S(X). Now, any strictly k-affinoid curve X, as much as any projective smooth curve which is neither rational nor a Tate curve, admits a minimum semistable model X 0 and the skeleton S(X) of X coincides with the skeleton S(X 0 ) of the formal scheme X 0 3 . Moreover, as a topological space, Z run over a cofinal system of semistable models of X, is a quasi-polyhedron (cf. [5,IV.1]) and, if the minimum semistable model X 0 of X exists, the retraction τ X : X → S(X) is the natural projection to S(X 0 ) = S(X).
A semistable formal scheme X over k • is an example of a nondegenerate polystable formal scheme over k • (see [8]). In the case of dimension one over k • , the class of such formal schemes coincides with the class of semistable formal schemes. Recall [8, 5.2] that every nondegenerate polystable formal scheme X has a skeleton S(X) ⊂ X η and a retraction map τ X : X η → S(X) which are preserved under any ground field extension functor. Furthermore, if x is a k-rational point of X η , then there is a well-defined maximum open neighborhood D X (x, 1 − ) of x in X η \ S(X), which is isomorphic to the standard open unit polydisk D k (0, 1 − ) (with x → 0), so that the open or closed polydisks D X (x, r ± ), r < 1, with center at any k-rational point of X η are well-defined (we are talking about polydisks with equal radii). (Footnote 2: When k • is not noetherian, this definition is not completely standard, cf. (1.1.4) below. Footnote 3: For a rational curve X, S(X) is empty, while for any minimal semistable model X of X, S(X) = (η, {η}, ∅), for a point η ∈ X with H (η) purely transcendental of degree one over k. For a Tate curve X, a choice of η ∈ S(X) with H (η) of transcendence degree one over k determines a homeomorphism S(X))
Let k be any non-archimedean field. All k-analytic spaces considered in this paper are supposed to be separated. We generalize a definition of [10, 1.1], as follows.
Definition 0.1.2. A k-analytic space X is said to be rig-smooth (resp. of pure dimension n) if, for any non-archimedean field k ′ over k, and any connected strictly affinoid domain Assume now k is of characteristic zero, let X be a rig-smooth k-analytic space, and let X G be the associated G-analytic space. We denote by MIC(X/k) the category of pairs consisting of a locally free O XG -Module F of finite type and of an integrable k-linear connection ∇ on F , with the usual (horizontal) morphisms. Notice that, unlike in the classical case, for an object (F , ∇) of MIC(X/k), the abelian sheaf for the G-topology of X is not in general locally constant.
Definition 0.1.3. Let k be a non-archimedean field of characteristic zero, and X be a rig-smooth k-analytic space. We say that an object (F , ∇) of MIC(X/k) is a Robba connection on X if for any algebraically closed complete extension k ′ of k, and any open polydisk D contained in X ⊗ k ′, the sheaf of solutions of the induced connection is constant on D. We denote by MIC Robba (X/k) the full abelian subcategory of MIC(X/k), consisting of Robba connections.
Example 0.1.4. Let C be the complex field, equipped with the trivial valuation. On the C-analytic space A 1 C , we consider a system of linear differential equations Σ of the form (0.1.0.1), with G a matrix of rational functions in C(T ). To analyze the singularity of Σ at T = 0, we consider the open analytic domain Notice that X = S(X) ∼ = (0, 1) via ρ → t 0,ρ . The field H (t 0,ρ ) is the non-archimedean field over C, K ρ = C((T )), equipped with the absolute value || || ρ , such that ||0|| ρ = 0 and || n a n T n || ρ = ρ m , if a n = 0, for n < m and a m = 0. Let X ′ := X ⊗K ρ , and let t ′ 0,ρ be the canonical K ρ -rational point in X ′ above t 0,ρ . Then, writing t = T (t 0,ρ ), the maximal open , and the fact that Σ is a Robba connection is equivalent to the property that the singularity at T = 0 is regular. In fact, by the formal Turrittin theory [1], it is enough to check this statement for the rank one differential equations In the former case the solution at t ′ 0,ρ is y = In the latter case a simple calculation shows that the solution at t ′ 0,ρ is a power series , having the same radius of convergence as the series that is converging for | T − t t | < |t| N . We recall from loc.cit. that N is the Poincaré-Katz rank of irregularity of the corresponding differential equation at T = 0.
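The two rank-one computations alluded to in the example can be sketched as follows (an illustration under the stated assumptions, taking for instance the regular equation y' = (α/T)y and the irregular equation y' = −(Nc/T^{N+1})y, and recalling that every non-zero constant of K_ρ has norm 1, so that binomial coefficients and factorials are of norm one). For the regular equation, the solution normalized at t'_{0,ρ} is
\[
y \;=\; (T/t)^{\alpha}\;=\;\sum_{n\ge 0}\binom{\alpha}{n}\Bigl(\frac{T-t}{t}\Bigr)^{n},
\]
which converges for |T − t| < |t| = ρ, i.e. on the whole maximal open disk, so that the normalized radius is 1 and the Robba property holds. For the irregular equation, whose solution is exp(c T^{−N}), the solution normalized at t'_{0,ρ} is
\[
y \;=\; \exp\Bigl(\frac{c}{T^{N}}-\frac{c}{t^{N}}\Bigr)\;=\;\sum_{n\ge 0}\frac{1}{n!}\Bigl(\frac{c\,(t^{N}-T^{N})}{T^{N}t^{N}}\Bigr)^{n},
\]
and since |t^N − T^N| = |T − t| · |t|^{N−1} for |T − t| < |t|, it converges exactly for |T − t|/|t| < |t|^N; the normalized radius is then ρ^N < 1 and the Robba property fails, consistently with the Poincaré-Katz rank N.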
Notice that if (F , ∇) is any connection, the natural map In particular, if F ∇ is locally constant, then it is locally isomorphic to the locally constant sheaf k rkF . Assume now that the k-analytic space X is isomorphic to the generic fiber X η of a nondegenerate polystable formal scheme X over k • . For any object (F , ∇) of MIC(X/k), and for a k-rational point x ∈ X η , we may define the (X-normalized ) radius of convergence is then locally constant for the usual topology 4 , and is therefore constant, since an open disk is simply connected for the usual topology. If x is arbitrary, we set Let ϕ : Y → X be anétale morphism of nondegenerate polystable formal schemes over k • , and let ϕ η : Y → X be the generic fiber of ϕ. For any object (F , ∇) of MIC(X/k), ϕ * η (F , ∇) is an object of MIC(Y /k) and for any point y ∈ Y we have Under the same assumptions, if k ′ /k is any extension of non-archimedean fields, the k ′ •formal scheme X ′ := X ⊗k ′ • is nondegenerate polystable, and, ψ : X ′ → X is the natural projection Conjecture 0.1.5. Let X = X η be the generic fiber X η of a nondegenerate polystable formal scheme X over k • and (F , ∇) be an object of MIC(X/k). The function X → R >0 , x → R X (x, (F , ∇)) is continuous.
Notice that, by formula 0.1.4.2, the conjecture holds if it holds under the restrictive condition that X is strictly polystable over k • (base change to a finite extension field k ′ /k is irrelevant). Let MIC X (X/k) be the full subcategory of MIC(X/k) consisting of the objects (F , ∇), such that F is the O X -Module associated to some locally free O X -Module of finite type F, called a formal model of F over X. In case F admits a formal model F, the calculation of R(x) may be performed as follows. We pick an open affine neighborhood Y of the specialization of is the maximum common radius of convergence of all sections of F ∇ in D Y (x, 1 − ) around x, when expressed in terms of a basis of global sections of F on Y. In dimension one, this amounts to solving a system of differential equations as in (0.1.0.1), over an affinoid domain in a smooth projective curve.
Conjecture 0.1.6. Let X = X η be the generic fiber X η of a nondegenerate polystable formal scheme X over k • and (F , ∇) be an object of MIC X (X/k). Then (F , ∇) is a Robba connection on X if and only if the function x → R X (x, (F , ∇)) is identically 1 on X.
And here is our main result.
Theorem 0.1.7. Let k be a non-archimedean field extension of Q p and X be a rig-smooth compact strictly k-analytic curve. Conjectures 0.1.5 and 0.1.6 hold for objects of MIC X (X/k), where X is any semistable model of X.
Surprisingly enough, before our joint paper with Di Vizio [2], conjecture 0.1.5 seemed to be open even in the case when X = Spf k • {T }, hence X η = X is the closed unit disk D k (0, 1 + ), a case extensively discussed in the literature (cf. [29], [23] and [20] for reference, and [32, 10.4.3] for a partial result in our direction). A direct proof, in the case of an affinoid domain X of A 1 , was given in [2, 5.3]. Notice that, always by (0. 1.4.2), it suffices to prove theorem 0.1.7 in the special case when X is either a "formal disk" or a "formal annulus", i.e. is affine connected and admits a dominantétale morphism ϕ to Spf k • {X} or to Spf k • {X, Y }/(XY − a), for a ∈ k •• \ {0}, respectively. The previously mentioned result of [2], implies the theorem when ϕ is an open immersion. In the general case however, we must appeal to the more powerful statement in section 2 below.
Our detailed description of the function x → R X (x, (F , ∇)) will in fact show the following Theorem 0.1.8. Let k, X and X be as in theorem 0.1.7, and let (F , ∇) be an object of MIC X (X/k). If R X (ξ, (F , ∇)) = 1 at any vertex ξ of the skeleton S(X) of X, then x → R X (x, (F , ∇)) is identically 1 on X. Moreover, (F , ∇) is a Robba connection on X.
Definition 0.1.9. Let k, X be as in 0.1.7 and let X be a semistable formal model of X. An object (F , ∇) of MIC(X/k) is said to be X-convergent if it is an object of MIC X (X/k) and R X (x, (F , ∇)) = 1 identically on X. We denote by MIC X−conv (X/k) the full subcategory of MIC(X/k) consisting of X-convergent objects.
Let ϕ : Y → X be an étale morphism of semistable k • -formal schemes and let ϕ η : Y → X be its generic fiber. It follows from formula 0.
Corollary 0.1.10. The category MIC Robba (X/k) is the 2-colimit category of its full subcategories MIC X−conv (X/k), where X runs over the semistable models of X.
Notice that, despite the previous result, the introduction of formal models is necessary for several reasons: 1. The analytic definition of D x (X) given above for x a k-rational point of the smooth affinoid X, as the maximum open neighborhood of x in X isomorphic to the polydisk D(0, 1 − ), does not work in higher dimensions, where such a maximum neighborhood need not exist.
2. Even in the case of an affinoid X ⊂ A 1 , and x ∈ X(k), the above analytic definition of D x (X), does not globalize well. If X = X 1 ∪ X 2 is a union of two affinoids, and x is a k-rational point of X 1 , one can only say that D x (X 1 ) and D x (X 2 ) are contained in D x (X). The assignment of a semistable model X of X, remedies to this problem, i.e. D X1 (x, 1 − ) = D X (x, 1 − ), provided the decomposition X = X 1 ∪ X 2 originates from a Zariski open covering X = X 1 ∪ X 2 of formal schemes. Notice that if X is the analytic projective line, there is no maximum disk containing a given point.
3. According to conjecture 0.1.6, it seems likely that consideration of a formal model X of X should be of lesser importance in the study of the directional logarithmic derivative of R(x) along any tangential vector to X. In other words, a generalized theory of slopes in the style of [18] should only depend upon the connection on the analytic curve X.
We finally propose, for future use, a more general definition, where we consider various possible topologies on categories of analytic spaces over X. We have in mind, in particular, the natural (Berkovich) topology on X, the G-topology on X described in [6, 1.3], the étale topology of [6, 4.1], the quasi-étale topology of [7, §3].
Basic definitions and description of contents
The organization of the paper is as follows. In section 1 we review the structure of a rig-smooth compact connected strictly k-analytic curve X over a non-archimedean field k with non-trivial valuation and the parallel theory of semistable reduction of k-algebraic curves. For simplicity, we assume that k is algebraically closed. Most, but not all, of these results may be extracted from [5, Chap. IV] or [13]. We describe systematically the equivalence between the notions of semistable model of X, of formal affinoid covering of X with semistable reduction, of semistable partition of X (1.5), and of complete subpolygon of X (1.6). A well-known consequence of these equivalences is that all results on semistable reduction of curves admit in principle both an algebraic and an analytic proof.
The algebraic theory of semistable models of families of algebraic curves [22] has recently made important progress thanks to Temkin [33], who has eliminated in particular the properness assumption, and has clarified minimality of the stable model. It should be possible to apply Temkin's method to completely describe the reticular structure of the category F S(X) of semistable models of X, and in particular to showing that, unless X is either rational or a Tate curve, a minimum semistable model X 0 of X exists, and its skeleton coincides with the analytically defined skeleton of X. We have not seriously attempted to follow this algebraic strategy. Instead, we approach the problem via an analytic discussion of intersections and unions of disks and annuli in the style of [13,Prop. 5.4]: we content ourselves with a precise account of the proofs. Details, which would lead us astray of the main path, have been collected in [3].
Over an algebraically closed field k, the notions of open or closed disk or annulus, present no difficulty 5 . Open or closed disks or annuli in A 1 k are called k-rational, if their center in a k-rational point and their radii belong to |k|. Similarly, for open or closed disks or annuli in general. A punctured disk is a disk punctured at a rigid point, and it is k-rational, if the full disk is so.
Open annuli, viewed as simply connected quasi-polyhedra, have two endpoints. An open segment (resp. open half-line) in X is the skeleton of an open annulus (resp. of an open punctured disk) in X: the annulus (resp. the punctured disk) is then uniquely defined. An open segment may have one or two boundary points called ends in the compact curve X; they are points of type (2) or (3). The segment is k-rational iff the corresponding annulus is, i.e. if and only if its ends are points of type (2). In general, an open segment E is a locally closed subset of X and it admits a continuous (1 : 1) parametrization ρ → η ρ by an open interval (r, 1) ⊂ (0, 1) (resp. [r, 1) ⊂ (0, 1)), with r the height of the annulus, which is canonical up to the inversion ρ → r/ρ. Similarly, an open half-line is canonically parametrized by (0, 1), where η ρ tends to a rigid point as ρ → 0. A half-line is k-rational iff the corresponding punctured disk is.
We define the category GP(X) (resp. GP c (X)) of (resp. complete) subpolygons Γ = (|Γ|, V Γ , E Γ ) of X. Here |Γ| is a closed connected subset of X, V Γ is a finite subset of points of type (2) of X ("vertices" of Γ) and E Γ is a finite set of open k-rational segments of X ("open edges" of Γ), such that |Γ| is the disjoint union of all vertices and open edges of Γ.
A subpolygon Γ of X is "complete" if X \ |Γ| is a union of open disks. The category GP c (X) is really a directed partially ordered set, where Γ ≤ Γ ′ if |Γ| ⊂ |Γ ′ |, V Γ ⊂ V Γ ′ , so that an open edge of Γ is a union of open edges and vertices of Γ ′ . We prove (more details in [3]) that GP c (X) is isomorphic to the category F S(X) of semistable formal models of X and morphisms inducing the identity on the generic fiber (they turn out to be precisely the "admissible blow-ups" of [15], [11]). The isomorphism S : F S(X) ∼ − − → GP c (X) is the skeleton functor. Unless X is a rational projective curve or a Tate curve, the category F S(X) admits a minimum X 0 . The skeleton S(X 0 ) = (S(X 0 ), V(X 0 ), E(X 0 )) is the skeleton S(X) = (S(X), V(X), E(X)) of the analytic curve X, in the sense of the previous subsection.
We define below cf. (1.4.1) a "basic formal disk" (resp. a "basic formal annulus") (B, T ) (resp. or (B, S = a/T ), for a ∈ k •• \{0}). We write (B, T ) for the corresponding affinoid disk (resp. annulus). The coordinate T in B defines, for any z ∈ B(k), a normalized coordinate The main point for us is the fact that any semistable model X of the curve X admits ań etale covering by formal disks or annuli. This allows us to recover and globalize most results of the classical non-archimedean theory of linear differential systems [23]. In section 2 we prove a general criterion due to Berkovich (private communication) to test whether a function X → R >0 , on a smooth k-affinoid curve X is continuous: it represents an abstract and more efficient version of the method of proof used in [2, 5.3]. In section 3, we start assuming that k has characteristic zero. We pick a semistable model X of X and introduce the function "normalized radius of convergence at x" R X (x, (F , ∇)), for an object (F , ∇) of MIC(X/k). Actually, in view of the most common applications, we also consider the following more general situation: X is the complement of a finite reduced divisor Z = {z 1 , . . . , z r } in a compact rig-smooth connected k-analytic curve X. So, if X = M(A ), z 1 , . . . , z r correspond to r distinct maximal ideals of the k-affinoid algebra A . We assume that X is the generic fiber of a semistable k • -formal scheme X. It is well known that Z determines a finite flat genericallyétale closed reduced subscheme Z of X. We will assume that Z isétale over k • . For the limited purposes of this paper, however, consideration of Z will not be necessary, and the previous assumption will be concealed under the requirement that the disks D X (z 1 , 1 − ), . . . , D X (z r , 1 − ) are distinct and that their boundary points τ X (z 1 ), . . . , τ X (z r ) are vertices of S(X). We keep the notational distinction mainly for future use.
For any object (F , ∇) of MIC(X/k), we define the function x → R X,Z (x, (F , ∇)), X → (0, 1]. We restrict our attention to the abelian tannakian subcategory MIC(X( * Z)/k) of MIC(X/k), consisting of the objects (F , ∇) having meromorphic singularities at Z. This means that F is the restriction to X of a coherent and locally free O X -Module F and for any open U ⊂ X and any section e ∈ Γ(U, F), there is a non-zero f ∈ O X (U ), such that f ∇(e) ∈ Γ(U, F ⊗ Ω 1 X ). Morphisms are horizontal O X -linear maps, with meromorphic poles at {z 1 , . . . , z r }.
We are mostly concerned with the full abelian tannakian subcategory MIC X (X( * Z)/k) of MIC(X( * Z)/k). It consists of objects (F , ∇) of MIC(X( * Z)/k) such that the O X -Module F is the restriction to X of a coherent and locally free O X -Module F. The category MIC X (X( * Z)/k) admits natural internal tensor product and internal Hom and End.
Let us assume, from now on in this section, that k is an extension of Q p . We say that an object (F , ∇) of MIC X (X( * Z)/k) satisfies condition NL at z ∈ Z, if the formal Fuchs exponents of (the regular part of) End((F , ∇)) at z are p-adic non-Liouville numbers. An old result of the author [1,Prop. 4], which applies under condition NL, relates the asymptotic behavior of x → R X,Z (x, (F , ∇)) for x → z i to the algebraic irregularity ρ zi (F , ∇) of the connection at z i , namely is continuous on X. We first reduce the problem of continuity of this function to the standard situation of a connection (F , ∇) with F free of finite rank µ over a basic affinoid disk or annulus.
For the convenience of the reader, we recall in section 4 the classical theory of differential systems on an annulus, largely due to Dwork, Robba, Christol and Mebkhout. One should also refer to the elegant account [29], for more recent results. These authors treat the case of a system of ordinary differential equations (0. [19, 2.5] to the present situation §5.3. Section 4 of the joint work with Di Vizio [2] contains a several variable generalization of this result. We point out that the upper semicontinuity is precisely the non-trivial part of [19, 2.5], and that our proof (as given here and in [2]) differs from the one of Christol-Dwork. We conclude the proof of continuity of x → R(x) in section 6. As in the classical case, we obtain a more precise description of x → R(x). Namely, let k 0 be a closed subfield of k, and assume X = X 0 ⊗k • , where X 0 is a k • 0 -semistable formal scheme. Assume the points z 1 , . . . , z r come from k 0 -rational points of (X 0 ) η (identified with the z i 's) and the object (F , ∇) of MIC X (X( * Z)/k) from an object , is the infimum of the constant 1 and a finite set of functions of the form Arbitrarily high h can appear even in the simplest rank 1 case of the equation killing x α cf. [23, IV.7.3 (iv)] and our account below (4.0.9).
This statement in the classical situation appeared in [30]: we provide here a hopefully more convincing proof 6 . We point out the novelty of using Dwork's technique of descent by Frobenius on basic affinoid annuli which are not necessarily affinoid subdomains of A 1 , cf. §5.4.
If (F , ∇) has meromorphic singularity at a z i ∈ Z, and the previous p-adic non-Liouvilleness assumption holds at z i , then a similar result holds on the half-line E connecting z i with the point τ X (z i ) at the boundary of D X (z i , 1 − ). In terms of the canonical parametrization via the function "radius of a point" ρ : Under the previous assumptions (F , ∇) is a Robba connection on X.
It follows from [5, 4.5
Acknowledgements. Continuity of the radius of convergence was the main target of a collaboration with Lucia Di Vizio on the long delayed, and still unpublished, manuscript [2], dedicated to the higher-dimensional generalization of the theory of Dwork-Robba and Christol-Dwork. While we plan to pursue that joint higher-dimensional program with Di Vizio in the near future, the present author had the good fortune of arousing the interest of Vladimir Berkovich in these problems. The influence of Berkovich, and a series of letters he generously addressed to the author, modified the original plan. Namely, Berkovich insisted that the global 1-dimensional case should be treated first, using the theory of semistable reduction, and he provided a detailed plan of the results to prove, and some of the proofs. To the author's regret, he did not agree to be a co-author.
The collaboration with Lucia over the past few years has greatly contributed to shape the ideas appearing in this paper.
We are also grateful to Yves André and Kiran Kedlaya for showing so much interest in our results, to Pierre Berthelot for a question he asked on Robba connections. It is a pleasure to acknowledge the generous help we received from Michel Raynaud on the topic of formal geometry. We thank Ahmed Abbes, Maurizio Cailotto, Gilles Christol, Antoine Ducros, Adrian Iovita, Qing Liu, Michael Temkin and especially Lorenzo Ramero for many useful discussions.
1 The structure of rig-smooth compact strictly k-analytic curves Notation 1.0.3. All over this section, the valuation of k is assumed to be non-trivial. In fact, except in subsection 1.1, which might be of independent interest, k will be assumed to be algebraically closed, in order to minimize technical difficulties. All k-analytic spaces are assumed to be separated. As in [21, §3], a k-analytic curve is a k-analytic space pure of dimension 1. All over this and the next section, X denotes a compact connected rig-smooth strictly k-analytic curve. For any field L an L-algebraic curve is a separated L-scheme of finite type, of pure dimension 1.
Any k-analytic curve is a good analytic space, i.e. every point of it has an affinoid neighborhood, and is paracompact [21, §3]. An irreducible compact k-analytic curve is either the analytification of a projective curve or it is affinoid [26], [21,Prop. 3.2]. So, in our case, X is either the analytification X an of a smooth projective k-algebraic curve X or it is strictly k-affinoid; its underlying topological space is a quasipolyhedron [5, 4.1.1].
Semistability
Classically, one gives the following definition. Notation 1.1.1. Let L be a non-archimedean field over k. An L • -scheme (resp. L • -formal scheme) is admissible if it is reduced, quasi-compact, separated, of (resp. topologically) finite presentation, and flat over L • . The algebra of an affine admissible L • -scheme (resp. L • -formal scheme) is also said to be admissible.
In the present article, all k • -schemes and k • -formal schemes will be assumed to be admissible.
For an admissible k • -scheme Z , we define the completion along the closed fiber or simply completion of Z as the admissible k • -formal scheme The definition is clearly independent of the choice of π and is functorial.
We further abuse the classical terminology as follows.
Definition 1.1.4. An admissible formal scheme X over k • is strictly semistable if, locally for the Zariski topology, it is of the form Similarly, a reduced separated k-scheme of finite type Y is strictly semistable if, locally for the Zariski topology, it is of the form The generic fiber of a semistable k • -formal scheme is rig-smooth. It follows from [8, Prop. 1.4 and step (1) in its proof] that a semistable k • -formal scheme is normal.
The notion of (resp. strict) semistability we use is the one-dimensional case of (resp. strict) nondegenerate polystability [8]. The following result is proven in [3]. Theorem 1.1.5. A normal admissible formal scheme X over k • is (resp. strictly) semistable if and only if its closed fiber X s is a (resp. strictly) semistable scheme over k. Definition 1.1.6. A normal admissible k • -scheme X is said to be (resp. strictly) semistable if its special fiber is (resp. strictly) semistable.
It follows from theorem 1.1.5 that a normal admissible k • -scheme X is (resp. strictly) semistable if and only if its completion is a (resp. strictly) semistable k • -formal scheme.
Semistable models
We assume from now on in this section that k is algebraically closed. The following strong form of the semistable reduction theorem for curves over k (in fact over a general basis) has recently been made available by Temkin [33].
is an isomorphism of k-algebraic curves (resp. of k-analytic curves). Theorem 1.2.2. (Strong Semistable Reduction Theorem) Let Y be an admissible k • -scheme of pure relative dimension one over k • . Let us assume that the generic fiber Y := Y η of Y is a smooth k-algebraic curve. Then, there is a semistable (and even a strictly semistable) For any such minimal choice of (Y ′ , ϕ ′ ), the morphism ϕ ′ is projective and is an isomorphism over the maximal semistable (open) subscheme of Y . Definition 1.2.3. The minimal pair (Y ′ , ϕ ′ ) appearing in theorem 1.2.2 is unique up to unique isomorphism inducing the identity on Y . It will be called the minimum semistable η-modification of Y and will be denoted by (Y st , ϕ st ).
The previous theorem has a formal counterpart as follows.
Theorem 1.2.4. Let Y be an admissible k • -formal scheme of pure relative dimension 1. Assume the generic fiber Y η is rig-smooth. Then, there is a semistable (and even a strictly semistable) k • -formal scheme Y ′ and an η-modification φ ′ : Y ′ → Y. Moreover, the k • -semistable formal scheme Y ′ may be chosen to be minimal in the following sense. If Y ′′ is a k • -semistable formal scheme and a morphism ϕ ′′ : For any such minimal choice of (Y ′ , ϕ ′ ), the morphism ϕ ′ is projective and is an isomorphism over the maximal semistable (open) formal subscheme of Y.
Proof. By uniqueness, it is enough to prove the statement for Y = Spf A affine. Under this assumption, by Elkik's theorem [25,Thm. 7 and Rmk. 2 on p. 587], one can find a finitely generated k • -algebra B such that B ∼ − − → A and that B ⊗ k • k is smooth over k. We now apply theorem 1.2.2 to the admissible affine k • -scheme Y = Spec B. We consider the minimal semistable η-modification ϕ ′ : Y ′ → Y . Then (a)-adic completion ϕ ′ : Y ′ → Y of ϕ ′ , for any a ∈ k •• \ {0}, satisfies the requirements of the theorem. It is in fact clearly semistable, projective and is an isomorphism over the maximal semistable (open) subscheme of Y. It is also minimal, because if φ ′′ : Y ′′ → Y has the properties of the statement, we may again assume that Y ′′ is affine. We apply again Elkik's result, to show that φ ′′ is the formal (a)-adic completion of a morphism ϕ ′′ : Y ′′ → Y of admissible k • -schemes. On replacing Y ′′ by an open neighborhood of its special fiber, we may assume that Y ′′ is semistable, as in theorem 1.
Then the (a)-adic completion of χ satisfies the requirements in our statement.
The following result is precisely what we need to start our arguments.
There exists a projective strictly semistable k • -scheme Y and an embedding of V in the generic fiber Y η of the formal completion Y of Y along its closed fiber, which identifies V to the affinoid sp −1 (Z) for an affine open subset Z of Y s . In particular, V is the generic fiber of the strictly semistable k • -formal scheme Y /Z , formal completion of Y along Z.
Proof. (Steps 1 and 2 hold for a higher dimensional affinoid V ).
Step 1. V is isomorphic to a strictly affinoid domain in the analytification V an of an affine scheme V over k. By the theory of Raynaud [11, §2.8 [25,Thm. 7 and Rmk. 2 on p. 587], one can find a finitely generated k Step Step 3. Assume now that V is one-dimensional. Then both V and Y are one-dimensional. By the semistable reduction theorem for curves 1.2.2, there is a projective strictly semistable η-modification Y ′ → Y . In particular, Y ′ is proper, strictly semistable, and has projective smooth generic fiber Y. The existence of a very ample line bundle on Y ′ relative to k • may be proven as in [13, §7]. If Z ′ is the preimage of Z in Y ′ s , then the lemma holds for We deduce as above from theorem 1.2.4 the notion of the minimum semistable ηmodification of Y, which is a pair (Y st , ϕ st ), consisting of a semistable k • -formal scheme Y st , and of a morphism ϕ st : Y st → Y: it is again unique up to unique isomorphism inducing the identity on Y. Definition 1.2.6. Let the k-analytic curve X and, in case, the projective curve X such that X = X an , be as before. Let F A(X) be the category whose objects are pairs (Y, φ), where Y is an admissible normal formal scheme and φ is an isomorphism φ : X ∼ −−→ Y η of k-analytic curves. We call the pair (Y, φ), and abusively also the formal scheme Y, an is the category of admissible formal models 7 of X. We similarly define the category SA(X ) of admissible models of X , namely pairs (Y , φ), where Y is an admissible normal k • -scheme and φ is an isomorphism φ : We define F S(X) (resp. SS(X )) to be the full subcategory of F A(X) (resp. of SA(X )) Similarly for the subcategory F S st (X) (resp. SS st (X )) of strictly semistable models. We will denote by PSS(X ) (resp. PSS st (X )) the full subcategory of SS(X ) consisting of proper (resp. strictly) semistable k • -models of X . Definition 1.2.7. A Tate curve over k is a smooth projective k-algebraic curve of genus 1, which does not admit a smooth projective model over the ring of integers k • . By a rational curve we simply mean the k-projective line P 1 . The analytification of a rational (resp. of a Tate) curve is also called rational (resp. a Tate curve).
The following result completely describes the category of formal semistable models of a given k-analytic curve.
Theorem 1.2.9. (Semistable reduction of compact k-analytic curves) Let Y be a compact rig-smooth strictly k-analytic curve. The category F S(Y ) of semistable models of Y , with morphisms the morphisms of k • -formal schemes inducing the identity on the generic fiber, is a non-empty directed partially ordered set.

The proof we propose is purely analytic. It will be sketched in the next subsections. It is based on an equivalence of categories between F S(Y ) and a category PS(Y ) whose objects are partitions of Y into a finite family of open annuli and of affinoids with good canonical reduction. The statement of the theorem, translated for PS(Y ), becomes an amusing discussion of intersections of disks and annuli in Y in the style of [13, Prop. 5.4]. For a complete proof, see [3].
We admit here the following relatively standard result, discussed in [3].

Lemma 1.2.10. Let Y be a smooth projective k-algebraic curve and let Y = Y an be the associated compact k-analytic curve. The completion functor Z → Z induces equivalences of categories

The following corollary of theorem 1.2.9 and lemma 1.2.10 gives a complete description of the category of proper semistable models of a given smooth projective curve. We are not aware of a direct algebraic proof in the style of Temkin's theorem 1.2.2.
Theorem 1.2.11. (Semistable Reduction of Projective Curves) Let Y be a smooth projective k-algebraic curve. A proper semistable model of Y is necessarily projective. The category PSS(Y) of projective semistable models of Y, and morphisms of k • -schemes inducing the identity on the generic fiber, is a non-empty partially ordered set, where x → y ⇔ x ≥ y. For any objects x, y of PSS(Y), x ∨ y = sup{x, y} exists. Every morphism Y ′′ → Y ′ in PSS(Y) is a blow-up of a closed finite a-torsion subscheme of O Y ′ , for some non-zero a ∈ k •• (in particular, it is projective). Unless Y is a rational or a Tate projective curve, for any object x of PSS(Y), there is a unique minimal object m x of PSS(Y), such that x ≥ m x and, for any objects x, y of PSS(Y), x ∧ y = inf{x, y} exists. In particular, PSS(Y) admits a minimum.
1.3 Formal coverings and formal models
For the definition and basic properties of formal affinoid coverings (always assumed to be strictly affinoid in this paper) we refer to the first part of [5, §4.3]. We want to describe a functorial equivalence between (resp. strictly) semistable models and the (resp. strictly) semistable coverings of X described below.
The set of formal coverings of X forms a category Cov(X) in which an arrow U → V is a refinement of coverings, i.e. a map of sets Φ : For any formal covering U of X there is a canonical specialization map sp U : X → X U , obtained by patching together the canonical reduction maps for all affinoids in the covering (their compatibility being precisely the definition of a formal affinoid covering). Specialization maps also have a functorial behaviour: Two formal coverings U and V of X are equivalent if U ∪ V is a formal covering of X. Equivalence of formal coverings is an equivalence relation: we then write U ∼ V, and denote by [U] the equivalence class of U.
It is easy to check that the class of quasi-isomorphisms in Cov(X) admits calculus of right fractions. The corresponding localized category is denoted Cov(X)/ ∼ and is called the category of formal affinoid coverings of X up to equivalence. Its objects are equivalence classes [U] of formal coverings of X, and, for any pair U, V of formal coverings of X, where U ′ → U is a quasi-isomorphism of formal coverings. If Φ is a quasi-isomorphism, ϕ Φ canonically identifies X U to X V . In particular, if U and V are any pair of equivalent formal coverings of X, the inclusions Φ : U ֒→ U ∪ V and Ψ : V ֒→ U ∪ V are quasi-isomorphisms and ϕ Φ and ϕ Ψ are isomorphisms which canonically identify sp U with sp V . In other words, the natural functor U → X U factors through the reduction functor Cov(X)/ ∼ → Sch k .
Let us now assume that the formal covering U is distinguished. One then defines a sheaf of topological rings, flat and topologically of finite type [ is a normal formal model of X whose specialization map of G-ringed spaces sp XU : X G → X U set-theoretically identifies with sp U . It is easy to check that if Φ : U → V is a refinement of formal coverings of X, there is a natural morphism of sheaves of topological rings : X U → X V whose generic fiber is the identity of the k-analytic curve X. Moreover, if Φ is a quasiisomorphism of distinguished coverings, ϕ Φ : X U → X V = X U is the identity of the formal scheme X U . Definition 1.3.2. We denote by Cov dis (X) the full subcategory of Cov(X) whose objects are distinguished formal coverings of X .
The class of quasi-isomorphisms in Cov dis (X) admits calculus of right fractions. The corresponding localized category Cov dis (X)/ ∼ is a full subcategory of Cov(X)/ ∼, and is called the category of distinguished formal coverings of X up to equivalence. Its objects are equivalence classes [U] of distinguished formal coverings of X. The following result will be detailed in [3].
As explained in [11,§2.8,Step (b) in the proof of Thm. 3], for two admissible formal models X and Y of X, there is at most one morphism ϕ : X → Y inducing the identity on generic fibers. Therefore, the category F A(X) is really a partially ordered directed set. It then follows that, for any pair U, V of distinguished formal coverings of X, where Φ : U ′ → V ′ is a refinement of formal coverings. Definition 1.3.4. Let X be as before, and let U be a distinguished formal strictly affinoid covering of X. We say that U is a (resp. strictly) semistable affinoid covering of X, if X U is a (strictly) semistable curve over k. It is clear that these notions are compatible with equivalence of formal coverings. We define CS(X) (resp. CS st (X)) as the full subcategory of Cov dis (X)/ ∼ whose objects are equivalence classes of (resp. strictly) semistable affinoid coverings of X.
Remark 1.3.5.
A semistable covering U of X is distinguished by assumption and the reduction X U is geometrically reduced. We deduce from [13, Prop. 1.2] that for any nonarchimedean extension field K of k and for U ⊗K = {U ⊗K} U∈U , one has X ⊗K U ⊗K = X U ⊗ K .
In particular, U ⊗K is a semistable covering of X ⊗K.
It follows from Theorem 1.1.5 that if the covering U is (strictly) semistable, then X U is a (strictly) semistable model of X.

Corollary 1.3.6. The construction U → X U induces an equivalence between the category CS(X) (resp. CS st (X)) of (resp. strictly) semistable coverings up to equivalence and the category F S(X) (resp. F S st (X)). It has the property that Hom(X U , X V ) is nonempty if and only if there are formal coverings U ′ and V ′ respectively equivalent to U and V, and a refinement Φ : U ′ → V ′ . In that case Hom(X U , X V ) = {ϕ Φ }, via the canonical identifications of X U ′ and X V ′ with X U and X V , respectively.
1.4 Disks and annuli
For a ∈ A 1 (k) and r ∈ R >0 we denote by D(a, r − ) (resp. D(a, r + )) the standard open (resp. closed) k-disk of radius r > 0 centered at a. The maximal point t a,r of D(a, r + ) is defined by the multiplicative norm recalled in the display below. We denote by B(a; r 1 , r 2 ) (resp. B[a; r 1 , r 2 ]) the standard open (resp. closed) k-annulus of radii 0 < r 1 < r 2 (resp. 0 < r 1 ≤ r 2 ), centered at a ∈ A 1 (k). We write B(r 1 , r 2 ) (resp. B[r 1 , r 2 ]) for the standard annulus B(0; r 1 , r 2 ) (resp. B[0; r 1 , r 2 ]). The ratio r(V ) = r 1 /r 2 ∈ (0, 1) is the height of V = B(a; r 1 , r 2 ) or B[a; r 1 , r 2 ]. In general, an open or closed k-disk (resp. k-annulus) is a k-analytic space isomorphic to a standard open or closed k-disk (resp. k-annulus). It is k-rational if, moreover, the radii of the corresponding disk or annulus are in |k|. Two k-rational k-annuli, both open or both closed, are isomorphic iff they have the same height. Moreover, if a k-analytic curve V is isomorphic via the coordinate T to D(0, r − ) (resp. to B(r(V ), 1)), then for any automorphism ϕ ∈ Aut(V ) and any x ∈ V , |T (ϕ(x))| equals |T (x)| (resp. either |T (x)| or r(V )/|T (x)|: ϕ is direct in the former case and inverse in the latter). An open k-disk (resp. k-annulus) is a simply-connected quasi-polyhedron; it has precisely one (resp. two) endpoint(s).
A k-disk or k-annulus in a k-analytic curve X will simply be called a disk or an annulus in X.
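For reference, the standard disks and annuli, and the maximal point, can be written out explicitly in the coordinate T; the following display is a standard recap of these notions rather than a verbatim quotation from the text:

$$D(a,r^{-})=\{x : |(T-a)(x)|<r\},\qquad D(a,r^{+})=\{x : |(T-a)(x)|\le r\},$$
$$B(a;r_1,r_2)=\{x : r_1<|(T-a)(x)|<r_2\},\qquad B[a;r_1,r_2]=\{x : r_1\le |(T-a)(x)|\le r_2\},$$
$$\Bigl|\,\textstyle\sum_i a_i(T-a)^i\,\Bigr|(t_{a,r})=\max_i |a_i|\,r^{\,i}.$$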
The standard formal disk over k • is Spf k • {T }; the standard formal annulus of height r over k • is Spf k • {S, T }/(ST − a), where a ∈ k • , |a| = r. Notice that B[r, 1] is well-defined.

Definition 1.4.1. A basic formal disk (resp. a basic formal annulus) over k • is an affine connected k • -formal scheme B = Spf A equipped with a dominant étale morphism T to the standard formal disk (resp. to a standard formal annulus) over k • , such that the inverse image of the point 0 ∈ A 1 ( k) (resp. of the point (S, T ) ∈ Spec k[S, T ]/(ST )) consists of a single point, with residue field k. A basic affinoid disk (resp. basic affinoid annulus) over k is the generic fiber of a basic formal disk (resp. annulus) over k • . We write (B, T ) for a basic formal disk (resp. annulus) and (B, T ) for the corresponding basic affinoid disk (resp. annulus).
1.5 Semistable partitions
We introduce here a very simple notion, equivalent to the ones discussed in the previous subsections of semistable formal model and of semistable affinoid covering of X. It is the notion of a "semistable partition" of X. It has the virtue of making completely obvious two well-known statements for which there is no canonical reference in the literature. Namely, the existence of a minimum semistable model of X (unless X is the analytification of either a rational or a Tate curve), and the existence of a common minimum semistable refinement of two semistable formal coverings. Again, we need such explicit descriptions, in view of application to differential systems on X.
We refer to [3] for full proofs in a more general setting.

. . . , C r } of connected distinguished strictly affinoid domains with good canonical reduction will be called a semistable partition of X, and will be denoted by P(B, C). The semistable partition P(B, C) is a strictly semistable partition if every annulus B ∈ B has two distinct boundary points in X \ B. If X is the analytification of a smooth projective curve with good reduction, we also regard P(∅, {X}) as a (strictly) semistable partition. For two semistable partitions P = P(B, C) and P ′ = P(B ′ , C ′ ) of X, we say that P ′ ≥ P if for every B ∈ B (resp. C ∈ C), the elements of B ′ and C ′ contained in B (resp. C) give a partition of B (resp. C). We denote by PS(X) (resp. PS st (X)) the category, in fact a partially ordered set, of semistable (resp. strictly semistable) partitions of X.

Let X be a semistable (resp. a strictly semistable) formal model of X and let sp : X → X s be the corresponding specialization map to a curve over k. Let Sing(X s ) be the singular locus of X s , and let X sm s = X s \ Sing(X s ) be the smooth part of X s . Then Sing(X s ) = {z 1 , . . . , z N } where each z i is a closed point of X s , and B i := sp −1 (z i ) is an open annulus in X. Similarly, let c 1 , . . . , c r be the connected (hence irreducible) components of the smooth locus X sm s , and let η cj be the generic point of c j . Then either X is smooth, in which case r = 1 and c 1 = X s , or sp −1 (c j ) is a connected affinoid domain C j in X with good canonical reduction, whose Shilov boundary consists of the single point η Cj of type (2), unique inverse image of η cj . Let B X = {B 1 , . . . , B N } and C X = {C 1 , . . . , C r }. We conclude that P X = P(B X , C X ) is a semistable (resp. strictly semistable) partition of X. If ϕ : Y → X is a morphism of semistable (resp. strictly semistable) formal models of X, then P Y ≥ P X : we have defined a functor X → P X from the category of semistable (resp. strictly semistable) formal models of X to the category of semistable (resp. strictly semistable) partitions of X.

Theorem 1.5.4. The functor X → P X induces an equivalence of categories between F S(X) (resp. F S st (X)) and PS(X) (resp. PS st (X)).
Proof. In order to construct a quasi-inverse of the functor X → P X , we first associate to a semistable (resp. strictly semistable) partition P = P(B, C) of X a (resp. strictly) semistable covering U P of X, as follows. For any B ∈ B, let {C, C ′ } ⊂ C be the set of C ∈ C such that the closure B of B in X contains the maximal point η C of C. (We do not exclude that C = C ′ .) Notice that B ∪ C ∪ C ′ = B ∪ C ∪ C ′ is a compact connected analytic domain in X, hence it is either an affinoid or a projective curve. In both cases, cutting off a finite number of open disks contained in either C or C ′ , we get an affinoid domain U B such that B ⊂ U B ⊂ B ∪ C ∪ C ′ .

Lemma 1.5.5. Let B, C, C ′ be as before and let U B be any affinoid domain of X such that B ⊂ U B ⊂ B ∪ C ∪ C ′ and such that (B ∪ C ∪ C ′ ) \ U B is the union of a finite number of maximal open disks contained in either C or C ′ . Then the canonical reduction of U B is a reduced curve which is either irreducible with a single ordinary double point, if C = C ′ , or the union of two smooth irreducible curves crossing normally, if C ≠ C ′ .
Proof. We apply Prop. 1.2.5 to realize U B as an affinoid domain in Y = Y an η = Y η , where Y is a strictly semistable projective k • -scheme of formal completion Y, and U B is of the form sp −1 Y (Z), for Z an affine open subset of Y s . Notice that this does not imply that the canonical reduction map of U B be just the restriction of sp Y to U B → Z. All we know is that there is a morphism compatible with specialization maps Z → U B . We now construct a semistable partition . . , m ′ ) having η C (resp. η C ′ ) as a cluster point. We assume that, for every i, B i is identified with B(r i , 1) in such a way that t 0,r → η C as r → 1, and B We now glue open disks to C (resp. C ′ ), via the natural embeddings B as r → 1, to obtain C (resp. C ′ ). The resulting k-analytic curve T = U B ∪ C ∪ C ′ is smooth and proper, and is therefore algebraizable T = T an , where T is a smooth projective curve over k. Moreover, the formal covering {U B , C, C ′ } determines a formal model T of T whose special fiber is the union of the canonical reductions { U B , C, C ′ } of the three affinoids in the covering. As for C and C ′ , they are disjoint affine open smooth k-subvarieties of T s . Moreover, X \sp −1 T ( C∪ C ′ ) = B implies that T s \( C∪ C ′ ) consists of a single k-rational point. By [13, 2.3], that point is an ordinary double point of T s . This concludes the proof.
We now define the affinoid covering U P of X as the union of C and of the set of all affinoids U B constructed as above, for all B ∈ B. It is clear that, for any choice of B, B ′ ∈ B and of affinoids U B and U ′ B ′ as above, the intersection U B ∩ U ′ B ′ is a formal affinoid in both U B and U ′ B ′ . In other words, we have shown that U P is a formal semistable covering of X. We set X P for the formal scheme X U P . If P ′ ≥ P, there is a canonical refinement Φ : U P ′ → U P , and therefore a morphism X P ′ → X P .
The composite functor X → P := P X → U P → X U P is easily seen to be the identity of F S(X) (resp. of F S st (X)). Similarly, P → U P → X := X U P → P X is the identity of PS(X) (resp. PS st (X)).
The proof of the following results will be detailed in [3].

Lemma 1.5.6. Let X be a compact rig-smooth strictly k-analytic curve. Then
1. The union of two closed disks with nonempty intersection in X is either one of the two disks, or else it is all of X and then X is the analytification of a projective smooth curve of genus 0. In the former case the intersection of the two disks is one of them; in the latter, it is a closed annulus.
2. A non-empty intersection of a closed disk and a closed annulus in X is either the disk or the annulus or a strictly smaller closed annulus. Their union is, respectively, the annulus, the disk or a strictly bigger closed disk.
3. A non-empty intersection of two closed annuli in X consists of either one or two disjoint closed annuli. Their union is a closed annulus in the former case and a Tate curve in the latter.
Corollary 1.5.7. The lemma holds by replacing "closed" by "open" everywhere in the statement.
Corollary 1.5.8. Let C and C ′ be connected strictly affinoid domains of X, with good canonical reduction. Then
1. Assume X is not rational. Then C ∩ C ′ equals either ∅, or C or C ′ , or else it is a formal affinoid domain in both C and C ′ .
2. If the union U = C ∪ C ′ is an affinoid domain in X, properly containing both C and C ′ , then U has good canonical reduction, and its specialization map is induced by those of C and C ′ .
3. The intersection of a closed annulus B of X with C equals either ∅, or B, or C, or the complement in C of a finite number of open disks contained in distinct residue classes, and coinciding with the full residue class except for at most one of them.
4. The intersection of an open disk D of X with a connected strict affinoid domain C of X, with good canonical reduction, equals either ∅ or D or C, or the complement of a closed disk contained in a single residue class of C.
5. The intersection of an open annulus B of X with a connected strict affinoid domain C of X, with good canonical reduction, equals either ∅ or B or C.
Straightforward consequences of the previous lemmas are the following propositions [3].
Proposition 1.5.10. The partially ordered set of semistable partitions of X is directed: if P = P(B, C) and P ′ = P(B ′ , C ′ ) are semistable partitions of X, there exists a minimum semistable partition P ′′ of X with P ′′ ≥ P and P ′′ ≥ P ′ .
Proposition 1.5.11. Assume X is neither the analytification of a rational nor of a Tate curve. Then the partially ordered set of semistable partitions of X has a minimum P 0 = P(B 0 , C 0 ).
1.6 Subpolygons of an analytic curve
There is a fourth equivalent viewpoint we need to master for dealing with differential equations on X. This is the notion of a "subpolygon of X". Actually, this notion has been used implicitly by Dwork, Robba, Christol,... since the infancy of the theory of p-adic differential equations. The framework of Berkovich analytic spaces gives substance to what has been for a long time a somewhat artificial tool to describe the "generic" behaviour of radii of convergence of power series solutions to differential equations. The global description of radii of convergence on the full Berkovich analytic space is the main novelty of this paper.
This section is entirely due to Berkovich, but we are unable to give precise references for most of the material we have collected here. Full details are given in [3]. We recall that the non-archimedean field k is supposed to be non-trivially valued and algebraically closed. Notice that a vertex of Y necessarily belongs to S(Y ) and that, if Y is affinoid, any point in the Shilov boundary of Y is a vertex. Points of S(Y ) are of type (2) or (3). We denote by H(Y ) ⊂ Y the subset consisting of points of type (2) or (3), and call it the Berkovich hyperbolic subspace of the curve Y . For any algebraically closed non-archimedean extension Notice that, if a morphism ϕ : S ′ → S in GP(X) exists, S is a subpolygon of S ′ , and the fibers of ϕ over a vertex are simply connected trees. We also denote by (GP(X), ≤) the category associated to the order relation ≤: a morphism S ≤ S ′ in this category is called an inclusion S → S ′ of subpolygons of X. Let ι : S → S ′ be an inclusion of subpolygons of X. We say that ι is a subdivision (and that S ′ is obtained by subdivision of S) if S = S ′ . A subdivision is visibly the composition of a finite number of subdivisions obtained by the addition of one vertex inside an open edge. We call 1-step subdivision such a subdivision. In general, a subdivision S ≤ S ′ in which S ′ has N vertices more than S, i.e. an N -step subdivision, is a product of N 1-step subdivisions. An inclusion S ≤ S ′ is called exact if every vertex (resp. open edge) of S ′ contained in S is a vertex (resp. open edge) of S. An inclusion ι : S → S ′ canonically decomposes into a composition ι = ε • σ, where ε : S s → S ′ is an exact inclusion and σ : S → S s is a subdivision.
For a morphism ϕ : S ′ → S in GP(X), we say that it is a trivial retraction if S ≤ S ′ is a subdivision, and a neat retraction if S ≤ S ′ is an exact inclusion. Any morphism ϕ : S ′ → S in GP(X) canonically decomposes as ϕ triv • ϕ neat , where ϕ neat : S ′ → S s is neat and ϕ triv : S s → S is trivial. Notice that a neat retraction sends vertices to vertices.

To a semistable partition P = P(B, C) = P({B 1 , . . . , B N }, {C 1 , . . . , C r }) of X, we naturally associate a complete subpolygon S(P), whose vertices are the maximal points of the affinoids C 1 , . . . , C r and whose open edges are the skeleta of the open annuli B 1 , . . . , B N . If P ′ ≥ P in PS(X), it is clear that S(P ′ ) ≥ S(P) as subpolygons of X, and that there is a morphism S(P ′ ) → S(P) in GP c (X).
The following results are proven in [3].
Theorem 1.6.5. The functor S establishes an equivalence of categories between PS(X) and GP c (X). Moreover, GP c (X) is the opposite category to (GP c (X), ≤). We also denote by S the composite functor X → P X → S(P X ) =: S(X), which induces an equivalence of categories between F S(X) and GP c (X).
Notice that [8, 1.7] the generic point of a component c i of X s has a unique inverse image η i in X under sp X , and the set of vertices V(X) of S(X) is precisely the set of those inverse images. The set of open edges E(X) is the set of analytic skeleta of the inverse images under sp X of the singular points of X s . For a morphism ϕ : Y → X in F S(X), we denote by τ X,Y : S(Y) → S(X) the corresponding morphism in GP c (X). Then, as a topological space, while the subset H(X) ⊂ X of points of type (2) or (3) is where Y varies among semistable models of X. In particular, we get, for any semistable model X of X, a retraction τ X : X → S(X), as the natural map to S(X). It has the property that the inverse image under τ X of any vertex or closed edge of S(X), is a strict affinoid domain in X. It follows that the same is true for any closed k-rational segment contained in a closed edge of S(X) (because it is a closed edge of S(X), for some Y ≥ X). If X = X 0 is the minimum semistable model of X, then S(X 0 ) = S(X) and τ X0 = τ X . Definition 1.6.6. For any semistable model X of X, and any rational point y ∈ X (necessarily, y ∈ X \ S(X)), we denote by D X (y, 1 − ) the maximal open disk neighborhood of y contained in X \ S(X).
Definition 1.6.7. Assumptions as in definition 1.6.6. An isomorphism T : D X (y, 1 − ) ∼→ D(0, 1 − ), taking y to 0, is called an X-normalized coordinate at y.
Notice that the map τ X takes the disk D X (y, 1 − ) to the unique boundary point of D X (y, 1 − ) in X, not in D X (y, 1 − ), τ X (y) ∈ S(X). The fiber τ −1 X (τ X (y)) \ {τ X (y)} is the disjoint union of a family of disks having the same limit point τ X (y) ∈ S(X). It follows from theorem 1.2.9 that for any compact rig-smooth strictly k-analytic curve X, which is not rational, and any rational point y ∈ X (necessarily, y ∈ X \ S(X)), there exists a maximal open disk neighborhood D X (y, 1 − ) of y in X. It generalizes the neighborhood D y (X) defined in the introduction for an affinoid X in A 1 .
1.7 Admissible blow-ups
Let us recall that for any admissible k • -formal scheme X, any a ∈ k •• \ {0}, and any open ideal of finite presentation A ⊂ O X , one defines the blow-up of X along A as the morphism ϕ : Y → X of formal schemes, inductive limit as n → ∞, of the blow-up ϕ n : Y n → X n of the scheme X n = (X, O X /(a) n+1 ) along the ideal A ⊗ O X /(a) n+1 . Such a morphism is called an admissible blow-up, and is independent of the choice of a.
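In symbols, this is a restatement of the preceding sentence, with Bl denoting the usual blow-up of a scheme along an ideal:

$$Y \;=\; \varinjlim_{n}\ \operatorname{Bl}_{\,\mathcal A\otimes \mathcal O_{X}/(a)^{n+1}}\!\left(X_n\right),\qquad X_n=(X,\ \mathcal O_X/(a)^{n+1}),\qquad \varphi=\varinjlim_n \varphi_n : Y\to X.$$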
We now show that a morphism ϕ : X ′ → X in F S(X) is an admissible blow-up. We use the category PS(X) for our description. Let us consider two strictly semistable partitions P = P(B, C) and P ′ = P(B ′ , C ′ ) of X, with P ′ ≥ P, and let ϕ : X ′ := X P ′ → X := X P be the corresponding morphism of F S(X). Consider an affinoid with good canonical reduction C ′ ∈ C ′ . Then either C ′ is contained in an open annulus B ∈ B or it is contained in an affinoid C ∈ C. In the second case, either C ′ is contained in a single residue class D of C, or the maximal point η C ′ of C ′ coincides with the maximal point η C of C. In the latter case, C ′ is the complement in C of a finite number of residue classes D 1 , . . . , D r of C. Notice that for each i = 1, . . . , r, D i must contain a finite number of disjoint affinoids with good canonical reduction belonging to C ′ . The partition P ′ may be reconstructed from the family F = F (P ′ , P) of the elements of C ′ contained in either an open annulus B ∈ B, or in a maximal disk D of some affinoid with good canonical reduction C ∈ C. In fact, let C ′ 0 ∈ C ′ be an affinoid with good canonical reduction which is not in F . Then, C ′ 0 is obtained from one affinoid C ∈ C, by subtracting those residue classes which do contain some C ′ ∈ F . On the other hand, the annuli in B ′ are simply the connected components of the complement in X of the union of all affinoids in C ′ . Now if B (resp. D) is an open k-rational annulus (resp disk), the description of connected strictly affinoid domains with good canonical reduction in B or D is elementary. They are the complement of a finite number of maximal open disks in a closed strictly affinoid disk. We define C ϕ as the union of all affinoids V ∈ F and A ϕ as the sheaf of ideals of O X consisting of sections f of O X whose pull-back under sp X is < 1 on C ϕ . It is clear (by explicit description of generators) that A ϕ is an open ideal of finite presentation of O X .
It is easily seen (cf. [3] for more details) that

Theorem 1.7.1. The morphism ϕ is the admissible blow-up of X along A ϕ .

Definition 1.7.2. We will say that the morphism ϕ : Y → X in F S(X) is a trivial (resp. simple) 1-step blow-up, if the corresponding morphism S(ϕ) : S(Y) → S(X) in GP c (X) is a trivial (resp. simple) 1-step retraction.
is the blow-up of a single closed annulus of height 1 contained in an open annulus B i ∈ B i and intersecting its skeleton, where P i = (B i , C i ) is the semistable partition associated to X i ).
In particular, every morphism ϕ : X ′ → X in F S(X) is an admissible blow-up, it is a product of 1-step trivial or simple blow-ups, and each non-empty fiber of X ′ s → X s is a connected union of rational smooth projective curves whose graph is a tree.
1.8 Étale morphisms of formal schemes
For any morphism ϕ : Y → X of semistable k • -formal schemes, with generic fiber ϕ η : Y → X, the following diagram is commutative [9, 4.4.2] (1.8.0.2). If, moreover, ϕ : Y → X is étale, then ϕ η (S(Y)) ⊂ S(X) and ϕ −1 η (S(X)) = S(Y), so that the previous diagram becomes

Suppose the point y ∈ Y is a vertex of S(Y). This means that it is the unique inverse image under sp Y of the generic point of a component c of Y s . But ϕ induces an étale morphism from the smooth part c sm to a component c ′ of X s [8, 2.2 (i)]. Therefore, ϕ η (V(Y)) ⊂ V(X).
2 Continuity of real-valued functions on rig-smooth k-analytic curves
Lemma 2.0.1. Let k be any non-archimedean field, Y be any k-analytic space, L a non-archimedean field over k and Y L = Y ⊗ k L be the extension of Y to L. Then the natural topology of Y is the quotient topology of the natural topology of Y L via the projection map ψ L : Y L → Y .

Proof. We first prove that the map ψ L is closed. Let C be a closed subset of Y L . Let y be a point of Y \ ψ L (C), and let D 2 be a compact neighborhood of y in Y . Then D 1 = ψ −1 L (D 2 ) is a compact subset of Y L . The intersection C ∩ D 1 is then compact; its image ψ L (C ∩ D 1 ) is then closed, so that D 2 \ ψ L (C ∩ D 1 ) is a neighborhood of y in Y not intersecting ψ L (C). The conclusion follows from [27, 2.4].
We assume from now on in this section that the non-archimedean field k is non-trivially valued.
Definition 2.0.2. Let X be a semistable k • -formal scheme, Γ a topological space, and let X = X η be the generic fiber of X. Let f : X → Γ be any function and, for any non-archimedean field extension k ′ /k and étale morphism of semistable k ′• -formal schemes ψ : We say that (f ψ ) ψ is the étale-local system of functions on X with values in Γ associated to the function f . We identify f = f id X with the system (f ψ ) ψ .
Notice that if f : X → Γ is continuous, every component f ψ of the étale-local system of functions associated with f is also continuous. We present some basic results in the opposite direction. The first lemma is a consequence of lemma 2.0.1 and [8, Lemma 5.11].
Lemma 2.0.3. Let X be a semistable k • -formal scheme and f = (f ψ ) ψ be an étale-local system of functions on X with values in the topological space Γ, associated to the function f : X = X η → Γ. Assume there is a non-archimedean extension field k ′ /k and an étale covering {ψ α : Y → X ⊗k ′• } α of X ⊗k ′• such that, ∀α, f ψα is continuous. Then f is continuous.

We consider from now on étale-local systems of functions on X with values in a topological space Γ of this type. Notice that Γ has a partial ordering ≤, where h ≤ g if h(x) ≤ g(x) ∀x ∈ [a, b], and that if g − ε < h < g + ε, with ε > 0 a constant function on [a, b], then d(g, h) < 2 ε.
We recall for completeness that a function ϕ : T → Γ, where T is any topological space, is upper semicontinuous or USC (resp. lower semicontinuous or LSC) if, ∀t 0 ∈ T and ε > 0, there exists a neighborhood U t0,ε of t 0 in T such that ϕ(t) < ϕ(t 0 ) + ε (resp. ϕ(t) > ϕ(t 0 ) − ε) ∀t ∈ U t0,ε . If ∀α ∈ I, ϕ α is USC (resp. LSC), then inf α∈I ϕ α is USC (resp. sup α∈I ϕ α is LSC). Notice that if ϕ is both USC and LSC at t 0 ∈ T , then it is continuous at t 0 . From now on in this section, k is algebraically closed and X is a rig-smooth strictly k-affinoid curve.
Definition 2.0.5. A strictly affinoid domain V ⊂ X is said to be special if it is either isomorphic to a closed disk in τ −1 X (x) for a point x ∈ S(X) of type (2), or it is τ −1 X (L), where L is a k-rational closed interval in a closed edge of S(X), such that at most one of the ends of L is a vertex (the case of L consisting of a single point is not excluded). In the former case, S(V ) = Γ(V ) is the maximal point of the disk and, in the latter case, S(V ) = L and Γ(V ) = V(V ) consists of the ends of L.
In the following theorem, for a function f : X → Γ and for any algebraically closed non-archimedean field k ′ over k, we let X k ′ := X ⊗k ′ , ψ X,k ′ /k : X k ′ → X be the canonical projection, and f k ′ := f • ψ X,k ′ /k be the pull-back of f to X k ′ .

Theorem 2.0.6. Any function f : X → Γ with the following five properties is continuous:
1. f is continuous at every k-rational point of X;
2. the restriction of f to every special affinoid subdomain V of X is such that f (x) ≥ min y∈Γ(V ) f (y) for every x ∈ V ;
3. for any algebraically closed non-archimedean field k ′ over k, the restriction of f k ′ to any open segment of X k ′ is continuous;
4. the restriction of f to S(X) is continuous at all vertices of S(X);
5. f is upper semicontinuous on X.

Notice that the properties (3) and (4) of the theorem imply that the restriction of f to S(X) is continuous.
Proof. First of all, we observe that, given an algebraically closed non-archimedean field k ′ over k, the function f k ′ : X k ′ → Γ possesses the properties (1)−(5). Indeed, this is trivial for the property (3). Property (4) follows from the isomorphism S(X k ′ ) ∼→ S(X). Property (5) is a consequence of lemma 2.0.1. Furthermore, the validity of (2) for X k ′ follows from its validity for X and the property (3). Finally, let x ′ be a k ′ -rational point of X k ′ which is not the preimage of a k-rational point of X. If x is the image of x ′ in X, then the preimage of x in X k ′ contains an open neighborhood of x ′ in X k ′ (isomorphic to an open disk), and the property (1) follows.
By lemma 2.0.1 we may increase the field k and assume that it is maximally complete (and algebraically closed). By the property (5), it suffices to verify that, for every point x 0 ∈ X and every ε > 0, the set {x ∈ X | f (x) > f (x 0 ) − ε} contains a neighborhood of the point x 0 . According to (1), we may assume that x 0 / ∈ X(k).
Consider first the case when x 0 / ∈ S(X). We may then assume that x 0 ∈ D X (y, 1 − ), for some y ∈ X(k). Then D X (y, 1 − ) ∼ = D k (0, 1 − ), y → 0, and x 0 → t 0,r ∈ D k (0, 1 − ) for some 0 < r < 1. Take now a number r < R < 1. It follows from the property (3) that the restriction of f to the interval {t 0,r ′ |r ≤ r ′ ≤ R} is continuous and, therefore, we can find R sufficiently close to r with f (t 0,R ) > f (x 0 ) − ε. Let V be the closed disk D(0, R + ). It is an affinoid neighborhood of x 0 = t 0,r in D(0, 1 − ) with Γ(V ) = {t 0,R } and, by the property Suppose now that x 0 ∈ S(X). First of all, if x lies in an open edge of S(X), then the preimage of that edge under the retraction τ X : X → S(X) is isomorphic to an open k-annulus isomorphic to B(r 1 , r 2 ), where x 0 → t 0,r , with r 1 < r < r 2 . By the property (3), the function (r 1 , r 2 ) → Γ : r ′ → f (t 0,r ′ ) is continuous, and so we can find numbers r 1 < R 1 < r < R 2 < r 2 sufficiently close to r such that f (y 1 ), f (y 2 ) > f (x 0 ) − ε, where y 1 = t 0,R1 and y 2 = t 0,R2 . Let V be the closed annulus τ −1 . Then V is a neighborhood of x 0 , and Γ(V ) = {y 1 , y 2 }. Property (2) Furthermore, suppose that x 0 is a vertex of S(X). If S(X) = {x 0 }, then {x 0 } is the Shilov boundary of X and, by property (2), one has f (x) ≥ f (x 0 ) for all x ∈ X. Assume therefore that S(X) = {x 0 }. Consider a connected closed neighborhood L of x 0 in S(X) which does not contain vertices of S(X) other than x 0 . One has L = ∪ n i=1 L i , where each L i is homeomorphic to [r i , 1] with r i ∈ |k × | and x 0 corresponds to the point 1 of [r i , 1]. Let V i be the affinoid domain τ −1 X (L i ) ⊂ X. The union n i=1 V i is a compact neighborhood of x 0 in X, so that it suffices to verify that the set Since the restriction of f to S(X), hence to L, is continuous at x 0 , for any given ε > 0, we may find r i ∈ |k × | ∩ (0, 1) so close to 1 that f (y i ) > f (x 0 ) − ε. Property (2) then implies that f (x) > f (x 0 ) − ε for all x ∈ V i , and the theorem follows.
3 Normalized radius of convergence

3.1 Formal structures

Notation 3.1.1. In this section, k is supposed to be non-trivially valued, algebraically closed and of characteristic zero. Here X is the complement of a finite set (possibly empty) Z = {z 1 , . . . , z r }, consisting of r distinct k-rational points in a compact rig-smooth connected k-analytic curve X. We assume that X is the generic fiber of a semistable k • -formal scheme X. It is well known that the reduced positive divisor Z determines a finite flat closed reduced subscheme Z of X. We call Z the schematic closure of Z in X. It is known [33] that there is an admissible blow-up Y of X such that the inverse image Z ′ of Z in Y is an effective relative Cartier divisor of Y, finite étale over Spf k • , which does not intersect the singular locus of Y.
So, we will assume in the following that the schematic closure Z of Z in X is étale and is contained in the maximal smooth open formal subscheme of X. We point out that this fact is equivalent to the requirement that the disks D X (z 1 , 1 − ), . . . , D X (z r , 1 − ) are distinct and that their boundary points τ X (z 1 ), . . . , τ X (z r ) are vertices of S(X).

Remark 3.1.2. We keep the notational distinction between Z and Z mainly for future use.
We generalize the skeleton functor We will say that the open half-line ℓ i connects z i to τ Y (z i ). For any rigid point The boundary point τ Y,Z (x) of D Y,Z (x, 1 − ) in X belongs to S Z (Y) and, as in (1.6.5.1) we have a canonical continuous retraction Let E be a locally free O X -Module of finite type equipped with an X/k-connection ∇. Notice that the abelian sheaf for the G-topology of X, is not in general locally constant. We use a canonical coor- 1 − ), and define, for r ∈ (0, 1), D Y,Z (x, r ± ) as T −1 (D(0, r ± )). For any r ∈ (0, 1) ∩ |k|, the restriction of E to The coordinate column vector y with respect to the basis e of a horizontal section of (E |D , ∇ |D ) satisfies a system of differential equations of the form (0.1.0.1) where G ∈ M µ×µ (O X (D)) depends on r and on the choice of the basis e. By iteration of the system (0.1.0.1) we obtain, for any i ∈ N, the equations For r 1 < r 2 < 1 we have R r1 (x) ≤ R r2 (x) ≤ 1, and therefore the quantity is well-defined and belongs to (0, 1]. We conclude that E ∇ |D Y,Z (x,R(x) − ) is a locally constant sheaf with fiber k µ on D Y,Z (x, R(x) − ) (for both the natural and the G-topology). Since From the previous discussion, we conclude the following Proposition 3.1.5. If for some analytic domain V ⊂ X, E ∇ |V is locally constant, then it is necessarily a local system of k-vector spaces of rank µ on V and the canonical monomorphism is the maximal open disk D centered at x, not intersecting S Z (Y), and such that E ∇ is a locally constant sheaf on D, or, equivalently, such that the restriction of (E, ∇) to D is isomorphic to the trivial connection on O µ D .
We may then give the following definition.

Definition 3.1.6. Let X = X \ Z be a rig-smooth connected k-analytic curve as in 3.1.1.
Let (E, ∇) be an object of MIC(X/k), with E locally free of rank µ for the G-topology. We fix a semistable model Y of X over X as in (3.1.1). For any k-rational point x ∈ X, we define the Y-normalized radius of convergence R Y,Z (x, (E, ∇)) (or R Y,z1,...,zr (x, (E, ∇))) of (E, ∇) at x as the radius ρ ∈ (0, 1] of the maximal open disk D centered at x, contained in X and not intersecting the closed subset S Z (Y) ⊂ X, such that (E, ∇) |D is isomorphic to the trivial connection (O D , d D ) µ .
Lemma 3.1.7. Let X, X, Y, (E, ∇) be as in the previous definition, and let ψ : ′ \ Z ′ , and g : X ′ → X be the morphism induced by ψ η . Then g * (E, ∇) is an object of MIC(X ′ /k) and

Proof. This follows from the discussion in (1.8) and the definitions.

Definition 3.1.8. Let Y be a semistable model of X above X and X = X \ Z, as before. We will say that the O XG -Module E is coherent (resp. locally free) over Y if it is of the form sp * Y (E), for a coherent (resp. locally free) O Y -Module E, where sp Y : X G → Y is viewed as a morphism of G-ringed spaces. We denote by MIC Y (X/k) (resp. MIC Y (X( * Z)/k)) the full subcategory of MIC(X/k) consisting of pairs (E, ∇), where E is an O X G -Module coherent and locally free over Y (resp. and ∇ extends to a connection on E with meromorphic singularities at Z, ∇ : E → E ⊗ Ω 1 XG/k ( * Z)). We will prove the following theorem.

Theorem 3.1.9. Let (E, ∇) be an object of MIC Y (X/k). The function X(k) → R >0 , x → R Y,Z (x, (E, ∇)), extends (uniquely) to a continuous function X → R >0 .
The proof of theorem 3.1.9 will be based in fact on the construction of a function x → R Y,Z (x, (E, ∇)), at all x ∈ X, extending the definition given in this section for k-rational points x ∈ X(k). Namely, we set the following Definition 3.1.10. Let (E, ∇) be an object of MIC Y (X/k). For any x ∈ X, we consider the base-field extension ψ X,H (x)/k : and the canonical H (x)-rational point x ′ of X ′ above x ( cf. the comments following proposition 1.4.1 of [6]). For any semistable model Y of X, Let ε ∈ |k| ∩ (0, 1), and let Recall that we have assumed that all the points τ Y (z i ) are vertices of S(Y). Therefore, if P Y = P(B, C) is the semistable partition of X associated to Y, each of the open disks D Y (z i , 1 − ) is a maximal disk in an affinoid C i ∈ C, for i = 1, . . . , r. We consider the semistable partition P (ε) = P(B (ε) , C (ε) ), where, for each i, we have replaced the affinoid C i by the two affinoids C i \ D Y (z i , 1 − ) and D Y (z i , ε + ), and have added the open annulus s is the union of X s and of r projective lines ℓ is at the same time the canonical reduction of D Y (z i , ε + ), and the image of i ). We summarize the previous discussion: There is a morphism of semistable formal schemes X (ε) → Y, composite of an open immersion X (ε) → Y (ε) and of an admissible blow-up Y (ε) → Y, whose generic fiber identifies with the embedding X (ε) ⊂ X.
Let (B, T ) be an étale neighborhood of Y which is a basic formal annulus or disk. Suppose that the image B of (B, T ) = (B, T ) η in X is contained in X, and contains the point x ∈ X. Now, the canonical point x ′ ∈ X ′ above x is an interior point of the image B ′ in X
3.2 Apparent singularities and change of formal model
We describe here how the function x → R Y,Z (x, (E, ∇)) gets modified by a change of formal model Y of X or by the addition of an extra point z = z r+1 ∈ X(k) to Z = {z 1 , . . . , z r }. Notice that, if Y 1 → Y is a morphism of semistable models of X inducing the identity on X, there is a well-defined function ρ Y1/Y : X(k) → (0, 1) ∩ |k|, such that The function ρ Y1/Y is extended to all points x ∈ X by using the canonical H (x)-rational point x ′ of X ⊗ H (x) above x. The following lemma follows easily from the description 1.6.5.2 of H(X) ⊂ X.
4 Differential systems on an annulus: review of classical results
In this section k is a non-archimedean field extension of Q p . We let k alg be an algebraic closure of k.
We follow the common practice of saying that a function F , defined on a subset of R >0 and taking values in R >0 , has a certain property P logarithmically if the function log •F •exp has the property P. Christol-Dwork observe that the function ρ → R(Σ, ρ) is logarithmically concave (i.e. logarithmically ∩-shaped) on the interval [r 1 , r 2 ].
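Spelled out, logarithmic concavity of ρ → R(Σ, ρ) amounts to the following standard inequality (a reformulation of the statement above, not an additional claim): for ρ 1 , ρ 2 ∈ [r 1 , r 2 ] and λ ∈ [0, 1],

$$R\!\left(\Sigma,\ \rho_1^{\,1-\lambda}\rho_2^{\,\lambda}\right)\ \ge\ R(\Sigma,\rho_1)^{\,1-\lambda}\,R(\Sigma,\rho_2)^{\,\lambda},$$

equivalently, the function $t \mapsto \log R(\Sigma, e^{t})$ is concave on $[\log r_1, \log r_2]$.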
In the second part of loc.cit. , Christol-Dwork show that, as a consequence of their theory of the convergence polygon of a differential operator cf. §2.4 of loc.cit. , ρ → R(Σ, ρ) is also USC at r 1 and r 2 . They conclude that ρ → R(Σ, ρ) is continuous on [r 1 , r 2 ].
We recall that the system Σ is said to be solvable at t 0,r , for r ∈ [r 1 , r 2 ], if R(Σ, r) = r.
Pons [30, 2.2] proves the following theorem.

Remark 4.0.5. The simpler case of this theorem, for a system solvable at r 1 and r 2 , appears in §4 of [18]. We cannot follow the topological arguments of Pons loc. cit. We prefer to review entirely the proof of her theorem, based on [19] and [34], to make it completely clear. Actually, we need to combine the main theorem of [19, Thm. 5.4] with its variation proved by Kedlaya [29, Thm. 6.15].
In particular, if R(Σ, r)/r < |π|, then R(Σ, r)/r can only take values of the form |π|(|a|r s ) 1/j , where a ∈ k and s ∈ Z and j is an integer between 1 and µ.
Let us consider a finite étale covering ϕ : B(s 1 , s 2 ) → B(r 1 , r 2 ) which is a composite of maps of the following forms: a Kummer covering x → x N , for some N = 1, 2, . . . , where r i = s N i , i = 1, 2; a dilatation x → ax, for a ∈ k × , in which case r i = |a|s i , for i = 1, 2; and an inversion x → γx −1 , with r 1 = |γ|s −1 2 and r 2 = |γ|s −1 1 , assuming such a γ ∈ k × exists. Among Kummer coverings we have the Frobenius covering 8 of degree h (see the display below). For any ϕ as above, we may pull back the system to a similar system on B(s 1 , s 2 ), which we denote by ϕ * Σ, in an obvious way.
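For orientation, these coverings can be written out explicitly in the coordinate x; the following display is our reading of the conventions just listed (in particular, the formula for the degree-h Frobenius covering, elided in the text, is taken to be the natural Kummer map of degree p^h, consistent with the radii used in the next paragraph):

$$x \mapsto x^{N}:\ B(s_1,s_2)\to B(s_1^{N},s_2^{N})\ \ (\text{Kummer}),\qquad x \mapsto ax:\ B(s_1,s_2)\to B(|a|s_1,|a|s_2)\ \ (\text{dilatation}),$$
$$x \mapsto \gamma x^{-1}:\ B(s_1,s_2)\to B(|\gamma|s_2^{-1},|\gamma|s_1^{-1})\ \ (\text{inversion}),\qquad \varphi_h:\ x\mapsto x^{p^{h}}:\ B(r_1,r_2)\to B(r_1^{p^{h}},r_2^{p^{h}})\ \ (\text{Frobenius of degree }h).$$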
Of special interest is the pullback by Frobenius: the main theorem of [19] combined with [29, Thm. 6.15] asserts that if, for some fixed h ∈ Z >0 , a system Σ of the form (0.1.0.1) satisfies |π|^{1/p^{h−1}} < R(Σ, r)/r < |π|^{1/p^{h}} for any r ∈ (r 1 , r 2 ), then there exists a system Σ h with coefficients in H (r 1 ^{p^h} , r 2 ^{p^h} ), unique in the sense of H (r 1 ^{p^h} , r 2 ^{p^h} )/k-differential modules, such that Σ ≅ ϕ h * Σ h , where "≅" means isomorphism of H (r 1 , r 2 )/k-differential modules. Moreover R(Σ h , r^{p^h})/r^{p^h} = (R(Σ, r)/r)^{p^h}.

(Footnote 8) The semilinear version of the map ϕ 1 is often used instead. Namely, for any continuous lifting σ ∈ Aut(k) of the absolute Frobenius of k, one considers φ (σ) : B(r 1 , r 2 ) → B(r 1 ^p , r 2 ^p ) ⊗ k,σ k, which is Σ i a i T^i → Σ i a i ^σ T^{pi} at the ring level. See section 5.4 below.
so that R(Σ h , r)/r < |π| in (r p h 1 , r p h 2 ), and Young's theorem can be applied. Notice that if the graph of R(Σ h , r)/r is logarithmically affine with slope α in the interval [r p h 1 , r p h 2 ], so is R(Σ, r)/r in [r 1 , r 2 ]: So theorem 4.0.4 is proven in the case R(Σ, r)/r < 1 in [r 1 , r 2 ]. If the function R(Σ, r)/r reaches the maximum value 1 in [r 1 , r 2 ], we are in the case treated by Christol-Mebkhout, and can follow their argument [18, 4.2]. Namely, we operate as in case M = 1 above, and reduce, possibly by an inversion, to the case when R(Σ, r)/r is strictly increasing to 1 in [r 1 , r 2 ]. Then we consider the sequence h → a h ∈ [r 1 , r 2 ], described above (where r ′ 1 = r 2 ), defined for h ∈ Z ≥0 as soon as R(Σ, r 1 )/r 1 ≤ |π| 1/p h , converging to r 2 , and such that R(Σ, a h )/a h = |π| 1/p h . The function R(Σ, r)/r satisfies |π| 1/p h−1 < R(Σ, r)/r < |π| 1/p h for r ∈ (a h−1 , a h ), as soon as a h−1 (resp. b h−1 ) is defined. If R(Σ, r 1 )/r 1 < |π|, then R(Σ, r)/r < |π| in [r 1 , a 0 ). So, by the previous argument, the function R(Σ, r)/r is continuous, logarithmically concave and piecewise affine with rational slopes with denominator at most µ. But in this special case, those slopes are positive and must be decreasing as r → r 2 . The constraint on the denominator shows that there is a non negative rational number β with denominator bounded by µ, such that, for sufficiently big h, the function R(Σ, r)/r on the interval [a h−1 , a h ] is of the form C h r β . So, C h is independent of h, and, since R(Σ, r 2 )/r 2 = 1, C h = r −β 2 . So, theorem 4.0.4 is proven in every case. as an analytic differential equation on any strictly affinoid annulus of the form with its minimal semistable formal structure X. For any x ∈ X, the solution of Σ α on A 1 which takes the value 1 at (the canonical point over) x is and assume p −m−1 < d(α, Z p ) ≤ p −m , with m ∈ Z ≥0 . The radius of convergence of the power series is then is of the form predicted by corollary 4.0.8 on S(X) = S(X) and is in fact constant all over X.
5 Dwork-Robba theory over basic annuli and disks
We keep the notation and assumptions of the previous section on the field k, but the system Σ of (0.1.0.1) is supposed to be defined on a basic affinoid annulus or disk (X, T ), as in definition 1.4.1. Recall that (X, T ) is the generic fiber of a formal coordinate neighborhood (X, T ) either in a standard formal annulus of height |γ|, Spf k • {S, T }/(ST − γ), with γ ∈ k • , or in the standard formal disk Spf k • {T } over k • , and in particular that the coordinate T on X is the pull-back of the formal coordinate T on X. In this case, X has canonical strictly semistable reduction, and the corresponding strictly semistable model of X is the minimum one, and coincides with X. So, D X (x, 1 − ) = D X (x, 1 − ), for any x ∈ X \ S(X) = X \ S(X).
We denote by || − || X the supnorm on X. Let E ∼ = O µ X and let ∇ be the connection on E, whose solutions are the column solutions of the system Σ. We consider the system of ordinary linear differential equations (0.1.0.1) on X. We define G [i] , for i ∈ N, as in (3.1.4.2), for the global coordinate T on the formal annulus or disk X. If X is a disk, we set If X is an annulus, we set
5.1 The Dwork-Robba theorem
We state here a useful form of the theorem of Dwork and Robba [23, Thm. 3.1, Chap. IV]. A multivariable version of it is given in [28] and [2, 4.2].
Theorem 5.1.1. Let G [i] , for i ∈ N, and R = R(x, Σ) be as above. Then, for any n ∈ N we have the following estimate

Proof. We may assume that (X, T ) is either a standard formal annulus of height |γ|, with γ ∈ k • , or the standard formal disk over k • , cf. (1.4.1), and that X = X η . We can also assume that k is algebraically closed. We first explain the notation and the result of remark 3.2 in [23, Chap. IV]. So, let t = t a,ρ ∈ X, for a ∈ k and 0 < ρ ≤ 1, and let A a,ρ = O X (D X (a, ρ − )) be the ring of analytic functions on D X (a, ρ − ). Let A ′ a,ρ be the quotient field of the ring A a,ρ , that is, the field of meromorphic functions on D X (a, ρ − ). Notice that the k-linear derivation ∂ : f → df /dT of A a,ρ extends uniquely to a k-linear derivation ∂ of A ′ a,ρ . The boundary seminorm || || a,ρ : A ′ a,ρ → R ≥0 ∪ {∞} is defined, for any f ∈ A ′ a,ρ , as ||f || a,ρ = lim sup

The next corollary is the prototype of a transfer theorem to an ordinary contiguous disk [23, V.5].
Corollary 5.1.2. Let D be an open disk in X, and let t be its limit point in X \ D. Then R(t, Σ) = inf x∈D R(x, Σ).
Proof. Let R = inf x∈D R(x, Σ), let r be the radius of D and let If R(x, Σ) > r, for some x ∈ D, then R(y, Σ) = R(x, Σ), for all y in an open disk D ′ with D D ′ ⊂ X. The statement is obvious in this case. So, we may assume that R ≤ r. We have, for any x ∈ D and n ≥ 0, shows that R(x, Σ) ≥ R(t, Σ), for any x ∈ D.
5.2 Upper semicontinuity of x → R(x, Σ)
We proceed in our discussion of the system (0.1.0.1) on the basic affinoid annulus or disk (X, T ) to show that the function x → R(x, Σ) is USC on X. Again, the multivariable version of this discussion appears in [2, 4.3]. We define )) if X is a disk min(|T (x)|, R(x, Σ)) if X is an annulus, and will equivalently prove that the function x → R(x, Σ) is USC. We exclude the case when there exist x ∈ X and s such that G [n] (x) = 0, for any n > s, since R(y, Σ) = ∞ and, for any y ∈ X, because the sequence s → ϕ s is an increasing sequence of functions on X. Then, ∀ ε > 0, ∃ s ε such that So, R is a uniform limit of USC functions on X, and is therefore USC.
5.3 The generalized theorem of Christol-Dwork
Theorem 5.3.1. Let Σ be a system of linear differential equations on the basic affinoid annulus 9 (X, T ), as above. The function x → R(x, Σ) restricts to a continuous function on S(X).
5.4 Descent by Frobenius
We recall that, for any morphism of schemes π : Y → S where p = 0 on S, we have a canonical commutative diagram where F abs (resp. F rel ) denotes absolute (resp. relative) Frobenius, and Ψ • F rel = F abs .
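The commutative diagram in question is presumably of the following standard shape (a recollection, written here with Y^{(p)} := Y ×_{S, F_abs} S, and not a verbatim reproduction of the display in the original):

$$Y \xrightarrow{\;F_{\mathrm{rel}}\;} Y^{(p)} := Y \times_{S,\,F_{\mathrm{abs}}} S \xrightarrow{\;\Psi\;} Y, \qquad \Psi \circ F_{\mathrm{rel}} = F_{\mathrm{abs}},$$

where $F_{\mathrm{rel}}$ is a morphism over $S$, $\Psi$ lies over $F_{\mathrm{abs}} : S \to S$, and the square formed by $\Psi$, the two structural morphisms to $S$, and $F_{\mathrm{abs}}$ is cartesian.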
(Footnote 9) The case of a disk is here trivial.
Proof. Let us start with the special case of X = B(r 1 , r 2 ). In this case we know that there exists a commutative diagram We need to use a semilinear version of this map, as follows. We consider the diagram where the map Φ (σ) is defined at the ring level by (5.4.1.5) (Φ (σ) ) * : We observe that the previous diagram 5. 4 We now define π (σ) : X (σ) → Spf k as π such that Ψ * • Φ * = id A(r p 1 ,r p 2 ) . The generic fibers ϕ = Φ η : B[r 1 , r 2 ] → B[r p 1 , r p 2 ] and φ (σ) = (Φ (σ) ) η : B[r 1 , r 2 ] → B[r p 1 , r p 2 ] ⊗ k,σ k of the previous maps are both classically used in the theory of descent of differential modules by Frobenius, as we have illustrated (using the linear map) in the previous section (where ϕ = ϕ 1 ). We have the following generalization of the theorem of descent by Frobenius of Dwork and Christol. rel . Let (E, ∇) be an object of MIC X (X/k) and assume that R X (x, (E, ∇)) > p − 1 p−1 , ∀x ∈ X. Then, there exists a unique object (F , ∇) of MIC X (σ) (X (σ) /k), such that (E, ∇) = (φ (σ) ) * (F , ∇). For any x ∈ X, we have Proof. The construction of (F , ∇) is the same, mutatis mutandis as the one of Christol-Dwork [19, 4.3] and of [23, V.7]. It is important however to use the coordinate-free presentation of Kedlaya [29, 6.3], that eliminates the problem of apparent singularities, which causes so much technical complication in [19] and [23]. We do not give details here. Notice however that one may assume, without loss of generality, that the field k contains the p-th roots of unity. The group µ p acts on B(r 1 , r 2 ) via ζ → τ ζ , where τ ζ : (S, T, U ) → (ζS, ζT, ζ −1 U ) and the map Ψ * of formula 5.4.1.8 is The quotient map B(r 1 , r 2 ) → B(r 1 , r 2 )/µ p identifies with Φ in (5.4.1.1). The action of µ p on B(r 1 , r 2 ) uniquely lifts to an action on X and on X (σ) . The quotient map X → X (σ) /µ p identifies then with F (σ) rel in (5.4.1.2). We may now follow the method of Kedlaya loc.cit. to conclude.
6 Continuity, log-piecewise linearity and log-concavity of x → R Y,Z (x, (E, ∇))

We use here the notation of definitions 3.1.6 and 3.1.8. The assumption that k is algebraically closed may now be relaxed. We assume that k is a non-trivially valued non-archimedean extension field of Q p , X = Y η for a semistable formal scheme Y, and Z is a finite étale closed k • -formal subscheme of Y, supported in the smooth locus of Y. The generic fiber Z of Z is supposed to consist of r distinct k-rational points {z 1 , . . . , z r } ⊂ X(k), and X = X \ Z.

Proof. We have already shown in lemma 3.1.11 that the statement reduces to the case when there are no z i 's, i.e. X = X is compact. We then set Y = X for the given semistable model of X. For any non-archimedean field extension k ′ /k and étale morphism of semistable k ′• -formal schemes ψ : Y → X ⊗k ′• , the assignment So, the same holds true for R(x, Σ). Conditions (3) and (4) have been proved together in section 5.3. Condition (5) was proven in section 5.2.
This proves the theorem, too.
Remark 6.0.5. In [2, 5.3] we gave a direct proof of this statement, in the case of a system (i.e. for E free) over an affinoid domain of A 1 .
Corollary 6.0.6. Assume Y is strictly semistable, and let µ be the rank of E on X = X \ Z.
Let E be an open edge of S Z (Y) and let E be the closure of E in X. We have the following possibilities: | 2010-03-28T20:07:21.000Z | 2008-09-15T00:00:00.000 | {
"year": 2010,
"sha1": "1377ce9f22ede98c45b1a117aa39ea314108c9d7",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0809.2479",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1377ce9f22ede98c45b1a117aa39ea314108c9d7",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
267031257 | pes2o/s2orc | v3-fos-license | Comparison between high-flow nasal cannula and conventional oxygen therapy in COVID-19 patients: a systematic review and meta-analysis
Background: High-flow nasal cannula (HFNC) and conventional oxygen therapy (COT) are important respiratory support strategies for acute hypoxemic respiratory failure (AHRF) in coronavirus disease 2019 (COVID-19) patients. However, the results are conflicting for the risk of intubation with HFNC as compared to COT. Objectives: We systematically synthesized the outcomes of HFNC relative to COT in COVID-19 patients with AHRF and evaluated these outcomes in relevant subpopulations. Design: This study was designed in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Data sources and methods: We searched PubMed, EMBASE, Web of Science, Scopus, ClinicalTrials.gov, medRxiv, BioRxiv, and the Cochrane Central Register of Controlled Trials for randomized controlled trials and observational studies that compared the efficacy of HFNC with COT in patients with COVID-19-related AHRF. Primary outcomes were intubation rate and mortality rate. Secondary outcomes were the ratio of arterial oxygen partial pressure to fractional inspired oxygen (PaO2/FiO2), respiratory rate, hospital length of stay, intensive care unit (ICU) length of stay, and days free from invasive mechanical ventilation. Results: In total, 20 studies with 5732 patients were included. We found a decreased risk of requiring intubation in HFNC compared to COT [odds ratio (OR) = 0.61, 95% confidence interval (CI): 0.46–0.82, p = 0.0009, I2 = 75%]. Similarly, we found HFNC was associated with lower risk of intubation rate compared to COT in the subgroup of patients with baseline PaO2/FiO2 < 200 mmHg (OR = 0.69, 95% CI: 0.55–0.86, p = 0.0007, I2 = 45%), and who were in ICU settings at enrollment (OR = 0.57, 95% CI: 0.38–0.85, p = 0.005, I2 = 80%). HFNC was associated with an improvement of PaO2/FiO2 and respiratory rate compared to COT. The use of HFNC compared to COT did not reduce the mortality rate, days free from invasive mechanical ventilation, hospital length of stay, or ICU length of stay. Conclusion: Compared to COT, HFNC may decrease the need for tracheal intubation in patients with COVID-19-related AHRF, particularly among patients with baseline PaO2/FiO2 < 200 mmHg and those in ICU settings. Trial registration: This systematic review and meta-analysis protocol was prospectively registered with PROSPERO (no. CRD42022339072).
Search strategy
We conducted a systematic review in accordance with the methods recommended in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. 28 We searched PubMed, EMBASE, Web of Science, Scopus, ClinicalTrials.gov, medRxiv, BioRxiv, and the Cochrane Central Register of Controlled Trials, with the detailed search strategy available in the appendix (Supplemental Appendix, p. 2). The language of publication was restricted to English, and the geographical location was not limited.
Study selection
We included all RCTs and observational studies that met the following criteria: (1) population: adult patients (⩾16 years old) confirmed to have COVID-19 by polymerase chain reaction; (2) intervention: HFNC compared with COT. COT stands for conventional oxygen therapy, which includes nasal cannula, Venturi mask, simple face mask, non-rebreather mask (NRM), rebreather mask, or a combination of them; and (3) predefined outcomes: intubation rate, mortality rate, or both. To evaluate the effects of HFNC and COT on the intubation risk for adult patients with COVID-19, we excluded studies where (1) patients did not meet the screening criteria, (2) the reports were not published in English or were commentaries, reviews, or duplicate publications from the same study, or (3) data could not be extracted by the reported statistical methods or non-targeted outcomes.
Outcomes and definition
The primary outcome was the intubation rate (28 days or in-hospital), and the secondary outcomes included mortality rate (28 days or in-hospital), PaO2/FiO2, respiratory rate, hospital length of stay, intensive care unit (ICU) length of stay, and days free from invasive mechanical ventilation. We also performed two post hoc subgroup analyses comparing COVID-19-related AHRF patients with baseline PaO2/FiO2 < 200 mmHg versus those with ⩾200 mmHg, and those admitted to the ICU versus those admitted to non-ICU settings (ward and emergency department). In addition, we performed subgroup analyses of mortality at 28, 60, and 90 days.
Data extraction and study quality
Two authors (WT and JCW) extracted data independently using a predefined data extraction form, and a third author (YP) resolved disagreements. A case record form designed for the purposes of the study was used to record information on the following variables: enrollment location, follow-up days, age, sex, body mass index, Acute Physiology and Chronic Health Evaluation II score, Sequential Organ Failure Assessment score, baseline PaO2/FiO2, baseline physiological parameters, comorbidities, laboratory parameters, details of intervention, time from symptom onset to enrollment, duration of oxygen therapy, treatment (prone positioning, use of dexamethasone or IL-6 inhibitors), and the following outcomes: the intubation rate, mortality rate, PaO2/FiO2 and respiratory rate at 4-6 and 24 h after oxygen therapy, hospital length of stay, ICU length of stay, and days free from invasive mechanical ventilation. Data from each study were tabulated and cross-checked before inclusion in the analysis.
They independently assessed risk of bias using the Cochrane collaboration risk of bias tool for RCTs, 29 which considers allocation sequence generation, concealment of allocation, masking of participants and investigators, incomplete outcome reporting, selective outcome reporting, and other sources of bias. The Newcastle-Ottawa Scale (NOS) was used to assess the risk of bias in observational studies. 30 In observational studies, scores of 0-4, 5-7, and 8-9 were judged as low, moderate, and high quality, respectively. Full details are provided in the Supplemental Appendix.
Data synthesis and analysis
All analyses were performed in RevMan 5.4.1 (Nordic Cochrane Review Centre, Copenhagen, Denmark) software and STATA 16 (StataCorp, College Station, TX, USA). Categorical outcomes were extracted as the number of patients with each outcome and the total number of patients in each group (denominator). Continuous outcomes were extracted as the sample sizes and means (standard deviations) or medians (interquartile ranges) provided in the studies. When required, medians and interquartile ranges were converted to means and standard deviations for the purpose of the meta-analysis. 31 The DerSimonian and Laird random effects model was used to pool the weighted effect of estimates across all studies. 32 The Mantel-Haenszel method was used to estimate study weights for categorical outcomes and inverse variance for continuous outcomes. For categorical outcomes, we calculated odds ratios (ORs) and 95% confidence intervals (CIs). For continuous outcomes, mean differences (MDs) and 95% CIs were calculated.
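For readers unfamiliar with this pooling approach, the following sketch illustrates a DerSimonian-Laird random-effects meta-analysis of odds ratios in Python. It is not the authors' RevMan/STATA code: it uses inverse-variance rather than Mantel-Haenszel weights as a simplification, and the study counts are hypothetical placeholders rather than data from the included studies.

```python
import numpy as np

# events/total in the HFNC and COT arms of three hypothetical studies
studies = [(30, 100, 45, 100), (25, 120, 40, 115), (18, 80, 22, 78)]

log_or, var = [], []
for a, n1, c, n2 in studies:            # a = HFNC events, c = COT events
    b, d = n1 - a, n2 - c
    log_or.append(np.log((a * d) / (b * c)))
    var.append(1 / a + 1 / b + 1 / c + 1 / d)   # Woolf variance of the log-OR
log_or, var = np.array(log_or), np.array(var)

# DerSimonian-Laird estimate of the between-study variance (tau^2)
w = 1 / var
q = np.sum(w * (log_or - np.average(log_or, weights=w)) ** 2)
df = len(studies) - 1
tau2 = max(0.0, (q - df) / (w.sum() - (w ** 2).sum() / w.sum()))

# random-effects pooled OR, 95% CI, and I^2
w_re = 1 / (var + tau2)
pooled = np.average(log_or, weights=w_re)
se = np.sqrt(1 / w_re.sum())
lo, hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
print(f"OR = {np.exp(pooled):.2f}, 95% CI {lo:.2f}-{hi:.2f}, I2 = {i2:.0f}%")
```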
Discussion
In this systematic review and meta-analysis, we found that HFNC was associated with a lower intubation rate than COT in patients who were hospitalized for COVID-19, in particular among patients whose baseline PaO2/FiO2 was < 200 mmHg and those who were in an ICU setting at enrollment. It also showed an improvement in oxygenation at 4-6 and 24 h, and in respiratory rate at 4-6 h, in the HFNC group compared to COT. There were no significant differences in mortality rate, days free from invasive mechanical ventilation, hospital length of stay, or ICU length of stay between the HFNC and COT groups.
As with several meta-analyses in non-COVID-19 patients with AHRF, 37,38 we found that use of HFNC was associated with a reduction in intubation rate relative to COT in COVID-19 patients with AHRF. This aligns with a previous study that enrolled 782 patients with COVID-19-related AHRF and baseline PaO2/FiO2 < 200 mmHg, all of whom were in an ICU setting at enrollment. 11 A post hoc subgroup analysis of non-COVID-19 patients with AHRF also found that the intubation rate was lower in patients with a PaO2/FiO2 < 200 mmHg treated with HFNC relative to those treated with NIV or COT. 39 For these reasons, we performed a subgroup analysis of patients with COVID-19 with baseline PaO2/FiO2 < 200 mmHg and treatment in an ICU and found that use of HFNC significantly decreased the need for mechanical ventilation support compared to COT. The certainty of evidence for these outcomes was moderate or high; however, because only a small number of RCTs have reported separate outcome data for these subgroups, more RCTs are needed in the future.
The findings of the subgroup analysis might have several explanations. First, patients with baseline PaO2/FiO2 < 200 mmHg are more likely to progress to invasive mechanical ventilation. In addition, lower PaO2/FiO2 is associated with extensive areas of low ventilation-perfusion matching in COVID-19. 40 The use of HFNC in these patients may be associated with an increase in end-expiratory lung volume and alveolar recruitment and a more homogeneous distribution of lung aeration relative to COT, which leads to better ventilation-perfusion matching and better oxygenation. 41 Therefore, we found that HFNC improves oxygenation and respiratory rate and may further reduce the intubation rate relative to COT. Second, experimental data suggest that patients with PaO2/FiO2 < 200 mmHg may represent the population most at risk of patient self-inflicted lung injury (P-SILI). 42 High flows of HFNC can more adequately match patients' respiratory flow demands, thereby decreasing the risk for P-SILI, 43,44 which may have important implications for the management of the heterogeneous pulmonary manifestations of patients with COVID-19. 45 This effect can be obtained starting from a minimum gas flow of 30 L/min and increases with higher HFNC flows. 46 However, subgroup analysis based on different HFNC settings requires further study. It is worth noting that excessive delay of invasive mechanical ventilation may also induce P-SILI. Because the HFNC treatment failure rate was highest among patients with low PaO2/FiO2 ratios, the optimal timing of intubation for those in whom HFNC support fails should be determined, for example by following the evolution of the respiratory oxygenation index (ROX index).
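As a point of reference, the ROX index mentioned above is commonly computed as (SpO2/FiO2) divided by the respiratory rate. The short sketch below illustrates the calculation; the patient values and the 4.88 cut-off (often cited for assessment at around 12 h of HFNC) are illustrative and are not results from this review.

```python
def rox_index(spo2_percent: float, fio2_fraction: float, resp_rate: float) -> float:
    """ROX = (SpO2 / FiO2) / respiratory rate; FiO2 given as a fraction (0.21-1.0)."""
    return (spo2_percent / fio2_fraction) / resp_rate

# hypothetical patient on HFNC: SpO2 92% on FiO2 0.60 at 28 breaths/min
rox = rox_index(92, 0.60, 28)
threshold = 4.88  # commonly cited cut-off; values below it suggest higher risk of HFNC failure
print(f"ROX = {rox:.2f} ({'lower' if rox >= threshold else 'higher'} risk of HFNC failure)")
```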
ICU admission and lower PaO2/FiO2 are directly correlated, and the delivery of high flow allows matching of patients' inspiratory peak flow, even in patients generating high inspiratory peak flow, avoiding dilution with room air and thus providing higher FiO2. 41 Patients who require ICU admission are also sicker, and their increased, vigorous inspiratory effort can contribute to lung injury. As discussed above, high flows of HFNC can more adequately match patients' respiratory flow demands, thereby reducing the work of breathing and mitigating the inspiratory effort, as well as decreasing the risk for lung injury. 43,44 Furthermore, ICU patients also experience more respiratory distress; the heat and humidification delivered by HFNC help to maintain hydration and mobilize secretions, which benefits patients with mucus hypersecretion. It was not possible to conduct a meta-analysis of adverse events because they were variably reported across studies. However, patients receiving HFNC did not seem to have an increased risk of adverse events compared to those receiving COT. HFNC and COT are both less invasive than invasive mechanical ventilation, and the adverse events associated with HFNC and COT are usually self-limiting.
There were several limitations to this study. First, the results were based on both RCT and non-RCT studies. For all RCTs, most domains were evaluated to have low risk of bias, but performance bias was high due to the impossibility of blinding among patients and medical staff; all non-RCT studies were of medium-level quality or above, and the certainty of evidence for most outcomes was moderate or high, which may have led to an underestimation of the heterogeneity and reduced the precision of our pooled effect estimate. Thus, large-sample RCTs are necessary to confirm our findings. Second, there were crossovers between groups in some studies, whereas crossovers were not allowed in others. Third, the DNI (do-not-intubate) policy and other treatments (Supplemental Table S4, Supplemental Appendix, pp. 13-14), such as differences in the amount of prone positioning and the use of dexamethasone or IL-6 inhibitors, also varied between groups, which further increases heterogeneity.
In addition, only a few studies have reported accurate settings of HFNC, and this information might not have been recorded very precisely, as it was observed and recorded unsystematically. Furthermore, the comparison might be biased because COT is defined quite broadly: a comparison with a 4 L/min nasal cannula is different from a comparison with an NRM. Therefore, we were not able to assess the effect of different daily treatment durations and flow settings of HFNC on patient outcomes.
Conclusion
In conclusion, this systematic review and meta-analysis found that in patients who were hospitalized for COVID-19, HFNC was associated with a lower intubation rate than COT, and this association may be more pronounced among patients whose baseline PaO2/FiO2 was < 200 mmHg and those who were in an ICU setting at enrollment. The evidence also shows improvement in oxygenation and respiratory rate in the HFNC group relative to COT. However, our meta-analyses do not demonstrate a benefit in the mortality rate, days free from invasive mechanical ventilation, hospital length of stay, or ICU length of stay in the HFNC group.
Declarations
Ethics approval and consent to participate Not applicable.
Two authors (WT and JCW) searched PubMed, EMBASE, Web of Science, Scopus, ClinicalTrials.gov, medRxiv, BioRxiv, and the Cochrane Central Register of Controlled Trials for eligible studies from 1 January 2020 to 1 October 2022.
Table 1.
Characteristics of included studies. | 2024-01-19T06:17:43.280Z | 2024-01-01T00:00:00.000 | {
"year": 2024,
"sha1": "63520c183069a09c0761d842dc786e96921720fe",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/17534666231225323",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8679ce433c1cfff5360b7649d46fc00804288666",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3282892 | pes2o/s2orc | v3-fos-license | Text4baby Influenza Messaging and Influenza Vaccination Among Pregnant Women
Introduction Pregnant women are at risk for severe influenza-related complications; however, only 52% reported receiving an influenza vaccination during the 2013–2014 influenza season. Text4baby, a free national text service, provides influenza vaccination education and reminders to pregnant women. This study examined reported influenza vaccination during pregnancy among Text4baby participants who reported receiving influenza messages and women who reported never participating in Text4baby. Methods Opt-in Internet Panel Surveys (April 2013 and 2014) of pregnant women collected demographic and other characteristics; influenza vaccination knowledge, attitudes, and behaviors; and Text4baby participation. Women aged 18–49 years, pregnant anytime from October to January (N=3,321) were included. Text4baby influenza message recallers reported receiving Text4baby influenza messages during their current/most recent pregnancy (n=377). Text4baby non-participants reported never receiving Text4baby messages (n=2,824). Multivariable logistic regression was performed (2014–2016) controlling for demographic and other characteristics, high-risk conditions, and provider recommendation and offer to vaccinate. Adjusted prevalence ratios (APRs) were calculated. Random sampling was assumed for this non-probability sample. Results Text4baby recallers were more likely than non-participants to report influenza vaccination regardless of receipt of provider recommendation and/or offer to vaccinate (provider recommendation/ offer APR=1.29, 95% CI=1.21, 1.37, provider recommendation/no offer APR=1.52, 95% CI=1.07, 2.17). Among women receiving neither a provider recommendation nor offer to vaccinate, Text4baby recallers were more than three times as likely to report influenza vaccination compared with non-participants (APR=3.39, 95% CI=2.03, 5.67). Conclusions Text4baby status was associated with higher influenza vaccination, especially among women whose provider did not recommend or offer the vaccine. Encouraging Text4baby enrollment may help ensure influenza vaccination is given to protect mothers and infants.
INTRODUCTION
Pregnant women are at high risk for developing severe influenza-related complications, including secondary pneumonia, acute respiratory insufficiency, premature labor, and death as a result of a shift from cell-mediated immunity to humoral immunity while pregnant. 1 Influenza vaccination is the best way to protect women during pregnancy and the postpartum period, and provides infants, another group at high risk for influenza-related complications, with protective immunity. 2 Maternal influenza immunity protects the infant from infection through the transfer of maternal antibodies via the placenta and breast milk and contributes to a "cocooning" protective environment for the infant. 3 The Centers for Disease Control and Prevention (CDC) recommends all women who are or will be pregnant during an influenza season be vaccinated to protect them and their infants from influenza; however, only 52% received the vaccination during the 2013-2014 influenza season. 2,4 A provider's recommendation with or without an offer to vaccinate increases self-reported vaccination rates among pregnant women. 4 Reminder/recall systems have been shown to improve vaccination coverage 5-7 ; texting has been used to deliver reminders and education because of its prevalent use and popularity among minorities and people with lower income and education levels. 8 Text4baby is a free mobile health (mHealth) service for pregnant women and mothers with infants aged <1 year that sends three weekly texts with health content timed to a woman's due date or her infant's birthday. Text4baby educates women about important health issues, encourages contact with providers, and promotes healthy behaviors. More than 1,400 partners nationwide promote the service and major medical associations share the service as a tool for their members. Women enroll in various ways, including by text, online, via the Text4baby mobile app, and directly via health plans and Medicaid agencies. Text4baby content is developed in accordance with established patient care guidelines and is kept current by the ongoing involvement of a Content Development Council comprising leading national medical health organizations and federal partners. 9 Text4baby identified maternal influenza vaccination as a critical issue to target and implemented seasonal modules of messages encouraging influenza vaccination. The 2012-2013 module included two components: (1) education tailored to participant-reported reasons for non-vaccination, and (2) an opportunity to schedule a text reminder to get vaccinated. Details on the design, content, and evaluation of the 2012-2013 module are published elsewhere. 10 The 2013-2014 module included information on low-cost influenza vaccination and a separate vaccination reminder. It also included two new components: (1) a coupon offer for a free influenza vaccination for mothers through a partnership with Rite Aid, and (2) additional education about influenza vaccination for infants sent to mothers with infants aged >6 months during influenza season.
The objective of this study is to compare self-reported influenza vaccination coverage during pregnancy among Text4baby participants who reported they received Text4baby influenza messages and women who reported that they never participated in Text4baby.
Study Sample
The data sources for this study were two Internet Panel Surveys conducted by CDC targeting pregnant women aged 18-49 years to collect information on influenza vaccination, demographic characteristics, access to care during pregnancy, and knowledge, attitudes, and behaviors regarding influenza vaccination. Since the 2010-2011 influenza season, CDC has conducted this survey in early April for end-of-season influenza vaccination estimates. 11 Survey data from April 2013 and 2014 were used for this study.
Measures
Women aged 18-49 years who were pregnant anytime from August 2012 through early April 2013 and from August 2013 through early April 2014 were recruited from SurveySpot, an opt-in general population internet panel operated by Survey Sampling International. Pregnant women were primarily recruited through a message advertising the survey on the main panel websites, inviting panelists to view the survey eligibility questions, and by sending an email invitation to a sample of panelists whose profiles indicated that they were women aged 18-49 years living in the U.S. A total of 2,047 eligible women completed the April 2013 survey and 2,042 completed the April 2014 survey, with completion rates of 93% and 96%, respectively. For this study, the sample was restricted to women who were pregnant anytime during the usual peak influenza vaccination period, from October 2012 through January 2013 for the April 2013 survey and from October 2013 through January 2014 for the April 2014 survey (1,702 from April 2013 and 1,619 from April 2014; N=3,321). To develop statistical measures for this analysis, random sampling was assumed in this non-probability sample. A non-probability sample was used, given that surveys of rare populations, such as pregnant women, can be time-consuming and costly and few national surveys collect information about receipt of influenza vaccination. For each year, the final sample was weighted through post-stratification weighting to represent the age group, race/ethnicity, and geographic distribution of the U.S. population of pregnant women based on data from National Vital Statistics Reports by the National Center for Health Statistics and the Guttmacher Institute, 1990-2008. a,12,13 The April 2013 and 2014 surveys included Text4baby questions about receipt of Text4baby messages and about the helpfulness of the influenza messages (Figure 1). The primary outcome of this study was self-reported influenza vaccination coverage, defined as a vaccination received before and during pregnancy since July (July 2012 for the April 2013 survey and July 2013 for the April 2014 survey).
a The total population of pregnant women in the U.S. in each year and the distribution of pregnant women by age and race/ethnicity groups were determined from reported data published in the National Vital Statistics Reports by the National Center for Health Statistics in June 2012, and included combined data on live births from birth certificate information, data on spontaneous abortions from the National Survey of Family Growth, and data on induced abortions from CDC's Abortion Surveillance System. The distribution of U.S. pregnant women aged 18-44 years by Census region in 2008 was determined based on estimates provided for each state in the Guttmacher Institute's state data center, and included pregnancies that ended in live births and spontaneous and induced abortions.
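To make the weighting step concrete, the sketch below shows one simple form of post-stratification in Python: each cell's weight is the ratio of its population share to its sample share. The cells, counts, and population targets are invented for illustration (the survey also stratified by Census region), so this is only a schematic of the approach, not the study's actual weighting.

```python
import pandas as pd

# hypothetical respondent counts by age group x race/ethnicity cell
sample = pd.DataFrame({
    "age_group": ["18-24", "18-24", "25-34", "25-34"],
    "race_eth":  ["NH white", "Hispanic", "NH white", "Hispanic"],
    "n_respondents": [150, 90, 420, 210],
})
# hypothetical population shares of pregnant women in the same cells
sample["pop_share"] = [0.15, 0.10, 0.50, 0.25]

sample["sample_share"] = sample["n_respondents"] / sample["n_respondents"].sum()
sample["weight"] = sample["pop_share"] / sample["sample_share"]  # post-stratification weight
print(sample[["age_group", "race_eth", "weight"]].round(2))
```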
Survey respondents who reported being pregnant anytime during October through January were grouped as follows: (1) "Text4baby influenza message recallers" were women who reported they received Text4baby influenza messages during their current or most recent pregnancy (those who responded yes to Text4baby Questions 1 and 3), and (2) "non-participants" were women who reported they did not receive any Text4baby messages (those who responded no to Text4baby Question 1; Figure 1). Current and former Text4baby enrollees who reported they did not receive influenza messaging during their current or most recent pregnancy (those who responded yes to Text4baby Question 1, yes [current enrollee] or no [former enrollee] to Question 2, and no to Text4baby Question 3) were excluded, given that their past exposure to influenza messaging sent via the Text4baby platform could not be determined (Figure 1).
Statistical Analysis
Differences in characteristics between Text4baby influenza message recallers and non-participants were tested using chi-square tests. The difference in vaccination coverage by demographic and access to care characteristics, high-risk conditions, Text4baby status, and provider recommendation and offer to vaccinate was assessed in a bivariate logistic regression model. To examine whether Text4baby status was independently associated with influenza vaccination coverage, weighted multivariable logistic regression analyses were performed controlling for demographic and access to care characteristics and high-risk conditions. Variables for inclusion were decided a priori based on factors previously reported to be associated with influenza vaccination. Year of the survey was included in the initial model to control for differences in Text4baby influenza messaging between seasons. The interaction between provider recommendation/offer and Text4baby status on vaccination coverage was tested. All analyses were conducted in 2014-2016 using SAS, version 9.3 survey procedures and SAS-callable SUDAAN, version 11.1. Crude and adjusted prevalence ratios (CPRs and APRs) and 95% CIs were estimated using predicted marginal proportions. Respondents gave informed consent to participate at the time of admission to the SurveySpot panel. The surveys were determined to be non-research by CDC and Abt Associates.
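As a rough illustration of this modelling strategy (and not the authors' SAS/SUDAAN code), the Python sketch below fits a weighted logistic regression with a provider recommendation/offer-by-Text4baby interaction on simulated data and derives an adjusted prevalence ratio from predicted marginal proportions. All variable names, coefficients, and data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "text4baby": rng.integers(0, 2, n),
    "provider": rng.choice(["rec_offer", "rec_only", "neither"], n),
    "age_group": rng.choice(["18-24", "25-34", "35-49"], n),
    "weight": rng.uniform(0.5, 2.0, n),
})
# simulate vaccination with made-up effects of Text4baby and provider behavior
logit = -1.8 + 0.8 * df["text4baby"] + 1.5 * (df["provider"] == "rec_offer")
df["vaccinated"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.glm(
    "vaccinated ~ text4baby * C(provider) + C(age_group)",
    data=df,
    family=sm.families.Binomial(),
    freq_weights=np.asarray(df["weight"]),
).fit()

def marginal_prevalence(fit, data, t4b):
    """Weighted average predicted probability with Text4baby status set for everyone."""
    counterfactual = data.assign(text4baby=t4b)
    return np.average(fit.predict(counterfactual), weights=data["weight"])

subgroup = df[df["provider"] == "neither"]
apr = marginal_prevalence(model, subgroup, 1) / marginal_prevalence(model, subgroup, 0)
print(f"Adjusted prevalence ratio in the 'neither recommended nor offered' stratum: {apr:.2f}")
```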
RESULTS
Among eligible women who completed the April 2013 or 2014 survey and were pregnant anytime from October through January (N=3,321), 497 (15.0%) reported they were current (378) or former (119) Text4baby enrollees (Figure 1). Most current enrollees (327 [86.5%]) and slightly less than half of former enrollees (50 [42.0%]) reported they received influenza messages from Text4baby during their current or most recent pregnancy, for a total of 377 Text4baby influenza message recallers (Figure 1). The 120 Text4baby enrollees who reported they did not receive Text4baby influenza messages during their current or most recent pregnancy were excluded (Figure 1); excluded Text4baby enrollees were more likely to report public insurance than Text4baby recallers (53.3% of excluded enrollees reported public insurance vs 42.6% of Text4baby recallers).
Crude influenza vaccination coverage and the bivariate and multivariable associations between vaccination coverage and participant characteristics are shown in Table 2. Demographic and access to care characteristics and high-risk conditions were included in the multivariable model to control for possible confounding. Provider recommendation and offer, and an interaction term between provider recommendation/offer and Text4baby status, were included to examine differences by strata. Survey year was not associated with vaccination status and was not included in the model.
Women in the following groups had higher crude influenza vaccination rates: those living in the Northeast (versus South), aged 25-49 years (vs 18-24 years), those who were college educated or greater (versus less than a college education), married (versus not married), living at or above the poverty threshold (versus below poverty), currently working (versus not working), who reported a high-risk condition (versus no high-risk condition), and who reported six or more provider visits (versus zero to five visits). Non-Hispanic black women had lower crude influenza vaccination rates than non-Hispanic white women. Rates of vaccination were lowest for those who received neither a recommendation nor an offer (13.5%), higher for those who received a recommendation but no offer (39.5%; CPR=2.92, 95% CI=2.33, 3.66), and highest for those who received a provider recommendation and offer (70.5%; CPR=5.21, 95% CI=4.28, 6.35). Influenza vaccination coverage for Text4baby recallers was 81.3% compared with 47.1% for non-participants (CPR=1.73, 95% CI=1.61, 1.85; Table 2).
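To illustrate how a crude prevalence ratio and its log-scale confidence interval are obtained, the sketch below back-calculates approximate cell counts from the group sizes and coverage percentages reported above (377 recallers at roughly 81.3% and 2,824 non-participants at roughly 47.1%). Because the published estimates are weighted, these unweighted counts are only an approximation for illustration.

```python
import math

# approximate back-calculated counts: vaccinated / total in each group
a, n1 = 307, 377    # Text4baby influenza message recallers
c, n2 = 1330, 2824  # non-participants

pr = (a / n1) / (c / n2)                              # crude prevalence ratio
se_log = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)   # SE of ln(PR)
lo = math.exp(math.log(pr) - 1.96 * se_log)
hi = math.exp(math.log(pr) + 1.96 * se_log)
print(f"CPR = {pr:.2f} (95% CI {lo:.2f}-{hi:.2f})")   # close to the reported 1.73 (1.61-1.85)
```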
The test for the interaction between provider recommendation and/or offer and Text4baby status in the multivariable model was significant (p < 0.01). For women who reported that their provider recommended and offered the vaccination (n=1,865), Text4baby recallers were more likely to report influenza vaccination than non-participants (APR=1.29, 95% CI=1.21, 1.37). Similarly, among those who received a provider recommendation but no offer to vaccinate (n=493), Text4baby recallers were more likely to report influenza vaccination (APR=1.52, 95% CI=1.07, 2.17). Finally, among those who received neither a provider recommendation nor offer to vaccinate (n=755), Text4baby recallers were more than three times as likely to report receipt of influenza vaccination (APR=3.39, 95% CI=2.03, 5.67).
DISCUSSION
In this study, Text4baby participants who reported receiving Text4baby influenza messages were more likely than non-participants to report influenza vaccination, and the effect was strongest among those who received neither a provider recommendation nor an offer to vaccinate. This finding supports the potential of Text4baby to improve influenza vaccination coverage among a group with historically low vaccination coverage, those who receive neither a provider recommendation nor offer to vaccinate. 4 Text4baby and Text4baby partners implement national and community-based campaigns to reach and enroll women who may not be connected to the healthcare system. Text4baby can serve as a reminder system that providers can offer to further encourage influenza vaccination in addition to providing a recommendation and offer to be vaccinated. More than three quarters of Text4baby influenza message recallers reported that Text4baby influenza messages helped them make a decision about vaccination and reminded them to be vaccinated. Given these results, it is possible that the positive association between Text4baby status and vaccination may be attributed specifically to Text4baby influenza modules. Findings from this study are consistent with two RCTs that found a positive association between text-based influenza messaging and documented influenza vaccination among children, adolescents, and pregnant women. 6,7 Findings are also consistent with a federally funded evaluation that found Text4baby pregnant participants were significantly more likely to report influenza vaccination compared with participants who had never heard of Text4baby. 14 An evaluation of the 2012-2013 Text4baby influenza module found text reminders and information on low-cost influenza vaccination effective at improving reported influenza vaccination among Text4baby mothers. 10 Finally, findings are consistent with qualitative research that suggests multi-component approaches, including positively framed, tailored messages that highlight vaccination benefits for pregnant women and their children (all of which were incorporated as part of the Text4baby influenza modules), may lead to increased vaccination. 15,16 To the authors' knowledge, this study was the first to use a sample of women from across the U.S. to examine the association between reported receipt of specific text messages within a texting intervention and a preventive health recommendation, and more specifically, the first to use a sample of pregnant women from across the U.S. to assess the association between receipt of Text4baby messages and a preventive health recommendation. The approach taken to examine a texting intervention by means of an existing survey allowed for quick assessment and serves as a model for other mHealth interventions in need of timely evaluation given the rapid evolution of technology and survey mechanisms. Future mHealth evaluations should consider controlling for health consciousness when using an external control group and assessing the number and specific content of messages and different incentives, such as the coupon offer for a free influenza vaccination that resulted in a 1.7% redemption rate during the 2013-2014 flu season, which could lead to improved outcomes. 17 The use of non-probability sampling for public health evaluations should also be further assessed, particularly for evaluations involving rare populations.
Limitations
This study has limitations. First, all data, including vaccination status, were self-reported and not independently validated. Second, the association between Text4baby status and vaccination may be biased if women who enroll in Text4baby are more likely to be health conscious and therefore more likely to get vaccinated or if they have other demographic characteristics or health-seeking behaviors not measured in this study that are associated with increased vaccination. However, findings from the aforementioned Text4baby evaluation show no significant differences in health information seeking, referenced as a key dimension of health consciousness, between Text4baby participants and other prenatal patients who heard of Text4baby but decided not to sign up. 14,18 Third, this study reports an association between Text4baby status and influenza vaccination among a sample of volunteer members of a non-probability Internet panel. 19 Because the sample was not randomly selected, estimates of sampling error are usually not considered valid and not computed. 20 Statistical measures of association were computed as a guide to assess the value of Text4baby on uptake of influenza vaccination. Population-based surveys of a rare population, such as pregnant women, are time-consuming and potentially costly. The Internet Panel Surveys are the only national surveys that collect information about receipt of influenza vaccination and vaccine-related knowledge, attitudes, and behaviors. The estimates of vaccination coverage may be biased if the selection processes for entry into the survey and a woman's decision to participate were related to receipt of vaccination. To reduce bias, data were weighted to be more representative of the U.S. population of pregnant women. Additionally, comparisons to influenza vaccination coverage estimates among pregnant women from population-based surveys such as the Behavioral Risk Factor Surveillance System have shown that, whereas Internet Panel Survey estimates are consistently higher, trends in coverage are similar. 21 Finally, it is possible that the magnitude of association between Text4baby status and vaccination could be overstated among women who received neither a provider recommendation nor an offer for vaccination if Text4baby messages encouraged early vaccination and providers assessed vaccination status before providing a recommendation or offer. The magnitude of the association could also be overstated if Text4baby enrollees who truly received the influenza messages, but did not remember receiving them, were excluded and if this group of enrollees was less likely to be vaccinated than Text4baby influenza message recallers.
CONCLUSIONS
This study suggests Text4baby participants who reported receiving Text4baby influenza messages might be more likely to report influenza vaccination than non-participants, even among women whose provider already recommends and offers the vaccine. Text4baby participants in this study were more likely to report influenza vaccination among the more vulnerable group of women whose providers do not recommend or offer the vaccine. Text4baby is an example of an evidence-based intervention, a reminder system, designed to increase vaccination and enhance care. 5 In recent years, the American College of Obstetricians and Gynecologists has included information on Text4baby in a mailing to providers with resources on influenza for patients and families. Study findings support the need for continued efforts not only to encourage busy providers to recommend and offer vaccination, but also to reinforce a recommendation and offer with other approaches, like Text4baby, that can maximize opportunities to provide preventive care to protect mothers and infants.
b Those who were employed for wages and the self-employed were grouped as working. Those who were out of work, homemakers, students, retired, or unable to work were grouped as not working.
c Conditions associated with increased risk for serious medical complication from influenza, including chronic asthma, a lung condition other than asthma, a heart condition, diabetes, a kidney condition, a liver condition, obesity, or a weakened immune system caused by a chronic illness or by medicines taken for a chronic illness.
d Excluded women who did not visit a provider since August 2012 (n=27) or women who did not know whether they received a provider recommendation or offer (n=55).
e Among women who received Text4baby influenza messages and influenza vaccination (n=318 | 2018-04-03T04:11:22.091Z | 2017-08-30T00:00:00.000 | {
"year": 2017,
"sha1": "a4972d3c1d0ab15b3d4237b865725ee01983ff88",
"oa_license": "CCBYNCND",
"oa_url": "http://www.ajpmonline.org/article/S0749379717303501/pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "a4972d3c1d0ab15b3d4237b865725ee01983ff88",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251629978 | pes2o/s2orc | v3-fos-license | CHCHD2 and CHCHD10: Future therapeutic targets in cognitive disorder and motor neuron disorder
CHCHD2 and CHCHD10 are homologous mitochondrial proteins that play key roles in the neurological, cardiovascular, and reproductive systems. They are also involved in mitochondrial metabolic processes. Although previous research has concentrated on their functions within mitochondria, their roles in apoptosis, synaptic plasticity, cell migration, and lipid metabolism remain to be fully defined. This review highlights the different roles played by CHCHD2 and/or CHCHD10 through binding to various target proteins (such as OPA1, OMA1, PINK1, and TDP-43) and their non-negligible effects in cognitive impairments and motor neuron diseases. It describes the protective effects and mechanisms of CHCHD2 and CHCHD10 in neurodegenerative diseases characterized by cognitive and motor deficits, such as frontotemporal dementia (FTD), Lewy body dementia (LBD), Parkinson's disease (PD), and amyotrophic lateral sclerosis (ALS). However, numerous specific mechanisms have yet to be elucidated, and additional research into these mechanisms is required.
Introduction
Coiled-coil-helix-coiled-coil-helix domain (CHCHD)-containing proteins are mitochondrial proteins encoded by nuclear genes (Liu et al., 2020a). The CHCH domain is characterized by a pair of cysteines separated by nine amino acids, known as the CX9C motif (Liu et al., 2020a). The CHCHD-containing protein family is highly conserved and serves various physiological functions (Zhou et al., 2020). Mutation or deletion of the cysteine residues in the CHCH domain leads to a loss of the mitochondrial targeting and positioning functions. Apart from their mitochondrial functions, members of the CHCHD family, especially CHCHD1, can also be found in the nucleus. Zhou et al. (2017) summarized nine CHCHD-containing protein members and their major functions, implying roles in neurological disorders.
CHCHD2 and CHCHD10 are imported into the mitochondrial intermembrane space and form a complex to regulate mitochondrial function (Martherus et al., 2010; Aras et al., 2013). CHCHD2 is involved in the regulation of oxidative phosphorylation as well as the inhibition of apoptosis (Modjtahedi et al., 2016). CHCHD10 is involved in regulating the activity of mitochondrial cytochrome c oxidase (COX) and mitochondrial respiration during hypoxia (Ajroud-Driss et al., 2015). Many studies have shown that CHCHD2 and CHCHD10 are of great significance in mitochondrial dynamics, morphology, and function (Funayama et al., 2015).
Increasing evidence has shown that CHCHD2 and CHCHD10 are associated with cognitive disorders and motor neuron diseases through their interactions with proteins such as OMA1, OPA1, TDP-43, PINK1, and p62. This review synthesizes evidence related to the functions of CHCHD2 and CHCHD10 and their molecular mechanisms in the progression of neurodegenerative diseases, providing implications for novel strategies in brain injury. Figure 1 provides a brief introduction to CHCHD2 and CHCHD10 functions in the nervous system.
Roles of CHCHD2 and CHCHD10 in physiological and pathological processes
CHCHD2 and CHCHD10 modulate mitochondrial dynamics, morphology, and the respiratory chain
OPA1, OMA1, and DRP1 are mitochondrial proteins involved in fusion and fission processes. Among these, OPA1 is necessary for mitochondrial fusion, and DRP1 is essential for mitochondrial fission (Mishra et al., 2010). In the absence of these dynamic processes, the accumulation of fragmented mitochondria may induce the clinical manifestation of mitochondria-related diseases (Zhao et al., 2021). A previous study found that mitochondrial fusion and fission are significantly reduced in CHCHD2 single-knockout cells as well as in CHCHD2 and CHCHD10 double-knockout cells (Ruan et al., 2022). It is important to highlight that these changes in mitochondrial fusion and fission were not accompanied by changes in cell viability after knockdown of CHCHD2/10. In HeLa cells with CHCHD2 single knockout, L-OPA1 decreases significantly and S-OPA1 increases through the action of OMA1 (Guo et al., 2020). Interestingly, a single knockout of CHCHD10 has no effect on the amount of OPA1, although a double knockout of CHCHD2/10 worsens the clearance of L-OPA1 (Liu et al., 2020b). DRP1, which promotes mitochondrial fission, is significantly reduced in CHCHD2 single-knockout and CHCHD2/10 double-knockout mice (Zhou et al., 2020). Thus, these findings suggest that the loss of CHCHD2/CHCHD10 may reduce mitochondrial dynamics. Other mitochondrial dynamics-related proteins, including MFN1, MFN2, and MFF, exhibit relatively similar levels of expression in control and CHCHD2/CHCHD10 knockdown cells (Zhou et al., 2020).
Cristae are folds of the inner mitochondrial membrane and can be thought of as the basic bioenergetic units within the mitochondria (Nakamura et al., 2020). The mitochondrial contact site and cristae organizing system (MICOS), a multi-subunit complex in the inner mitochondrial membrane that is concentrated at the cristae junctions, regulates the maintenance of the cristae structure (Kondadi et al., 2020). CHCHD10 single-knockdown cell lines had a normal cristae structure, while CHCHD2 single-knockdown cells, as well as CHCHD2 and CHCHD10 double-knockdown cells, exhibited cristae abnormalities (Liu et al., 2020a,c).
The mitochondrial oxidative phosphorylation (OXPHOS) process generates cellular energy in the form of ATP in eukaryotes (Garcia-Bartolome et al., 2020). A series of enzymatic complexes, including cytochrome c oxidase (COX, complex IV), are involved in this process. Guilt-by-association (GBA) analysis indicates that CHCHD10 is likely involved in the regulation of COX activity, which has been verified by in vitro experiments (Liu et al., 2020a). CHCHD2, on the other hand, has been found to interact directly with COX to modulate COX activity, which influences mitochondrial membrane potential, ROS production, and the cellular redox state (Zhou et al., 2020).
CHCHD2 and CHCHD10 inhibit apoptosis
Apoptosis is essential for homeostasis (Chen, 2022). During apoptosis, the pro-apoptotic protein Bax is activated and accumulates on the outer membrane of the mitochondria, increasing mitochondrial membrane permeabilization through an unknown mechanism. Liu et al. (2015) discovered that CHCHD2 can inhibit mitochondrial apoptosis. CHCHD2 deletion can increase nuclear fragmentation and phosphatidylserine exposure, both of which are molecular markers of apoptosis. Moreover, overexpression of CHCHD2 can reduce PARP cleavage. CHCHD2 regulates Bax localization, activation, and oligomerization through its interaction with Bcl-xL. Upon an apoptotic stimulus, the level of CHCHD2 in the mitochondria decreases. A reduction in mitochondrial CHCHD2 levels is associated with a loss of the ability of Bcl-xL to inhibit Bax, allowing mitochondrial outer membrane permeabilization (MOMP) and apoptosis to proceed. Overall, these findings imply that CHCHD2 acts as a negative regulator of apoptosis.
Figure 1. CHCHD2 and CHCHD10 functions in cognitive disorder and motor neuron disease. CHCHD2 and CHCHD10 play a role in mitochondria through mitochondrial fusion, fission, and gene synthesis. CHCHD2 and CHCHD10 in mitochondria can enter the cytoplasm of neurons in combination with Bax, resulting in apoptosis. Moreover, CHCHD2 and CHCHD10 also appear at synapses, binding to TDP-43 and thereby inhibiting TDP-43 neurotoxicity. Dysregulation of these processes contributes to cognitive disorder and motor neuron disease.
Many aspects of the roles of CHCHD2 and CHCHD10 in apoptosis remain unknown. It is unclear how the interaction of CHCHD2 with Bcl-xL affects Bax activation, and the signals that induce mitochondrial CHCHD2 depletion remain unspecified. The CHCH structural domain of CHCHD2 may act as a sensor for the redox changes that occur during mitochondrial apoptosis, resulting in appropriate CHCHD2 conformational changes that regulate Bax oligomerization and MOMP. In addition, CHCHD2 may perform a nuclear role in regulating apoptosis at the mitochondrial membrane. These hypotheses require additional investigation.
CHCHD2 and CHCHD10 modulate synaptic plasticity
Synaptic plasticity is the capacity of neurons to modify their synapses in response to activity over time (Noriega-Prieto et al., 2022). Because memories are thought to be encoded in widely linked neuronal circuits, synaptic plasticity is one of the crucial neural mechanisms forming the basis of learning and memory (Grafe et al., 2022).
A study by Liu et al. (2022), using purified recombinant proteins, demonstrated that CHCHD10 S59L increased the size of TDP-43 aggregates, whereas wild-type CHCHD10 (CHCHD10 WT) prevented TDP-43 aggregation. These changes paralleled the anomalies in motor unit function, sciatic nerve action potential velocity, grip strength, and rotarod performance seen in CHCHD10 R15L and CHCHD10 S59L animals with functional deficits in long-term synaptic plasticity. They found that restoring CHCHD10 WT in TDP-43 transgenic mice (TAR4; D10WT) decreased TDP-43 pathology and recovered TDP-43-induced deficits in long-term synaptic plasticity in vivo. Overall, these results indicate that CHCHD10-mediated modulation of TDP-43 aggregation in mitochondria is a substantial contributor to impairments in long-term synaptic plasticity and motor unit function.
In addition, the roles of CHCHD2 and CHCHD10 in synaptic plasticity remain unclear; while their interplay with TDP-43 is recognized, their relationships with additional target proteins that can influence synapses require further investigation.
CHCHD2 and CHCHD10 promote cellular migration
Cell migration is an evolutionarily conserved process that is essential for embryonic development, wound healing, immune responses, angiogenesis, and cancer metastasis (Kurosaka and Kashina, 2008). Seo et al. (2010) identified CHCHD2 as a gene that promotes cell migration: when CHCHD2 is overexpressed, cell migration increases, and when CHCHD2 is knocked down, cell migration is reduced. They concluded that CHCHD2-induced cell migration was associated with increased formation of actin stress fibers and adhesive patches, and that the CHCHD2 protein directly interacts with hyaluronan-binding protein 1, which has a migration-inhibiting function, to balance cell migration (Seo et al., 2010). In addition, Wei et al. (2015) showed that CHCHD2 gene copy number and protein levels are positively correlated (co-amplified) with the epidermal growth factor receptor in non-small cell lung cancer, and CHCHD2 is also thought to be an effector of cell proliferation, migration, and respiration, interacting with mitochondrial and extra-mitochondrial proteins in non-small cell lung cancer cell lines.
CHCHD2 and CHCHD10 adjust lipid metabolism
In yeast, Mia40-regulated import controls the homeostasis of mitochondrial phospholipids via Mia40/Erv1-mediated import of the substrate Mdm35, which carries a twin CX9C-type motif (Modjtahedi et al., 2016). Within the IMS, Mdm35 maintains stable interactions with members of the evolutionarily conserved UPS/PRELI-like protein family, UPS1 and UPS2 (Watanabe et al., 2015). These proteins mediate the transfer of phospholipids between the outer and inner mitochondrial membranes. For example, UPS1 regulates the movement of phosphatidic acid (PA) from the outer to the inner membrane (Modjtahedi et al., 2016). PA is then processed by a series of enzymes, eventually leading to the production of cardiolipin (CL).
Cognitive disorder
Frontotemporal dementia
Recent research has led to the discovery of a CHCHD10 genetic variant in patients with late-onset frontotemporal dementia (FTD) (Liu et al., 2020a). These individuals had mitochondrial myopathy and COX-negative fibers, both of which were related to numerous deletions in their mitochondrial DNA (mtDNA). In addition, aberrant crista structure was seen in the patients' fibroblasts, as well as decreased respiratory activity and various deficiencies of respiratory chain components (Liu et al., 2020a). Therefore, it was concluded that CHCHD10 mutations can cause FTD. The related protein-protein mechanisms are described below.
OMA-1
OMA1, an ATP-independent zinc ion metalloprotease expressed by the OMA1 gene, is also a redox-dependent protein with multiple transmembrane structural domains and zinc finger binding motifs located in the inner mitochondrial membrane (Hu et al., 2021). Mammalian OPA1 plays a role in a variety of cellular activities, including the construction of mitochondrial cristae, apoptosis inhibition, maintenance of mtDNA integrity, and oxidative phosphorylation, all of which interact with mitochondrial dynamics (Kaymak and Ryder, 2013).
The OMA1-dependent processing of OPA1 is affected by the loss of quality control proteases, such as YME1L, AFG3L2, and SPG7, as well as of endosomal scaffolding proteins including the SPFH family and SLP-2 (Liu et al., 2020c). OMA1 provides an escape route for cristae lacking protein quality control, separating defective mitochondrial units from the network for autophagy-mediated destruction (Liu et al., 2020a). It has been shown in cell culture that CHCHD2/CHCHD10 double knockout may lead to cristae abnormalities due to increased stress-induced processing of L-OPA1 by the protease OMA1 (Liu et al., 2020b; Figure 2). Liu et al. (2020c) propose that mutant CHCHD2 or CHCHD10 may cause aberrant OMA1 activation via two possible mechanisms. One is that mutant CHCHD2 or CHCHD10 may activate OMA1 at a lower concentration than the wild-type (WT) proteins; if the concentration of CHCHD2/10 in cells reaches this threshold, OMA1 may be activated at physiological levels of CHCHD2 and CHCHD10 in the absence of aggregation. This mechanism is associated with the CHCHD10 G58R mutation, which causes myopathy and activates OMA1. The other is that CHCHD2 or CHCHD10 mutations may make the proteins prone to aggregation, resulting in an increase in total CHCHD2/CHCHD10 and a decrease in soluble CHCHD2/CHCHD10 (Liu et al., 2020c).
PINK1
Parkin is recruited by PINK1 to damaged mitochondria in order to induce mitophagy. The PINK1/parkin-mediated mitophagy pathway has received a great deal of attention (Quinn et al., 2020). Once PINK1 is imported into the mitochondrial matrix, it is cleaved by the presenilin-associated rhomboid-like (PARL) protein, rendering it typically undetectable. After being released into the cytoplasm, PINK1 fragments are degraded by the ubiquitin-proteasome system through the N-end rule pathway. By directly interacting with parkin, cytosolic PINK1 fragments inhibit parkin translocation to mitochondria. PINK1 is stabilized in the OMM of injured, depolarized mitochondria, where it phosphorylates ubiquitin and parkin (at Ser65) to activate parkin's E3 ligase activity and recruit parkin from the cytosol to the mitochondria. CHCHD10 S59L does not retain WT-like activity, indicating a dominant-negative mechanism. CHCHD10 S59L expression caused stabilization of mitochondrial PINK1, and genetic/pharmacological suppression of PINK1 alleviated CHCHD10 S59L-induced cell toxicity (Baek et al., 2021). Prior research has found that decreasing PINK1- or parkin-mediated pathways is beneficial in in vivo disease models of SOD1, FUS, and TARDBP mutations (Baek et al., 2021).
Lewy body dementia
Clinically, Lewy body dementia (LBD) is the second most common neurodegenerative dementia after Alzheimer's disease (AD), with clinical manifestations of fluctuating cognitive impairment, Parkinson's syndrome, and psychiatric symptoms highlighted by visual hallucinations (Berman and Miller-Patterson, 2019). Ogaki et al. (2015) collected 610 patients with pathologically confirmed DLB who underwent whole-exome sequencing of the CHCHD2 gene, and nine rare but not clinically significant exonic variants were detected: p.P2L, p.G4R, p.P14S, p.A16A, p.V31V, p.P34L, p.A37V, p.A49V, and p.A93V, eight of which were located within the mitochondrial targeting sequence (MTS) of the CHCHD2 gene. Although the role of CHCHD2 gene mutations in DLB remains to be clarified, the possibility that rare variants in the mitochondrial targeting sequence are risk factors for LBD cannot be ruled out (Kee et al., 2022).
Motor neuron disorder
Amyotrophic lateral sclerosis
CHCHD10 mutations associated with FTD/amyotrophic lateral sclerosis (ALS) have been shown to impair the ability of CHCHD10 to bind both OPA1 and mitofilin, and these mutants reduced the molecular weight of endogenous CHCHD10, mitofilin, and OPA1 (Zhou et al., 2017). These mutations are also associated with altered mitochondrial fusion and respiration (Xiao et al., 2020). In the brains of human FTLD-TDP patients and TDP-43 transgenic mice, TDP-43 causes a reduction in CHCHD10 while also disrupting the natural complexes of CHCHD10, OPA1, and mitofilin (Penttila et al., 2017). TDP-43 is responsible for the mitochondrial defects linked with CHCHD10 mutations, although wild-type CHCHD10 is able to rescue them (Liu et al., 2020a). Detailed mechanisms are described below.
OPA-1
OPA1 is heavily regulated by mitochondrial bioenergetics and proteostatic stress to dynamically shape the inner membrane in response to changes in the mitochondrial network (Khin et al., 2021). When mitochondrial stressors activate the peptidase OMA1, it cleaves the active long form of OPA1 (L-OPA1) from its membrane anchor, resulting in mitochondrial fragmentation and alterations in cristae structure (Morio et al., 2021). Liu et al. (2020b) found that YME1L expression in flies reduces Opa1 levels. P32, a binding partner of CHCHD2, also binds to YME1L and increases YME1L activity, so CHCHD2 competes with YME1L for interaction with P32 and thereby decreases YME1L activity (Liu et al., 2020b). It is surprising that fly YME1L could reduce Opa1 levels, because the mammalian homolog of YME1L is required for OPA1 fusion activity (Liu et al., 2020a). The L-OPA1 form is necessary for mitochondrial inner membrane fusion, whereas the S-OPA1 form promotes inner membrane fission, and L-OPA1 and S-OPA1 together balance the mitochondrial fusion-fission process. It has been previously proposed that CHCHD2 regulates Opa1 levels by competing with YME1L for P32 binding, thereby regulating mitochondrial fusion: because YME1L activity increases upon binding to P32, deletion of CHCHD2 increases YME1L binding to P32, increases YME1L activity, and decreases L-Opa1 levels, resulting in mitochondrial fission (Liu et al., 2020b). As Figure 3 depicts, decreased binding of mutant CHCHD2 to P32 would lead to a corresponding increase in YME1L activity, which could explain some of the deficiencies associated with the CHCHD2 mutant (Liu et al., 2020b). In that study, however, disease-associated variants of CHCHD2 were found to retain their ability to bind P32, so more studies are needed to determine how these mutant variants affect CHCHD2 function.
TDP-43
TAR DNA-binding protein 43 (TDP-43) is a heterogeneous nuclear ribonucleoprotein (hnRNP) that regulates RNA splicing, stability, and transport (Candia et al., 2022). TDP-43 is frequently detected in the cytoplasm of abnormal neurons, where it is ubiquitinated and fragmented, making it prone to aggregation (Smirnov et al., 2022). Although TDP-43 mutations are detected in only a small fraction of ALS and FTLD-TDP cases, TDP-43 pathology is linked to the overwhelming majority of ALS and FTLD cases (Kee et al., 2021). Increasing data suggest that TDP-43 is particularly neurotoxic, in large part because it induces mitochondrial malfunction (Smirnov et al., 2022). TDP-43 colocalizes with mitochondria, promoting mitochondrial fission and leading to abnormal mitochondrial transport.
The malfunction of CHCHD10 results in cytoplasmic TDP-43 accumulation (Woo et al., 2017). CHCHD10 forms physical complexes with TDP-43, which require the intact N-terminal 16 residues, and TDP-43 boosts the nuclear localization of CHCHD10 through retrograde signaling, which is linked with an increase in nuclear-encoded, mitochondria-targeted transcripts (Zhou et al., 2017). In contrast to WT CHCHD10, the FTD/ALS-associated CHCHD10 mutants (R15L and S59L) cause TDP-43 cytoplasmic mislocalization into irregularly shaped inclusions that often co-localize with mitochondria (Woo et al., 2017).
Parkinson's disease
Parkinson's disease (PD) is characterized by degeneration of dopaminergic neurons in the substantia nigra pars compacta and the formation of Lewy bodies, and it usually manifests clinically as resting tremor, rigidity, and bradykinesia (Corti et al., 2011). CHCHD2 mutations are associated with both late-onset autosomal dominant PD and sporadic PD (Zhou et al., 2017). To date, three CHCHD2 genetic mutations have been identified in autosomal dominant PD patients: c.182C > T (p.Thr61Ile), c.434G > A (p.Arg145Gln), and c.300 + 5G > A. Funayama et al. (2015) identified the c.182C > T (p.Thr61Ile) heterozygous mutation in the CHCHD2 gene by second-generation sequencing in a Japanese PD family, in which it was found in eight PD patients from two generations. Funayama et al. (2015) then expanded their sample to 340 patients with familial PD in Japan and identified two other mutations in CHCHD2 that may be associated with PD: c.434G > A (p.Arg145Gln) and c.300 + 5G > A. In addition, two single nucleotide polymorphisms in the CHCHD2 gene were found in 517 patients with sporadic PD, c.-9T > G (rs10043) and Pro2Leu (c.5C > T; rs142444896), which increased the risk of developing sporadic PD in the Japanese population by 2.51-fold and 4.96-fold, respectively. In a recent study of four familial PD patients from Western Europe, the p.Ala32Thr, p.Pro34Leu, and p.Ile80Val mutations in the CHCHD2 gene also appeared to be potentially associated with familial PD (Foo et al., 2015). Furthermore, Meng et al. (2017) discovered that deletion of CHCHD2 in Drosophila leads to abnormal crista structure and impaired mitochondrial oxygen respiration, resulting in PD-related pathological mechanisms such as oxidative stress, loss of dopaminergic neurons, and motor dysfunction. These findings support the hypothesis that mutations in CHCHD2 may cause mitochondrial dysfunction and participate in the pathological process of PD.
Figure 3. The interaction between CHCHD2 and P32 regulates the levels of OPA1 to modulate mitochondrial fission. (A) In the wild-type condition, P32 binds to CHCHD2 and YME1L. The interaction between P32 and YME1L enhances OPA1 degradation; CHCHD2 competes with YME1L for P32, and as a result, OPA1 levels are maintained. (B) In CHCHD2 mutants, YME1L has access to a greater quantity of P32, which facilitates the degradation of OPA1 and leads to mitochondrial fragmentation.
Table 1 shows the CHCHD2 and CHCHD10-related mechanisms in the diseases addressed above.
Conclusion
CHCHD2 and CHCHD10, as mitochondrial inner membrane proteins, regulate apoptosis, synaptic plasticity, cell migration, and lipid metabolism, in addition to their essential activities in mitochondria. Furthermore, the ability of these proteins to carry out their functions depends on their interactions with a wide range of other proteins, including OPA1, OMA1, Bcl-xL, Bax, PINK1, and TDP-43. Through these interactions, they can modulate neurological disorders such as LBD, FTD, PD, ALS, and other neurological diseases. In addition, research on the effects of CHCHD10 on the reproductive system has seen some progress, and further efforts in this field are warranted.
However, the majority of recent studies have remained at the level of genetic analysis. The mechanisms of these proteins in these diseases still require further investigation, and additional signaling pathways should be explored. Emerging bioinformatics methods could be used to examine the mechanisms of action of CHCHD2 and CHCHD10 in neurological illnesses, and the results of such analyses could then be corroborated with wet-lab experiments. If the therapeutic effects of CHCHD2 and CHCHD10 are further confirmed, additional and more in-depth experimental studies will be needed to determine how they can be applied in the clinic.
Author contributions
JX designed the review. XW revised the article and made language touch-ups. YW made changes to the figures and made grammatical corrections. TJ completed the writing of the review, the drawing of the figures, and the production of the table. All authors contributed to the article and approved the submitted version. | 2022-08-18T13:40:49.038Z | 2022-08-18T00:00:00.000 | {
"year": 2022,
"sha1": "3ea5a545821a63b56a837aea1cf608ce815a907a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "3ea5a545821a63b56a837aea1cf608ce815a907a",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
240095690 | pes2o/s2orc | v3-fos-license | Determining factors for the creation of innovation-based ventures
This research aims to determine the impact of internal and external factors of Social Capital, Entrepreneurial Self-Efficacy, and Person-Entrepreneurship Fit in the transition of nascent entrepreneurs toward venture creation. For the development of this research, a sample of 500 entrepreneurs located in Colombia who are creating a company was taken. The research methodology is hypothetical-deductive with a cross-sectional multiple causal correlation design with an explanatory scope and is divided into three stages: first, an exploratory analysis of the data relating to the study variables is carried out. Second, a principal component analysis is carried out. Finally, the third stage is the modelling using the Partial Least Squares-Path Modelling methodology. Among the most relevant findings, it is found that Social Capital is significant in explaining Person-Entrepreneurship Fit. In turn, Social Capital and Person-Entrepreneurship Fit are significant in explaining Entrepreneurial Self-Efficacy. Finally, the construct of Venture Creation is only significantly explained by Person-Entrepreneurship Fit. Consequently, it can be concluded that the influence of social and psychological variables within the business cycle is significant, and models and strategies must consider these elements to design tools that support the optimal development of start-ups within the business cycle, so that they consolidate as stable ventures. This study contributes to filling the research gap by focusing on the factors determining the entrepreneurial process beyond its initial phase.
perspective, making it possible to understand the individual and their environment in the transition toward the gestation of new businesses. However, contemporary definitions of entrepreneurship or approaches to entrepreneurship research focus on its emergence, i.e., the stage of conception. While this has been crucial to analyzing the phenomenon, given the wealth of information and abundance of material about this stage, Davidsson and Honig (2003) suggest that studies should address aspects in earlier stages, such as how opportunities are identified and acted upon, or how new organizations emerge. It has also been identified that business cycle research has primarily focused on businesses that can survive and grow, i.e., studies focus on the stages of infancy and adolescence. However, if the process is analyzed as a single system, it is clear that most entrepreneurial efforts do not lead to the founding of new organizations (Aldrich & Martinez, 2007).
However, there is very little information on the number and characteristics of nascent entrepreneurs who attempt to create a start-up and the likelihood that these attempts will lead to the gestation of new businesses. While it is possible to see the results of entrepreneurial activity in the form of venture creation and innovations, there is little information on how these new businesses came into existence. Studies of entrepreneurship demonstrate the successes of entrepreneurial activity, but there is little insight into why these particular entrepreneurial efforts succeeded while other efforts failed (Reynolds et al., 2004a, 2004b). One of the arguments put forward by the authors for more research in this context is the problem of identifying people who are starting businesses and are engaged in entrepreneurial activities to potentially create a company. Starting a new business is challenging due to multiple internal and external limiting factors (Weiss et al., 2019), or as Kannadhasan (2018) argues, new venture creation is the result of the interaction between the entrepreneurs' external environments and internal factors. Understanding the factors in the gestation phase that enable entrepreneurs to move toward the founding of their companies makes it possible to strengthen the state of the art in this phase of the business cycle.
Identifying and analyzing the factors that can improve entrepreneurial performance has become a key topic in the management and entrepreneurship literature. Understanding how and why only some business ventures are successful enough to become valid drivers of wealth creation and economic and social development is even more critical in developing countries (Vila et al., 2013). Accordingly, there is great merit in discovering how to help nascent entrepreneurs avoid being "stillborn" (Reynolds et al., 2004a, 2004b). To identify critical aspects that facilitate the transition toward the development of start-ups, Drnovšek et al. (2010), studying the writings of Gatewood, argue that "self-efficacy influences the development of nascent entrepreneurs' attributions for creating new businesses." Based on social cognitive theory, Bandura (1982) developed the theory of self-efficacy to explain the variability of individuals in goal attainment. Individuals with different levels of self-efficacy beliefs are expected to differ systematically in the amount of effort they expend on goal-directed tasks, the extent to which they engage in coping activities to overcome impediments, and the degree to which they persistently pursue goals despite obstacles.
The development of self-efficacy theory in psychology and its contribution to entrepreneurship led to the development of the concept of entrepreneurial self-efficacy (ESE). Drnovšek et al. (2010) state that "ESE involves individuals' beliefs regarding their capabilities for attaining success and controlling cognitions for successfully tackling challenging goals during the business start-up process. " Along the same lines, McGee et al. (2009) define entrepreneurial self-efficacy as "a construct that measures a person's belief in their ability to launch an entrepreneurial venture successfully. " In turn, Hsu et al. (2019) argue that, despite having a high level of entrepreneurial self-efficacy, if entrepreneurship does not meet their personal needs, there will be no intention to start a business. The authors study the boundary discussed above through the person-environment fit (P-E Fit) theory; consequently, these authors develop a new construct called person-entrepreneurship fit (P-ENT Fit). This finding moderates the relationship between entrepreneurial self-efficacy and entrepreneurial intention and should be considered when understanding the factors that influence an individual to transition toward developing a start-up.
Analyzing the phenomenon and the contributions of different authors, the research's objective is to describe the association of determining factors that facilitate the transition toward the gestation of start-up companies in innovation-based entrepreneurship programs in Colombia. Consequently, the research question is defined as follows: How do social capital, entrepreneurial self-efficacy, and person-entrepreneurship fit positively impact the transition of nascent entrepreneurs toward the creation of start-ups?
Literature review
Research in the field of venture creation is not a recent topic. These studies attempt to explain why start-ups fail or succeed. This section presents the elements, variables, and techniques used over time to respond to the question above.
The origins of this research can be traced back to the 1970s and 1980s. Cooper and Bruno (1977) designed a statistical model that takes as its input the calculated value of teamwork, cooperation, and the level of education and experience of team members, while the model's output corresponds to the performance level of the business. Their study took a sample of 250 firms, of which 58% are large organizations and the remaining 42% are small organizations (for this study, the authors define large firms as those with 500 or more employees and small firms as those with fewer). The model can identify the variables that have the most significant influence on the final objective, thus offering a way to study corporate behavior. Fama and Jensen (1983) construct a qualitative model from the agency perspective. Their study attempts to explain which factors are critical for creating a company and affect its survival. The results of this model identify the features of business survival from the viewpoint of a control agency.
Subsequently, Bird (1988) developed a model based on a discovery-oriented study, which attempts to explain the behavior of entrepreneurial ideas. Through 20 interviews with entrepreneurs (seven in service, eight in manufacturing, and five in marketing, with between 4 and 20 years of experience as entrepreneurs), she was able to identify distinct patterns of thinking and behavior. Subsequent analysis of transcripts and observer notes suggests that these patterns are relatively consistent across the entrepreneurs. Bird (1989) develops a second model to complement this study. This second model analyses the interaction between the personal and social context with rational and intuitive thinking while intending to create businesses or new values for existing businesses. According to Bird, the social context comprises the individual's social, political, and economic environment (Turner et al., 1994;Webster & Ward, 2011), while the personal context is constructed through their personal history, personality development, and abilities (Bandura, 2002a;Noonan, 2019). In her conclusion, she argues that an entrepreneur's new intentions will be steeped in uncertainty and result from rational, analytical, and cause-and-effect thought processes or intuitive, holistic, and contextual thinking; these new intentions ultimately become entrepreneurial actions (Lubada et al., 2021;Urban, 2020).
For their part, Kamm et al. (1990), based on the conceptual foundation of the model described by Cooper and Bruno (1977) and studies carried out by Bird (1988), designed a qualitative model of logical relationships that takes as its input responses to a bank of questions that are closely related to teamwork, development, behavior, and entrepreneurial action. In this study, the model results identify the dimensions that have a significant influence on start-ups.
Along the same lines, Mosakowski (1998) builds on the earlier paper and the agency approach of Fama and Jensen (1983), developing an agency model based on the locus of entrepreneurial resources, studying the interaction between entrepreneurial resources, organizational decisions, and business outcomes. In the model results, Mosakowski concludes that the way entrepreneurial resources (e.g., work teams) are established and allocated creates greater uncertainty about the organization's performance than individual action. The author proposes controlling this uncertainty through a risk reduction monitoring system for individual and collective action.
In turn, Shook et al. (2003a) combine the models proposed by Shapero (1982), Bird (1988), and Ajzen (1987) to understand entrepreneurial intention. The paper brings together the most important variables in each model, such as individual perceptions of feasibility and desirability, and adds a third variable called social support; these variables correspond to the determinants of entrepreneurial intentions. The authors relate entrepreneurial self-efficacy to social support and understand entrepreneurial self-efficacy as a moderator of the relationship between individual perceptions and the development of entrepreneurial intention (Gielnik et al., 2020). Consequently, these authors develop a measurement tool based on opportunity creation, risk-taking, and belief in one's capabilities. Along the same lines, De Carolis (2006) develops a qualitative model based on papers by Bandura (2002b) and Bird (1988, 1989). The model involves the link between an individual's external factors (i.e., social capital) and internal factors (i.e., cognitive aspects) and how this link influences entrepreneurial behaviors. In the conclusion, De Carolis indicates that entrepreneurial behavior is the result of the interaction of environments (i.e., social networks) and certain cognitive biases in entrepreneurs, and proposes that both individual cognition and social capital are important for understanding entrepreneurial behavior. She further suggests a nexus between the presence of lucrative opportunities and the presence of entrepreneurial individuals, which influences the link between the variables proposed in the article.
From another perspective, some authors argue that venture creation is influenced by the dimension of social relations and the acquisition and development of abilities during university. Accordingly, Fueglistaller (2006) develops a quantitative model that evaluates the intention to create businesses based on the personal context and the university context. The personal context comprises variables such as age, gender, personal goals, and level of innovation, while the university context includes level of study, skills, and abilities. The model explains entrepreneurial intention by analyzing the interaction of those factors and indicates, more generally, that intention-based models contend that the development of business ideas must precede venture creation. Therefore, understanding actions, attitudes, and behaviours can better predict venture creation intention.
Building on the previous paper, Obschonka et al. (2011) carry out a study that asks the following question: "What predicts a person's venture creation success throughout their career, such as making progress in the venture creation process and multiple successful venture creations?" To respond to this question, Obschonka designed a regression analysis model based on prospective and retrospective data from two independent samples of 88 nascent founders and 148 successful founders (the database was collected retrospectively using the Life History Calendar method). The study's conclusion suggests that early entrepreneurial skills in adolescence positively affect the development of the venture creation process (Anwar & Abdullah, 2021). The current human and social capital of nascent founders also has a direct effect but is not a mediator of the effect of early skills. Finally, the data revealed that early entrepreneurial skills in adolescence positively predict habitual entrepreneurship (creation of multiple successful businesses) exhibited over a longer period in the individual's career (specifically, 18 years).
Subsequently, considering the plethora of tools that are used to explain the venture creation process, Schlaegel (2014) identifies the high interest in developing theories that are capable of predicting and explaining an individual's propensity to start a business, as well as explaining the reason for its success or failure. Schlaegel develops a meta-analytic structural equation model to examine the empirical fit of the theory of planned behavior and the entrepreneurial event model. To apply the model proposed in this study, a sample of 123 investigations is used, considering the operational construct, type of publication, publication status, variables used, and model implemented. The paper's conclusion indicates that the theory of planned behavior and the entrepreneurial event model have a moderating role in the environmental conditions of the individual during the venture creation process. Furthermore, the author asserts that the integrated model provides additional explanatory power and a complete understanding of the process through which entrepreneurial intention develops.
Tracing the trajectory of research on venture creation, Venessar et al. (2014) observe that the vast majority of studies point to the contribution of entrepreneurs collectively but have not examined individual actions. According to the authors, the individual differences of entrepreneurs (e.g., attitudes, behaviours, capabilities, traits, abilities, expertise, cognitive differences, leadership, ethical and moral values) influence the development of entrepreneurial intentions, the pursuit of new opportunities, risk-taking, and decision-making. These authors design a multivariate statistical analysis that attempts to identify the study variables that influence venture creation. The population of the paper was 52,899 students, and they concluded that age, gender, level of studies, field of study, field of action, type of financing, and parents' occupation are the factors that predict entrepreneurial intention. To extend the idea of this study and make a significant contribution, Aragon et al. (2016) develop an experiment with 120,536 individuals from 25 countries. Their study seeks to provide new evidence on how men and women process information in the venture creation decision and on the differences that may arise when making this decision, depending on the level of development of the countries and their institutions. The results indicate that institutions are necessary but not sufficient to achieve quality entrepreneurship and that information processing is different between men and women, as women, regardless of the level of institutional development, process information in a similar way, while men do not.
In turn, FrankhrEldin (2017) develops a logistic regression model to address the problem posed by Bird et al. (2012), which is an evolution of papers previously presented by Bird (1988, 1989). In this study, FrankhrEldin takes emotional intelligence as the main moderator of venture creation, understood as the set of abilities and skills that enable the individual to influence their emotions by adapting to changes in the environment and thus make the correct decisions. This paper concludes that the emotional intelligence of entrepreneurs has a strong effect on new venture creation and explains why some individuals have a greater potential to create new businesses than others. The results indicate that necessity-driven entrepreneurs have a higher internal motivation to create a business, while opportunity-driven individuals have a lower motivation. She also adds that the individual must have the ability to relate and communicate, thus increasing the venture's success. Furthermore, considering the ideas presented in the studies by Obschonka et al. (2011) and Venessar et al. (2014) on age and entrepreneurial education, and the paper on risk-taking by Shook et al. (2003b), Basinska and Daderman (2018) develop an experiment to examine, in young adults undergoing entrepreneurship-related education, the relationships between risk-taking and self-efficacy and the mediating effects of emotions and attitudes (resilience, self-confidence, attentiveness). The method used by the authors was applied to 153 individuals (mean age 22 years), collecting information on risk-taking, general self-efficacy, attitudes, and emotions through questionnaires. The study used a bootstrap analysis with single and multiple mediators, controlled for gender, to estimate the indirect effects of attitudes and emotions on risk-taking and self-efficacy. The authors found that risk-taking was significantly correlated with self-efficacy. The attitudes of resilience and self-confidence, but not attentiveness, were complete mediators of the relationship between risk-taking and self-efficacy. Based on these findings, the authors argue that self-efficacy may be strengthened in young adults during entrepreneurship-related education and that emotions can lead to an attentive use of resources, including self-efficacy (Newman et al., 2019; Schmitt et al., 2018). In conclusion, these factors may encourage young people to be brave and try new experiences. Finally, from a new point of view, the psychoanalytic perspective, Metallo et al. (2020) developed a research study to examine the entrepreneurial process for the generation of new enterprises through a psychoanalytical approach. Based on the existing psychoanalytic literature, the manuscript proposes a model to explain the entrepreneurial process that results in people developing ideas and, consequently, moving toward new venture creation. The structure of the model consists of a set of interconnected processes that describe entrepreneurial behavior through the analysis of three stages: dream, business idea, and creation of the new company. The innovation of this study lies in its emphasis on the unconscious mechanisms that encourage new business ventures. The proposed model gives a complete overview of the behavior of entrepreneurs and offers new possibilities for understanding the evolution of the entrepreneurial process.
This section has presented the essential bibliographic references about venture creation from the 1970s to the present day. It has shown how authors have related social variables to venture creation in models, studies, and research. Nevertheless, there is no agreement regarding which variables should be used, which are most appropriate for the topic, and which complement entrepreneurial studies. Reaching a consensus would make it possible to propose a structure for the study of the topic that could be used in different investigations while respecting the proportions of the contexts.
Methodological design
To understand the interaction of the variables and their operationalization in the present research, the following sections present the operationalization of constructs and hypotheses, the design of data collection, and the operationalization of the study.
Operationalization of constructs and hypotheses
The key theories underpinning the study are summarized as follows:
Venture creation (VC)
According to the description of the business cycle by Aldrich and Martinez (2007), it can be deduced that venture creation is a stage of entrepreneurship that occurs in the transition from nascent entrepreneur (gestation phase) to fledgling entrepreneur (infancy phase). To identify which stage of the venture creation process a business is in, the researchers explore whether the entrepreneur is in one of three stages: planning to start a business, engaging in entrepreneurial activities, or newly established (De Carolis et al., 2009). Accordingly, the "Venture creation" variable has two levels: (a) Entrepreneurs with the intention to start a business (engaging in activities or planning to start, and newly established) and (b) Entrepreneurs who have abandoned the process or failed.
In this study, venture creation acts as a dependent variable, directly associated with the constructs of social capital, entrepreneurial self-efficacy, and person-entrepreneurship fit.
Entrepreneurial self-efficacy (ESE)
The development of self-efficacy theory in psychology and its contribution to the field of entrepreneurship (Bandura, 1982) led to the development of the concept of "Entrepreneurial Self-Efficacy". The study of Drnovšek et al. (2010) asserts that "entrepreneurial self-efficacy involves individuals' beliefs regarding their capabilities to attain goals and control positive and negative cognitions that an entrepreneur has during the process of starting up a business". Along the same lines, McGee et al. (2009) define entrepreneurial self-efficacy as "a construct that measures a person's belief in their ability to successfully launch an entrepreneurial venture". Based on the literature review presented in Chapter 2, the operational characterization of entrepreneurial self-efficacy is presented based on five dimensions suggested by McGee et al. (2009):
(1) Searching dimension (Search): the entrepreneurs' confidence in their abilities to develop new ideas and identify opportunities and/or needs; this dimension is related to the entrepreneur's ability to create and innovate.
(2) Planning dimension (Plan): the entrepreneurs' confidence in their ability to roadmap and conceptualize the business model in terms of market quantification, pricing, investment projection, and marketing strategies.
(3) Marshalling dimension (Marsh): the entrepreneurs' confidence in their ability to convince other people to identify with their business ideas and share their vision or contribute to the needs of their venture. This ability allows entrepreneurs to gather the necessary resources (financing, clients, suppliers, work team, among others) to start their businesses.
(4) Implementing-people dimension (People): the entrepreneurs' confidence in their ability to attract, direct, and guide their business's human resources to scale and move through each phase in the business cycle. This dimension is associated with the entrepreneurs' business management skills.
(5) Implementing-financial dimension: the entrepreneurs' confidence in their ability to lead the business in accounting and financial terms, enabling them to manage and raise sufficient funds to continue to grow the company. This dimension is associated with the entrepreneurs' business management skills.
Therefore, by associating entrepreneurial self-efficacy with new venture creation, after conceptualizing its dimensions, the following hypotheses are established:
H1: In nascent entrepreneurs, the searching dimension of entrepreneurial self-efficacy is positively associated with new venture intention or creation.
H2: In nascent entrepreneurs, the planning dimension of entrepreneurial self-efficacy is positively associated with new venture intention or creation.
H3: In nascent entrepreneurs, the marshalling dimension of entrepreneurial self-efficacy is positively associated with new venture intention or creation.
H4: In nascent entrepreneurs, the implementing-people dimension of entrepreneurial self-efficacy is positively associated with new venture intention or creation.
H5: In nascent entrepreneurs, the implementing-financial dimension of entrepreneurial self-efficacy is positively associated with new venture intention or creation.
Social capital (SC)
Social capital is defined by Nahapiet and Ghoshal (1998) as the "sum of the actual and potential resources embedded within, available through, and derived from the network of relationships possessed by an individual or social unit" (p. 243). This construct has been associated with venture creation and, in this regard, the study of Kannadhasan et al. (2018), based on Leana and Van Buren (1999) and Adler and Kwon (2002), states that social capital not only facilitates access to information but also improves its timing, relevance, and quality, which is vital to exploit an opportunity and start a new venture. The construct of social capital is described in three dimensions (Nahapiet & Ghoshal, 1998) and is adopted to analyze entrepreneurs and their venture creation process: (1) Structural social capital: the overall network of entrepreneurs and the nature of the connections between the members of that network. (2) Relational social capital: the capacity for links and the development of personal relationships that entrepreneurs can have with the members of the network of contacts they have created through their interactions.
(3) Cognitive social capital: the resources that encourage collective responsibility and action among the members of the entrepreneur's network. Based on this theoretical framework, as well as the finding of Kannadhasan et al. (2018) that relational capital is not significant in venture creation intention, the following hypotheses are postulated:
H6: In nascent entrepreneurs, structural social capital is positively associated with new venture intention or creation.
H7: In nascent entrepreneurs, relational social capital is not positively associated with new venture intention or creation.
H8: In nascent entrepreneurs, cognitive social capital is positively associated with new venture intention or creation.
Person-entrepreneurship fit (PEF)
Person-entrepreneurship fit is understood as the satisfaction of personal needs through venture creation. This construct moderates the relationship between entrepreneurial self-efficacy and the venture creation process. It involves a strong perception of fit with entrepreneurship, which impacts the decision about whether or not to start a business (Hsu et al., 2019). Based on the above description, the following hypotheses are proposed:
H9: In nascent entrepreneurs, PEF is positively associated with new venture intention or creation.
H10: In nascent entrepreneurs, PEF positively moderates the relationship between the dimensions of entrepreneurial self-efficacy and new venture intention or creation.
According to the study by Kannadhasan et al. (2018), social capital was positively related to the mediator of self-efficacy. Given that the present study establishes different dimensions to measure entrepreneurial self-efficacy, the following hypothesis is posited:
H11: In nascent entrepreneurs, the dimensions of social capital positively moderate the relationship between the dimensions of entrepreneurial self-efficacy and new venture intention or creation.
Consequently, the hypothetical research model is presented (see Fig. 1).
Partial Least Squares-Path modelling (PLS-PM)
Partial Least Squares-Path modelling (PLS-PM) is a statistical data analysis methodology that combines the concepts of regression modelling, structural equation modelling, and cross-tabulation analysis methods. Three approaches to the PLS-PM concept are identified in the literature. First, PLS-PM is conceived as a partial least squares approach to structural equation modelling, and it is thus common in the literature to use the terms "Path Modelling" and "Structural Equation Modelling (SEM)" interchangeably. However, for Götz et al. (2010), the concept of PLS-PM is much broader, as, to create the variable blocks of a model, it is necessary to establish prior theoretical knowledge of the phenomenon under analysis; accordingly, each variable block is considered to be a theoretical concept represented in the form of a latent variable. One of the differences between structural models and PLS models is the covariance analysis approach present in structural models (Ondé & Alvarado, 2018); PLS-PM models have a broader range of applications due to the absence of a fit to a known statistical distribution.
One of the advantages of PLS-PM models is the possibility of graphically representing the relationships between the variables that comprise the model through a directed graph. Consequently, a PLS-PM can be conceived as a network of variables in which the arcs (arrows) are assumed to represent a cause-effect relationship. It is assumed that the information provided by the variables flows through the network. The main objective is thus to quantify the relationships between the variables in the network. Along the same lines, assuming that each variable can be represented as a combination of other variables, PLS-PM quantifies the relationships between variables by considering the network to be a system of multiple interconnected linear regressions.
Furthermore, two essential concepts must be understood to implement the methodology: latent and manifest variables. There are many social phenomena in which the variable of interest cannot be observed or measured directly. Such variables are latent variables, also referred to in the literature as constructs, composites, factors, or conceptual and intangible variables. Specific examples enable a better contextualization of the concept. For example, companies are often interested in identifying "customer satisfaction" in marketing, while education experts are usually interested in improving "academic success". Industrial engineers are often interested in measuring "perceived quality" in service management. In contrast, manifest variables are actual variables that can be measured and observed; they indirectly measure latent variables. In general, manifest variables are assumed to contain information reflecting one or more aspects of a latent variable. Manifest variables are divided into two categories: the reflective category considers that the latent variable causes the manifest variables, whereas the formative category considers that the manifest variables form the latent variable. The relations of the latent and manifest variables are shown in Table 1.
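To make the latent/manifest distinction concrete, a minimal sketch in R (the software mentioned later for the analysis) using the plspm package is shown below. The data frame survey_data, the indicator names, and the block composition are illustrative placeholders rather than the study's actual variables; the structure simply mirrors the four constructs of the theoretical model.

```r
library(plspm)

# Inner model: lower-triangular matrix of hypothesized paths between latent variables
SC  <- c(0, 0, 0, 0)
PEF <- c(1, 0, 0, 0)   # SC -> PEF
ESE <- c(1, 1, 0, 0)   # SC -> ESE, PEF -> ESE
VC  <- c(1, 1, 1, 0)   # SC -> VC, PEF -> VC, ESE -> VC
inner <- rbind(SC, PEF, ESE, VC)
colnames(inner) <- rownames(inner)

# Outer model: blocks of manifest variables (placeholder column names of survey_data)
blocks <- list(
  SC  = c("sc_1", "sc_2", "sc_3", "sc_4"),
  PEF = c("pef_1", "pef_2", "pef_3", "pef_4"),
  ESE = c("ese_1", "ese_2", "ese_3", "ese_4", "ese_5", "ese_6", "ese_7"),
  VC  = c("vc_1", "vc_2")
)
modes <- rep("A", 4)   # "A" = reflective indicators for every block

# Estimate the model; boot.val and br request the bootstrap validation discussed later
pls_model <- plspm(survey_data, inner, blocks, modes = modes,
                   scheme = "centroid", scaled = TRUE,
                   boot.val = TRUE, br = 200)
summary(pls_model)
```

Each row of the inner matrix encodes which constructs are allowed to explain that latent variable, so the network of interconnected regressions described above is fully specified by this one matrix plus the blocks of indicators.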
Designing questionnaire and sampling
The constructs raised in the research are measured through questionnaires derived from the literature review presented above, taking into account the methodology created by Churchill (1979). The questionnaire comprises three sections, with single-choice and multiple-choice questions.
In the first section, control variables related to age, gender, geographic location, level of education, previous experience in entrepreneurship, and support from incubation or acceleration programs are established. These items are important because they allow the results to be compared by group and isolated, and they make it possible to test in the multivariate analysis whether these items directly influence venture creation behavior. This section also verifies that the people answering the questionnaire are nascent entrepreneurs.
The second section focuses on the characterization of the entrepreneurial self-efficacy construct, addressed in five dimensions and 19 items on a conventional five-point Likert-type scale that measures a person's confidence to carry out different entrepreneurial activities; the scale is rated from very little to very much. The questionnaire was designed by McGee et al. (2009) in their studies to measure entrepreneurial self-efficacy among nascent entrepreneurs and was adjusted and validated for the present research.
Finally, the third section has two parts. The first characterizes the social capital construct, addressed through the three dimensions described by Nahapiet and Ghoshal (1998); the questionnaire for this construct was developed by Kannadhasan et al. (2018) and comprises 9 items on a seven-point Likert-type scale. The second characterizes the person-entrepreneurship fit (PEF) construct. The questionnaire for this construct was developed by Hsu et al. (2019) and designed with 3 items on a five-point Likert-type scale. Two new items relating to economic needs in person-entrepreneurship fit were added for the present questionnaire.
Questionnaire evaluation and refinement
This refinement process refers to the adaptation and validation of the measurement questionnaire used in the research. It is important to highlight that each questionnaire was previously validated by its authors in their own research and contexts, presenting an acceptable Cronbach's alpha for each of the dimensions of the constructs. In addition, each validated questionnaire was translated from English to Spanish, and the adaptation for use in Colombia proceeded in three stages.
In the first stage, the questionnaire was translated, and the complete questionnaire with the control variables was administered electronically to thirteen people through Google Forms. The selected group consisted of five nascent entrepreneurs, five early-stage entrepreneurs, two Ph.D. experts in quantitative research, and an entrepreneurship mentor. Seventy-seven per cent of the respondents answered from their cell phones and 23% from computers. Each participant was then interviewed to adjust the questionnaire's ergonomics using a more sophisticated tool that provides a clearer view of the answer options. The number of questions made the questionnaire lengthy to complete, and questions with numerous answer options tended to bias the responses. The observations from this test were the following: use the Qualtrics tool to replace Google Forms, adjust the design of the questionnaire so that it adapts to cell phones, reduce the selection options from seven to five, consider questions in the control variables, and adjust the wording of the questions.
In the second stage, the corrections from the first stage were applied and the questionnaire was sent digitally. Using the Qualtrics tool, the questionnaire was distributed among the same thirteen people who had provided feedback. The new questionnaire was answered by seven people who, when interviewed, stated that the user experience had improved considerably, generating a more pleasant process when answering the questionnaire. There were no further observations that would lead to new changes. The questionnaire was reviewed by the statistical advisor of the research to corroborate that the answer options were compatible with the computer tools used to process the data collected. Finally, in the third stage, a pilot test was designed. The adjusted questionnaire was sent to a database of 186 entrepreneurs provided by the innovation agency Punto Estratégico SAS. The database was composed of 89 nascent entrepreneurs and 97 budding entrepreneurs. It was distributed through Qualtrics and answered digitally by 31 entrepreneurs, corresponding to a 16.6% response rate, with an average response time of 9.5 min.
The set of tests and pilot tests validated the translation, comprehension, ergonomics, and randomness of the questionnaire, making it possible to have a questionnaire ready to be administered to the people under study in Colombia.
Operationalization of the study
This study follows the estimation procedure developed by Hair et al. (1999) for statistical analysis using structural equation models to determine an appropriate sample. Although there is no single correct sample size, these authors state that it should be between 100 and 200; this is the critical sample size. They also suggest increasing the size when misspecification is suspected, the model is too large or complex, the data exhibit non-normal characteristics, or an alternative estimation procedure is used.
According to the Global Entrepreneurship Monitor (2017), the process of starting a company can take up to 45 months before it moves on to the next stage as an established business in the adolescence phase. Therefore, the sample of the present study comprises 500 entrepreneurs located in Colombia who are involved in the process of starting a company, namely, nascent entrepreneurs in the gestation or infancy phases of the business cycle (Aldrich & Martinez, 2007). The following inclusion criteria are proposed for the sample:
1. Adults (over 18 years).
2. Nascent entrepreneurs located in Colombia.
3. People who are considered to be nascent entrepreneurs. They are engaged in at least two activities involving the intention to create a business (developing a business plan/model, building a work team, renting an office, attending business courses or seminars, developing a product or service, investing resources in the business idea).
4. People with a newly established business. This means that it has not been in existence for more than 4 years.
Results
Following the proposed theoretical model, the modelling results using the methodology presented are described below.
Multivariate descriptive analysis
The exploratory analysis aims to identify the behavioral patterns of the variables. To this end, the first step is to carry out a univariate analysis of each of the manifest variables present in the model. An alternative method for achieving a comprehensive analysis of the multivariate behavior of the data set is to analyze the correlation between the variables. Accordingly, a PCA is carried out, making it possible to visualize the breakdown of the variables in a new dimensional space. Figure 2 shows the correlation circle associated with the first two principal components of the data; the graph can be read as a radar. For example, the fact that two variables point in the same direction means that they are highly correlated; two variables that form a 90-degree angle indicate a zero correlation; and finally, two variables that point in opposite directions represent a negative correlation. The first aspect to note in the correlation circle is that there are two groups with negative correlations within the group of variables representing ESE (in red). In turn, the variables related to entrepreneurship are found in the first quadrant of the plane. There are four manifest variables related to Person-Entrepreneurship Fit (PEF) and four related to Social Capital (SC) in this quadrant. The PEF latent variable shows the greatest group consistency of responses, as four of its five variables are located in the same quadrant of the plane. The PCA analysis enables an exploratory contextualization of the problem, providing important information for the PLS-PM modelling process regarding the manifest variables that have the greatest association with the latent variables they reflect.
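For readers wishing to reproduce this kind of exploratory step, the sketch below builds a correlation circle from the first two principal components using base R. The data frame mv_data, holding the manifest variables, is a placeholder name and not part of the original study's code.

```r
# PCA on the standardized manifest variables (placeholder data frame mv_data)
pca <- prcomp(mv_data, scale. = TRUE)

# Correlation of each manifest variable with the first two principal components
var_cor <- cor(mv_data, pca$x[, 1:2])

# Correlation circle: each arrow is a manifest variable; arrows pointing the same way
# indicate high correlation, a 90-degree angle indicates near-zero correlation, and
# opposite directions indicate a negative correlation
plot(NA, xlim = c(-1, 1), ylim = c(-1, 1), asp = 1,
     xlab = "PC1", ylab = "PC2", main = "Correlation circle")
symbols(0, 0, circles = 1, inches = FALSE, add = TRUE)
arrows(0, 0, var_cor[, 1], var_cor[, 2], length = 0.08, col = "grey30")
text(var_cor[, 1], var_cor[, 2], labels = rownames(var_cor), cex = 0.7, pos = 3)
```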
PLS-PM modelling
As described in the methodological section, PLS-PM models are comprised of two submodels: the inner model and the outer model. The inner model is formed solely by the latent variables (see Fig. 3), while the outer model represents the relationships of each latent variable with its respective block of manifest variables.
Fig. 2 Correlation circle of the manifest variables
Unidimensionality of the model
In the model, the manifest variables are considered to be reflective indicators, i.e., it is assumed that the entire block of variables is measuring the same latent variable and that the manifest variables of each block must, therefore, have a strong mutual association. That is, if any of the variables increases its value, the rest of the variables in the block must increase their value, and if any variable decreases its value, all of them must decrease their value. Moreover, one of the key characteristics of reflective models is the quality of the representation, i.e., a manifest variable is expected to show its highest degree of association with the latent variable of its own block. Therefore, if a manifest variable has a higher degree of association with a latent variable outside of its block, it is considered an abnormality based on the consistency criterion between the latent variables and their group of manifest or indicator variables. In this study, Cronbach's Alpha is used as a criterion to evaluate how well a block of manifest variables measures its latent variable. Table 2 presents the results of the indices for assessing the unidimensionality of the model. It is evident that for all the latent variables, the Cronbach's Alpha value is greater than 0.7, which is the minimum value accepted in the specialist literature, thus demonstrating that the manifest variables are aligned with the latent variables that they represent.
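If the model has been fitted with plspm as sketched earlier, the package reports these unidimensionality indices directly. The snippet below is an illustrative check against the 0.7 threshold and assumes the pls_model object from the earlier sketch; it is not taken from the study's own code.

```r
# Unidimensionality indices per block: Cronbach's alpha (C.alpha), Dillon-Goldstein's
# rho (DG.rho), and the first two eigenvalues of each block's correlation matrix
pls_model$unidim

# Flag any block whose Cronbach's alpha falls below the conventional 0.7 threshold
subset(pls_model$unidim, C.alpha < 0.7)
```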
The representation of the loadings of the latent variables with their respective manifest variables is shown in Fig. 4, and some variables do not reach the threshold of 0.7 required to be considered representative within the model. For this reason, a second run will be carried out to determine the variables with the highest representation of their latent variable.
Given the above and considering the lack of representativeness of some manifest variables in the first run of the PLS-PM model, a second model was developed involving only the variables that had a specific weight of 65% in the first iteration. Figure 5 shows the latent variables with the loading of each of their manifest variables, demonstrating that all the variables exceed the required threshold of 0.7. For a more illustrative visual comparison, a bar chart with a cutoff line of 0.7 is presented (Fig. 6). As a result of the second model, the composition of the groups of manifest variables for each latent variable was established. However, one of the assumptions of PLS-PM models is the uniformity of the block of indicators that represent a latent variable. For this reason, there must be no cross-correlations within the model, as this could lead to confounding problems when an indicator can better explain a latent variable outside of its block. To verify the fulfilment of the assumptions, Fig. 7 shows that the maximum relationship of the manifest variables is with their direct latent variable; this situation is evident when the diagonal of the figure is examined.
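With plspm, this pruning and re-estimation loop can be expressed in a few lines. The sketch below assumes the pls_model, blocks, inner, and modes objects from the earlier sketch; note that it drops indicators whose loadings fall below 0.7, which is only an approximation of the authors' 65% specific-weight criterion.

```r
# Outer model results: one row per manifest variable with its weight and loading
out  <- pls_model$outer_model
weak <- as.character(out$name[out$loading < 0.7])   # indicators below the 0.7 threshold

# Cross-loadings: each indicator should load highest on its own latent variable
pls_model$crossloadings

# Drop the weak indicators from their blocks and re-estimate the model
blocks2 <- lapply(blocks, function(b) setdiff(b, weak))
pls_model2 <- plspm(survey_data, inner, blocks2, modes = modes,
                    scheme = "centroid", scaled = TRUE,
                    boot.val = TRUE, br = 200)
```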
Structural evaluation of the model
PLS-PM models are an interrelated network of linear regressions. Accordingly, the structural equations for each latent variable are presented in Table 3.
It is clear that Social Capital is significant in explaining PEF. In turn, Social Capital and PEF are significant in explaining ESE. Finally, the Venture Creation construct is significantly explained only by PEF. The results of the inner model validation are shown in Table 4; the R-squared value for the model's variable of interest is 0.5425, which is within the middle range according to the criteria presented by Sanchez (2013). The communality results indicate how much of the variability of the block is reproducible by the respective latent variable. In this case, all the mean communality values are higher than 0.5, showing consistency between the behaviour of the indicators and their latent variable.
Fig. 4 Specific loadings by latent blocks
Inter-variable effects
A significant result in PLS-PM models is the estimation of the effects of the latent variables that comprise the model. The effects quantify the direct or indirect relationship between constructs. As mentioned, effects can be direct, i.e., those directly associated with the inner model and represented by the network coefficients. There are also indirect effects, representing the influence between latent variables following indirect paths in the model. Figure 8 shows the impact of the effects for each construct relationship. It is important to note that Social Capital, which theoretically is not directly related to Venture Creation, has a significant indirect effect in comparison with ESE. At the same time, the impact of PEF on the Venture Creation construct is also evident.
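Under the plspm workflow sketched above, the R-squared, communality, and effect decompositions discussed here correspond to standard output components of the fitted object. The snippet below assumes the pls_model2 object from the previous sketch and is illustrative rather than the study's own code.

```r
# R2 and block communality for each latent variable (only endogenous constructs have R2)
pls_model2$inner_summary

# Path coefficients of the inner regressions, with standard errors and p-values
pls_model2$inner_model

# Decomposition of each relationship into direct, indirect, and total effects
pls_model2$effects

# Overall goodness-of-fit index
pls_model2$gof
```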
Validation of the model
As mentioned in the methodological section, PLS-PM models do not depend on assumptions about the fit to a known distribution, and therefore, resampling techniques are used to estimate the variability of the estimated parameters. The procedure chosen to validate the model is bootstrapping, with 200 successive resamples. The model can be validated because the consistency of the inner and outer models has been proven. The validation results for the inner model's coefficients are shown in Table 5. The validation by bootstrapping is performed at a confidence level of 95%. Thus, the coefficients for which zero is not in the 2.5-97.5% percentile interval are considered significant. Accordingly, the coefficient for the relationship between ESE and VC is found to be non-significant.
Consequently, the significance of the effects is assessed under the bootstrapping procedure. The result in Table 6 identifies that the effect between ESE and VC is non-significant. Table 6 validates the relationships between the latent variables by bootstrap sampling. The second column (Original) contains the coefficient value of the relationship; the third column (Mean.Boot) contains the value of the coefficient validated by sampling; the fourth column (Std.Error) contains the standard deviation of the validation; the fifth column (perc.025) contains the lower bound of the coefficient interval; and the sixth column (perc.975) contains the upper bound of the coefficient interval. It should be noted that if the confidence interval contains zero, the coefficient cannot be distinguished from zero and the relationship is therefore not significant. Based on the above, it can be stated that all of the relationships between the latent variables are significant, with the exception of the relationship between Entrepreneurial Self-Efficacy (ESE) and Venture Creation (VC), considering a confidence interval of 95%.
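The columns described for Table 6 match the bootstrap output produced by plspm when boot.val = TRUE. The snippet below, again assuming the pls_model2 object from the earlier sketches, shows one way to flag which paths are significant at the 95% percentile level.

```r
# Bootstrap results for the path coefficients: Original, Mean.Boot, Std.Error,
# perc.025 (lower bound), and perc.975 (upper bound) over the 200 resamples
boot_paths <- pls_model2$boot$paths

# A path is considered significant when zero lies outside its 95% percentile interval
boot_paths$significant <- boot_paths$perc.025 > 0 | boot_paths$perc.975 < 0
boot_paths
```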
Finally, the outer model is validated by verifying the significance of all the manifest variables with respect to their direct latent variable. Table 7 shows that all these relationships are significant, considering the 95% confidence interval.
Thus, after the internal validations of the structural equations and the external validation through bootstrapping, the final PLS model can be seen in Fig. 9, in which the thickness of the line represents the magnitude of the impact of each variable.
In summary, the structure of the general model (inner model and outer model) is presented in Fig. 10. Figure 10 shows the relationship between the manifest variables and the latent variables (constructs) and their corresponding quantification (path coefficients). It should be noted that this model compiles the results of the research and presents a new way of performing a holistic analysis of the business venture creation process.
Discussion
Although there are strategies and methods to guide start-ups, many of them do not reach the stage of conception despite the advice offered. Some authors suggest that social and psychological factors need not be considered when analyzing the entrepreneurial process. This study focuses on determining the impact of social and psychological factors influencing the venture creation business cycle. These factors are entrepreneurial self-efficacy, social capital, and person-entrepreneurship fit. Entrepreneurial self-efficacy was adopted as a self-efficacy variable from Albert Bandura's Social Cognitive Theory. It is considered to be the individual's knowledge of their capabilities, skills, and coping mechanisms in different situations, how they can take on specific tasks, valuing what they have and what they are, but focused on believing that they can achieve their goals with their resources. For the entrepreneur, it represents how to achieve what they have set out to do, even without previous experience, due to their confidence in their beliefs and abilities. It is worth noting that psychology can identify traits of creative and proactive personalities among entrepreneurs, in which people take advantage of opportunities to develop innovative solutions for their initiatives. In the entrepreneur, the attribute of entrepreneurial self-efficacy may be linked to entrepreneurial behaviors and the achievement of business outcomes. Consequently, the key characteristics are goal-setting and visioning, entrepreneurial orientation, development of new products and market opportunities, building an innovative environment, and planning. These elements can be strengthened through education and training. The concept of social capital was used as a transversal element in business processes, characterized by its collective nature, producing individual and group benefits. This variable is notable for referring to people's abilities to contribute to the pursuit of a common goal beyond material contributions. Goods with social characteristics are constructed in groups. The exchange of relationships allows individuals to learn to make their personal resources available to the network to build interaction and trust. Shared relationships thus enable benefit-producing networks to exist and individuals to understand that the gains from belonging to a social group will outweigh their efforts.
The Social capital concept comprises three dimensions: structural, based on the construction of relationships between individuals and institutions and the social structure; relational, such as the interaction of individuals based on trust and associability; and cognitive, which are the common meanings, language, and codes that a group can use. Trust and interaction are important for entrepreneurship, because they contribute to establishing successful market relationships and forming alliances that offer positive results within the environment. This is how interpersonal relationships contribute to satisfying interests.
Person-entrepreneurship fit is a moderating element for the relationship between entrepreneurial self-efficacy and the venture creation process. It is the satisfaction of their personal needs that an entrepreneur finds by developing an entrepreneurial idea. This component establishes three traits that form a triangle: identification (of the opportunity), needs (that can be satisfied), and coping skills (that they have to carry out tasks). By analyzing this set within the context of opportunities (the social, economic, and governmental structures), the entrepreneurs can develop a true commitment to their business ideas. Theoretically, if they have the skills to perform an activity and that activity satisfies their personal needs, they will have a high level of person-entrepreneurship fit, strong motivation, and credibility to pursue a venture. Based on the literature review, a theoretical model of relations between entrepreneurial self-efficacy, social capital, person-entrepreneurship fit, and venture creation was developed.
The research question is defined as follows: how do social capital, entrepreneurial self-efficacy, and person-entrepreneurship fit positively impact the transition of nascent entrepreneurs toward the creation of start-ups?
To test the relations demonstrated in the theoretical model against empirical data, the constructs had to be operationalized. The constructs, as latent variables, were translated into manifest variables. It was then possible to propose ten hypotheses (H1-H10) linking each of the manifest variables with venture creation. The last hypothesis (H11) suggested the role of social capital as a moderator of the relationship between entrepreneurial self-efficacy and new venture creation. A definition of venture creation was also established to indicate how this paper understands that process.
To answer the research question, empirical data were needed. The questionnaire was sent via email to a database of 7000 entrepreneurs supplied by the innovation agency Punto Estratégico SAS. The validated questionnaires were distributed and the data collected using the digital tool Qualtrics. Five hundred (500) entrepreneurs returned fully answered questionnaires.
The final number of manifest variables was identified using R software. This model operates by defining manifest variables that describe the latent variable, which, when passed through the set of regression equations, establishes an outcome variable (which is also considered a latent variable). It is essential to note that this step enabled us to identify 4 manifest variables for social capital, 7 for entrepreneurial self-efficacy, 4 for person-entrepreneurship fit, and 2 for the definition of the Venture Creation outcome variable.
For data analysis, a hypothetico-deductive research methodology was applied with a cross-sectional, multiple causal correlation design and an explanatory scope. It was divided into three stages: first, an exploratory analysis of the data relating to the study variables; second, a principal component analysis; and third, modelling using the Partial Least Squares-Path Modelling (PLS-PM) methodology. The analysis enabled us to test the hypotheses describing the relations between the constructs and to answer the research question.
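As a rough illustration of this three-stage workflow, the sketch below uses hypothetical column names (sc1-sc4, ese1-ese7, and pef1-pef4 for the predictor indicators, vc1-vc2 for the outcome) and scikit-learn's PCA and PLS regression as simplified stand-ins; the study itself used PLS path modelling in R, which estimates a full structural model rather than a single regression.

```python
# Simplified sketch of the three analysis stages; the data file and column names are hypothetical.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

df = pd.read_csv("entrepreneur_survey.csv")  # hypothetical file with one row per respondent

# Manifest variables per construct (4 social capital, 7 self-efficacy, 4 person-entrepreneurship fit).
predictors = [f"sc{i}" for i in range(1, 5)] + [f"ese{i}" for i in range(1, 8)] + [f"pef{i}" for i in range(1, 5)]
outcome = ["vc1", "vc2"]  # venture creation indicators

# Stage 1: exploratory analysis of the study variables.
print(df[predictors + outcome].describe())

X = StandardScaler().fit_transform(df[predictors])
Y = StandardScaler().fit_transform(df[outcome])

# Stage 2: principal component analysis of the predictor indicators.
pca = PCA(n_components=5).fit(X)
print("Explained variance ratios:", pca.explained_variance_ratio_)

# Stage 3: latent-variable regression linking the constructs to venture creation
# (PLSRegression stands in here for the PLS path modelling used in the study).
pls = PLSRegression(n_components=3).fit(X, Y)
print("R^2 on the venture creation block:", pls.score(X, Y))
```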
Conclusion
Venture creation (understood as implementing a business project), which is the outcome variable, is explained through the relationship of the variables defined: entrepreneurial self-efficacy, social capital, and person-entrepreneurship fit. The development of innovation-based start-ups has become an opportunity to create the jobs needed in Colombia, primarily for the young population. Though many young people opt for entrepreneurship to generate their own income, these initiatives face economic constraints, are informal in nature, are mostly non-innovative, and the jobs they create are focused on family and friends. Innovation-based entrepreneurship is challenged by fear, doubts, and mistrust. Therefore, the initial activities do not turn into venture creation. Numerous studies mention the factors responsible for this situation, such as economic, political, and cultural ones. Weak policies supporting graduates' confidence in creating a venture are another element of an environment that discourages creativity. This study seeks to identify the social and psychological factors that influence entrepreneurship processes. It focuses on entrepreneurial self-efficacy, social capital, and person-entrepreneurship fit as factors impacting venture creation. Accordingly, the study seeks to describe the association of factors that facilitate the transition toward the gestation of start-ups in innovation-based entrepreneurship programmes within Colombia. It also enables us to identify factors that contribute to initiatives failing to become consolidated within the business cycle. Applying the empirical data to the model describing the relations between these factors and venture creation resulted in the following findings:
1. Only the person-entrepreneurship fit variable has an influence on venture creation.
2. Entrepreneurial self-efficacy does not have an influence.
3. Social capital has an indirect effect on venture creation and, additionally, a direct effect on the person-entrepreneurship fit and entrepreneurial self-efficacy variables.
The study encourages researchers to develop interdisciplinary models that better reflect reality. It contributes to enriching the spectrum of what can be studied and to promoting holistic analyses. A practical implication of the study findings is that they identify the measures that make it possible to design strategies, sustainable over time, to support the development of innovative ventures. In turn, this type of analysis is very useful for start-ups, nascent entrepreneurs, and decision-makers in government, because decisions can then be made on the variables that actually affect the process of business creation and are supported by objective evidence.
Including the social and psychological variables (entrepreneurial self-efficacy, social capital, and person-entrepreneurship fit) that influence business cycle processes broadens the research context. It contributes to the development of holistic models and encourages scholars to undertake interdisciplinary research addressing the variety of factors impacting entrepreneurial processes. In this way, research findings will better reflect reality. The present study is an invitation to consider factors beyond the economic and political ones. The next step would be an overarching model capable of successfully linking what has been done in previous studies with what is proposed now.
Intrahepatic cholangiocarcinoma: a dose threshold evaluation in those undergoing transarterial radioembolization
Background Intrahepatic cholangiocarcinoma (ICC) is a rare primary hepatic malignancy. One of the treatment strategies which has shown some promise is transarterial radioembolization (TARE). However, data on dose thresholds, arguably the most important aspect of the procedure itself, are still limited. The study aims to evaluate the relationship between dose to tumor and radiologic response in intrahepatic cholangiocarcinoma patients undergoing transarterial radioembolization. Methods Twenty patients who underwent treatment for 26 tumors were retrospectively reviewed. Radiologic response at 3 months was evaluated, and post-yttrium-90 bremsstrahlung single-photon emission computed tomography/computed tomography was used to determine tumor dose. Other factors such as particle load and activity per particle were evaluated. Results The mean tumor dose for those with progressive disease or stable disease, partial response, and complete response (CR) by European Association for the Study of the Liver (EASL) criteria for the glass cohort was 294±0, 465.4±292.4, and 951.8±666.5 Gy, respectively (P=0.039). A receiver operating characteristic (ROC) curve analysis of tumor dose demonstrated an area under the curve (AUC) of 0.738 (P=0.038), with Youden index analysis demonstrating a cutoff point of >541.7 Gy (sensitivity: 55.56%; specificity: 92.86%) for the glass cohort. Significantly longer survival was noted in those who achieved a CR [HR: 4.79 (95% CI: 1.41–16.25)] and in those treated with glass as compared to resin [HR: 5.02 (95% CI: 1.23–20.55), P=0.025]. Of the 17 treatments in 13 patients performed concomitantly with chemotherapy, 7/17 (41.2%) required a delay in chemotherapy; however, all patients reinitiated chemotherapy after the delay. Conclusions There appears to be a relationship between tumor dose and radiologic response, with this study suggesting that a target of ≥541.7 Gy is warranted in patients receiving treatment with glass microspheres.
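For readers unfamiliar with how such a cutoff is derived, the sketch below shows a Youden-index calculation on a ROC curve using scikit-learn; the dose and response values are invented for illustration and are not the study's data.

```python
# Minimal sketch of deriving a dose cutoff from a ROC curve via Youden's J.
# The dose and response values below are hypothetical, not the study's data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

dose_gy = np.array([294, 310, 420, 455, 510, 560, 600, 720, 880, 1010])
response = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])  # 1 = objective response by EASL

auc = roc_auc_score(response, dose_gy)
fpr, tpr, thresholds = roc_curve(response, dose_gy)

# Youden's J = sensitivity + specificity - 1 = TPR - FPR; the dose maximizing J is the cutoff.
j = tpr - fpr
best = int(np.argmax(j))
print(f"AUC = {auc:.3f}; Youden cutoff ~ {thresholds[best]:.0f} Gy "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```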
that are flat or wander above and below the line of statistical chance. This would explain the perplexing finding of a cutoff point <103 Gy: does this mean that the lower the radiation dose, the better the response? Surely it does not. This is a meaningless finding based on a ROC curve that is not statistically strong enough. Better to acknowledge that good ROC analysis is not possible than to present information that reflects the weakness of the data.
Reply: Thanks for the comment. The 103 Gy mark was poorly reported. As the reviewer points out, the higher radiation dose was more likely to result in the more favorable response. The authors, after consultation with their statistical team, did maintain the ROC analysis. While the low numbers are certainly a significant limitation, the Youden's index does provide some insight into the realm of dose that is likely to be needed, and the statisticians felt that, while certainly not ideal, it was a reasonable analysis to apply.
2. It is quite transparent to the reader that the cutoff of >542 Gy is so driven by the glass product that the result for the entire cohort that includes resin is the same >542 Gy! It is quite clear that the data needs to be separated by device type.
Reply: Thanks for the comment; the portions of the paper which combined the two products in the analysis were removed.
3. If the ROC curves are pretty much flat and the statistics just not strong enough, then trying to find a cutoff with the Youden index may simply be inappropriate. Please discuss with your statistician.
Reply: Thanks for the comment. The authors, after consultation with their statistical team, did maintain the ROC analysis. While the low numbers are certainly a significant limitation, the Youden's index does provide some insight into the realm of dose that is likely to be needed, and the statisticians felt that, while certainly not ideal, it was a reasonable analysis to apply.

4. Table 2 shows that 33% of resin cases had either PD or SD and 6% of glass had SD (no PD). This large discrepancy of 33% vs. 6% needs to be addressed and statistics applied to the extent that the very small sample sizes allow. These are categorical data that can be compared between two groups, possibly using the Fisher exact test or chi-square, whichever applies better. It is puzzling why the authors did not explore this stark difference between the groups. It may be very fertile ground to analyze statistically.

Reply: This analysis has been added to the beginning of the results section.

5. In the subsection "RR and relationship to dose" it is quite clear that the dose was greater for the glass group compared to the resin group. Analysis requires separation according to the type of sphere used.
Reply:
The requested changes have been made and the entire cohort analysis removed.
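For reference, the two-group categorical comparison suggested in comment 4 (response category by microsphere type) can be carried out in a few lines; the 2x2 counts below are invented purely to mirror the 33% vs. 6% contrast and are not taken from the study's tables.

```python
# Minimal sketch of the suggested Fisher exact test comparing response categories
# by microsphere type; the 2x2 counts are hypothetical, not the study's data.
from scipy.stats import fisher_exact

#                [PD or SD, objective response]
resin_counts = [3, 6]    # hypothetical: ~33% PD/SD
glass_counts = [1, 16]   # hypothetical: ~6% SD
odds_ratio, p_value = fisher_exact([resin_counts, glass_counts])
print(f"Fisher exact test: OR = {odds_ratio:.2f}, p = {p_value:.3f}")
```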
6. "RR and relationship to particle-load and Specific activity:" Previously, the authors analyzed data based on device type, but in this section no such effort was presented, even though the most fundamental differences between glass and resin are related to particle size, specific activity, and the particle load typically given at the time of therapy. It is imperative that the authors present this data categorized by device type because it simply is not scientifically sound to mix devices that are physically so different.
Reply:
The requested changes have been made and Table 3 has been reworked to reflect device-specific data.

7. Once the authors have presented data using mRECIST and not RECIST, it will be interesting to see if the multivariate analysis changes using the mRECIST findings.
Reply: When the responses were re-reviewed, all patient categorical responses were the same when using either mRECIST or EASL; therefore, mRECIST was not included as it was felt to be redundant. However, RECIST has been moved to supplemental material as requested.
8. Table 1 mixes data for both resin and glass regarding activity delivered, particle load, and activity per bead. This must be broken down by device type. They cannot be combined given their differences. The term "activity per bead" is not used in other parts of the manuscript. Change to "specific activity". It would be OK to clarify that specific activity means "activity per bead" in the Methods section. Likewise, Figure 3B should be labeled "specific activity" and not "activity per sphere" to be consistent.
Reply:
Thanks for the comment. The requested changes have been made to this table. Figure 3B has been removed based on this and the other reviewer's comments.

9. Table 2. Convert to mRECIST.
Reply: When the responses were re-reviewed, all patient categorical responses were the same when using either mRECIST or EASL; therefore, mRECIST was not included as it was felt to be redundant. However, RECIST has been moved to supplemental material as requested.
10. Table 3. The entire table must be re-worked to account for the physical differences between resin and glass. Can't mix the two. Must re-draw using mRECIST instead of RECIST, and all data should be categorized by resin and glass.
Reply: When the responses were re-reviewed, all patient categorical responses were the same when using either mRECIST or EASL; therefore, mRECIST was not included as it was felt to be redundant. However, RECIST has been moved to supplemental material as requested. Furthermore, the table has been reworked to present data only by the microsphere type used.
11. Table 4. If the authors believe that meaningful statistics cannot be drawn if the data is divided in two parts to account for device type, this must be presented and discussed. Table 4 in its present form loses meaning insofar as tumor dose, specific activity, and particle load are concerned.
Reply: Thanks for the comment; the authors humbly disagree. The devices are different; however, they are becoming more similar all the time as resin provides hotter and hotter flex-dose options and glass continues to offer 2nd-week dosing. This is not represented in this data set as no flex dosing or 2nd-week dose was used. However, in leaving these together it may inform either resin or glass users as to which of these various options may be ideal. This approach also allowed the team to evaluate the effect microsphere type had on outcomes, helping to address a previous comment. Furthermore, the reviewer is correct in their assertion that analysis in this manner after dividing by microsphere type would also not be sound, per our statistician.
12. Figures 1, 2, and 3 have in common a fundamental problem. The cutoffs presented for absorbed dose in Gy cannot be analyzed with the devices mixed. This must be done after separating by the device used and determining cutoffs that apply accordingly. Same for specific activity. This sub-analysis is particularly important because it is possible that the resin product is more effective at a lower radiation dose because of the greater particle load. How will the authors discern these issues if analysis is not done by device type? Additionally, so many KM curves come across as forced into the analysis. It is likely that a simple table showing the responses, survival, and TTP of resin versus glass would be a better starting point that would allow choice of which KM curve to include in the manuscript.
Reply: Thanks for the comment; Figures 1 and 2 have been removed.

13. Supplemental Figure 1. Again, the devices are mixed. Unless categorized by device, the authors may not be able to glean important findings that apply to a particular device because of the mixed data. Moreover, the scatter plot charts may ultimately add little and possibly could be omitted.
Reply: Thanks for the comment; the supplemental figures have been removed from the revised manuscript.
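As an aside, the device-stratified summary and Kaplan-Meier comparison the reviewer asks for in comment 12 could look like the sketch below, which uses the lifelines package; the column names, survival times, and device labels are hypothetical and not drawn from the study.

```python
# Minimal sketch of a device-stratified survival summary and log-rank comparison;
# all data here are hypothetical, not the study's.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months": [4, 7, 9, 12, 15, 20, 26, 30, 33, 40],
    "event":  [1, 1, 1, 0, 1, 0, 1, 0, 1, 0],      # 1 = death observed
    "device": ["resin"] * 5 + ["glass"] * 5,
})

# Simple per-device summary table before deciding which KM curves to present.
print(df.groupby("device")["months"].describe())

resin = df[df["device"] == "resin"]
glass = df[df["device"] == "glass"]

km = KaplanMeierFitter()
for name, grp in (("resin", resin), ("glass", glass)):
    km.fit(grp["months"], grp["event"], label=name)
    print(name, "median survival (months):", km.median_survival_time_)

result = logrank_test(resin["months"], glass["months"], resin["event"], glass["event"])
print("log-rank p-value:", result.p_value)
```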
Discussion:
Much of the discussion may need to be re-written once the data is analyzed as suggested above. Different issues may come up and different conclusions reached.
Reviewer B
The authors presented a rather small cohort of unresectable iCCA treated with Y90 radioembolization over 10 years at a single institution, aiming to identify the relationship between tumor-absorbed dose and radiologic response as measured by EASL and RECIST criteria. The use of Y90 in iCCA has gained popularity over the last 10 years in the interventional oncology field, and this topic is of great importance as knowing the threshold of the tumoricidal Y90 dose would allow effective treatment planning. However, there are a few major weaknesses in this study:

1. Authors should describe tumor characteristics in more detail: tumor number, vascular invasion, and extrahepatic disease.
Reply: Thanks for the comment. The requested details have been added to the first paragraph of the materials and methods.
2. More details are needed regarding patient demographics, such as prior systemic treatment, concurrent treatment, and post-TARE treatment, which affect radiologic responses as well.
Reply: Thanks for the comment; the requested changes have been made to Table 1.

3. Authors should also report adverse events, as a high Y90 dose leads not only to tumor necrosis but also to hepatic failure. If high-dose Y90 may result in high toxicity or a delay/withholding of subsequent chemotherapy, then the use of high-dose Y90 can be limited.
Reply: Thanks for the comments; an adverse events section has been added to the results section of the revised manuscript and is discussed in the revised discussion section.
Was pathologic data available in any patients? Did any patient receive resection or transplant?
Reply: No patients were taken to surgical resection or transplanted after treatment; this has been added to the revised manuscript.
Of note, pages are not numbered.

Introduction: 1) "the results of chemotherapy alone are less than desirable" - This trial is outdated. Please also include TOPAZ-1.
Reply:
The requested changes have been made to reference 2 of the revised manuscript.
Reply: Thanks for the comment; the sentence has been reworked with the reviewer's comments in mind.
Method:
3) "If patients had multiple TARE treatments of a single-lesion each was included and analyzed separately for the primary endpoint provided the treatments were separated by at least 3-months"-please revise/rephrase.
Reply: Thanks for the comment; the sentence has been reworked in the revised manuscript.
Reply: Thanks for the comment. The suggested changes have been made to the written manuscript and tables.
5) "A ROC curve analysis evaluating tumor dose in those who did and did not achieve an ORR by EASL was performed for the entire cohort and resin only cohort, however, could not be performed for the glass only cohort as only a single patient did not have an ORR" - please revise.

Reply: Thanks for the comment. The sentence has been reworked as requested.
6) "monstrates scatter plots for those who did and didn't achieve a"--please spell out.
Reply:
The authors apologize, but we are unsure what the reviewer is asking to be spelled out. No acronym is utilized in the referenced sentence.
Reply: Thanks for the comment; the requested changes have been made.

Reply: Based on the other reviewer's comments, these figures have been removed.
Discussion: ", especially in conjunction with systemic chemotherapy further data is needed (8-17)"-please revise
Reply:
The sentence has been reworked with the reviewer's comments in mind.
"In particular more data on dose thresholds is needed"-authors can include data on dose/effectiveness on HCC to show why knowing dose thresholds on iCCA is important.
Reply: This sentence has been reworked with the reviewer's comments in mind.

"Similarly, if you maintain a constant total activity delivered but drop the activity per particle a higher particle load will be necessary, and this may affect factors such as TNR" - too colloquial. Suggest authors to improve scientific writing throughout the manuscript.
Reply: Thanks for the comment; the sentence has been removed.
Reply: Thanks for the comment; no modified lobectomy technique was used.

Reply: Respectfully, the authors felt that making these changes made the table more difficult to follow. The suggested changes are shown below, but not in the revised manuscript. If the editor feels strongly that the suggested format is superior, then the authors are willing to make the changes.
Table 2:

Suggest listing as count/total, percentage, i.e., in the first column: "2/26, 7.7%" instead of "2 (2/26, 7.7%)".

Reply: The requested changes have been made to all tables and the written manuscript.
Table 4:

Please list OR, 95% CI, and p values in separate columns.
Cancer, Fertility Preservation, and Future Pregnancy: A Comprehensive Review
Given the increases in 5-year cancer survival and recent advances in fertility preserving technologies, an increasing number of women with cancer are presenting for discussion of fertility preserving options. This review will summarize the risk of infertility secondary to cancer treatment, available treatment options for fertility preservation, and techniques to reduce future risks for patients. Concerns that will be addressed include the risk of the medications and procedures, the potential delay in cancer treatment, likelihood of pregnancy complications, as well as the impact of future pregnancy on the recurrence risk of cancer. Recent advances in oocyte cryopreservation and ovarian stimulation protocols will be discussed. Healthcare providers need to be informed of available treatment options including the risks, advantages, and disadvantages of fertility preserving options to properly counsel patients.
Introduction
An estimated 1 out of 47 women will be diagnosed with some type of invasive cancer by age 40 years, and approximately 774,370 women will be diagnosed with cancer in 2011 [1]. The most common cancers in reproductive age women are breast, melanoma, cervical cancer, non-Hodgkin's lymphoma, and leukemia [2]. Fortunately, the 5-year relative survival for all cancers is up from 50% (1975-1977) to 68% (1999-2006) reflecting improved diagnosis and treatment. The 5-year female cancer survival is dependent on stage at diagnosis but is currently 90% for breast, 91% for melanoma, 71% for cervical, 69% non-Hodgkin lymphoma, and 55% for leukemia [1]. Given the relatively high incidence of cancer in reproductive age women and improvements in 5-year survival, an increasing number of women are presenting for discussion of fertility preservation and pregnancy after cancer treatment.
Fertility preservation is a rapidly evolving field that includes medical and surgical treatments to decrease the impact of cancer treatments on future fertility. Traditional fertility preserving techniques for patients undergoing radiation treatment included pelvic shielding or surgical repositioning of the ovaries out of the pelvis. Medical treatments to suppress ovarian function during chemotherapy have also been reported to decrease the effect on cancer treatments on future ovarian function. These modalities still rely on residual ovarian function after cancer treatments to conceive. Newer techniques to preserve ovarian reserve, oocytes, and embryos prior to cancer treatments have been developed to provide an opportunity to conceive in the event that cancer treatments result in permanent loss of ovarian function.
This review will summarize available treatment options for fertility preservation in cancer patients. Concerns that will be addressed include the risk of the medications and procedures, the potential delay in cancer treatment, as well as the impact of future pregnancy on the recurrence risk of cancer. Healthcare providers need to be aware of available treatment options including the risks, advantages, and disadvantages of fertility-preserving options to properly counsel patients.
Methods and Materials
We performed a MEDLINE search to July 2011 using the following terms: fertility preservation, cancer, in vitro fertilization, assisted reproduction, ovarian stimulation, oocyte vitrification, ovarian preservation, hereditary cancer, childhood cancer, preimplantation genetic diagnosis, ovarian reserve, pregnancy, and cancer. Studies included in this review include publications in peer-reviewed journals.
Counseling Patients on Future Fertility Prior to Cancer Therapy.
Future fertility is a significant concern for patients undergoing cancer treatment. Studies have shown that the psychological impact of cancer-related infertility is substantial with 77% of patients reporting clinically significant levels of distress in relation to loss of fertility [3]. A survey of over 600 women with breast cancer indicated that 73% of women reported some degree of concern about the possibility of becoming infertile after treatment and 29% of patients indicated that their desire for future fertility impacted their cancer treatment decisions. In fact, many women indicated that they may choose a less toxic dose of chemotherapy to help preserve fertility even if it may increase the risk of cancer recurrence [4]. Cancer survivors also have higher depression and distress scores if they have unmet informational needs about future reproductive options [5]. Recognizing these concerns, the American Society of Clinical Oncology (ASCO) published recommendations in 2006 on fertility preservation in cancer patients. These guidelines state that oncologists should address the possibility of infertility with cancer patients and be prepared to discuss possible fertility preservation options or refer the patient to a reproductive specialist [6].
Despite the 2006 ASCO recommendations, a nationwide survey of oncologists in 2009 reported that less than 50% referred patients to a reproductive specialist [7]. A survey of academic medical centers reported similar results with less than 40% referring patients to a reproductive specialist although 95% reported that they routinely discussed the effect of cancer treatment on fertility [8]. Factors that were associated with a higher likelihood to refer in patients with breast cancer included patients with a family history of breast cancer, older age, early stage cancer, and receiving care at an academic center [9]. Interestingly, a review of NIH intramural clinical trials for pediatric cancer, gynecologic cancer, or for stem-cell transplantation found that only 47% of patient consents addressed future infertility risks after cancer treatment [10].
Risk of Infertility after Cancer Treatment.
The adverse effects of chemotherapy and radiotherapy on female reproduction have long been recognized. Part of the difficulty in counseling patients regarding the risk of infertility and/or subsequent pregnancy complications is that the risks are dependent on several factors. These risks include the dose and duration of treatment, other risk factors for infertility, the age of the patient, and the patient's baseline ovarian reserve at the time of initiation of treatment.
Pelvic and/or abdominal radiation impacts future fertility by affecting both uterine and ovarian function. Radiation is typically administered as external beam therapy (teletherapy), intracavitary (brachytherapy), or total body irradiation as is utilized with stem-cell transplantation. Radiation is most commonly used in children for treatment of Wilms' tumor, abdominal rhabdomyosarcoma, and Ewing's sarcoma of the pelvis or spine. The effects of radiation therapy are dependent on the dose and the field applied. Radiation is typically targeted at the affected area; however, the impact of scattered radiation during treatments is also a consideration. Total body irradiation as used in stem-cell transplantation has an over 80% risk of permanent amenorrhea. Limited field external beam radiation has a reduced risk depending on the location, dose, fractionation schedule, and age of the patient at the time of radiation treatment. In a study of 2000 women treated with pelvic radiotherapy, 95% had permanent ovarian failure following radiotherapy of 5-105 Gy [11]. It was reported that radiation doses over 5 Gy for women over 30 years result in permanent amenorrhea; however, it has been reported more recently that the lethal dose (LD50) of the human oocyte is actually less than 2 Gy [12].
Although the uterus is relatively resistant to the effect of radiotherapy, there is a concern that radiation may decrease uterine blood supply, volume, and endometrial thickness. Exposure to 20-30 Gy of abdominal or pelvic radiation has been shown to increase the future risk of miscarriage, preterm labor, and low birth weight [13]. The impact of radiation on future uterine function is dependent on the age at radiation in childhood cancers. The prepubertal uterus appears to be more vulnerable to the effects of radiation. Hormonal stimulation with estrogen and progesterone to improve endometrial thickness and blood flow after radiation has been evaluated with variable effectiveness. The uterine volume increased significantly from 6.5 mL to 16.3 mL but was still less than controls after 3 months of hormonal replacement provided to patients with amenorrhea after radiation treatments. It was noted that patients exposed to prepubertal radiation had less improvement than patients exposed postpubertally [14]. Another study evaluating 3 childhood cancer survivors that received high-dose abdominal or pelvic radiation found no increase in uterine volume, blood flow, or endometrial thickness with high-dose estrogen therapy [15]. These limited studies indicate that higher dosages of radiation affect uterine function and are most significant if administered prepubertally. Hormonal therapy may have limited benefit for improving endometrial development, and patients that conceive should be considered at higher risk of preterm labor and delivery and low birth weight.
The primary impact of chemotherapy on fertility is related directly to the loss of ovarian function secondary to the gonadotoxicity of many chemotherapeutic agents. Cell-cycle nonspecific alkylating agents such as cyclophosphamide will destroy resting primordial oocytes while antimetabolite agents (methotrexate) have limited effect on ovarian function. The greatest risk is in women over age 40 years receiving alkylating agents with up to 80% of patients having permanent amenorrhea after treatment. However, in women under 30 years, the risk of permanent amenorrhea is substantially decreased to less than 20% [6]. The effect of chemotherapy will also depend on whether it is radical or adjuvant, single agent, or combination. Fortunately, the more recent ABVD regimen (Doxorubicin, Bleomycin, Vincristine, and Dacarbazine) used in the treatment of Hodgkin's disease is significantly less toxic to fertility than the older MOPP (Mechlorethamine, Vincristine, Procarbazine, and Prednisolone). The classical CMF (Cyclophosphamide, methotrexate, 5-fluorouracil) regimen for breast cancer will result in over 70% amenorrhea rates for women over 40 years [16]. The newer Taxanes are still being evaluated for their impact on fertility but hopefully will be less gonadotoxic than currently used regimens. Unfortunately, estimates of the impact on fertility vary widely dependent on various factors, and, therefore, there is no definitive predictor prior to treatment, making counseling on future fertility challenging for health care providers.
Determining the Impact of Cancer Treatment on Ovarian Reserve.
The peak number of oocytes is found in females at 20 weeks of fetal life, and this number declines until menopause. The number of primordial follicles is approximately 500,000 at menarche; menopause occurs once that pool is nearly depleted. Although chronologic age is the most important predictor of oocyte quality and quantity, there is variability in the rate of ovarian aging. The term "ovarian reserve" is used to describe remaining ovarian oocyte quantity. Although menstrual cycles do not start to become irregular until a mean age of 45 to 55 years, endocrinologic changes associated with ovarian aging have been demonstrated for women age 35 to 40 years and at earlier ages after cancer treatment. Several modalities have been evaluated as markers of ovarian reserve including cycle day 2-3 FSH, anti-Müllerian hormone (AMH), and ovarian ultrasound of antral follicles. Assessment of a patient's ovarian reserve both before and after cancer treatment may provide valuable information for patients in discussions of fertility-preserving options prior to treatment and future fertility after treatment.
Basal FSH values drawn on menstrual cycle day 2-3 have been routinely utilized as a marker of ovarian reserve.
As FSH values increase, ovarian responsiveness decreases. An FSH value of 10-15 IU/L is generally considered borderline, and values over 15 IU/L are considered significantly elevated [17]. It is important when assessing FSH values to also evaluate basal estradiol as an elevation may suppress FSH and give a falsely reassuring value. A normal basal estradiol may vary between laboratories but typically is less than 60 pg/mL. FSH values may fluctuate widely between cycles, particularly for patients with decreased ovarian reserve, which limits their effectiveness as a marker of remaining ovarian function.
AMH is a member of the transforming growth factor β family and is produced by the granulosa cells of the secondary, preantral, and antral follicles. AMH levels decrease progressively until they become undetectable at menopause. Theoretically, this may be a better marker of ovarian reserve as it represents the number of early and developing follicles and appears to have less intercycle variability than FSH [18]. One significant advantage of this test is that it does not require assessment on cycle day 2 or 3 since there is limited variability during the menstrual cycle. However, this test may not be routinely available at all laboratories, and no international standard has been developed yet for this assay.
Ultrasound of ovarian follicle counts (AFC) has also been evaluated as a tool for predicting ovarian reserve. The number of antral follicles (<10 mm) present on a menstrual cycle day 2 to 3 by transvaginal ultrasound has also been correlated with other serum markers of ovarian function [19]. AFC is directly correlated with the number of oocytes retrieved during IVF and may prove to be the best predictor of ovarian reserve. Intercycle variability is present with all forms of ovarian reserve testing, and no single test has been consistently recommended. Basal FSH has been the mainstay of screening, but basal AFC and AMH may prove to be superior.
There is limited information on the impact of cancer treatment on markers of ovarian reserve. Small studies in young cancer patients have indicated that FSH, AMH, and AFC all change in response to chemotherapy [20]. One study evaluated 42 premenopausal women receiving neoadjuvant chemotherapy who were followed over 5 years. Pretreatment FSH, AMH, and AFC were all found to reflect future ovarian activity for women with menses after chemotherapy, but AMH was the most predictive by logistic regression [21,22]. Further research is needed to determine the impact of cancer treatments on markers of ovarian reserve and any correlation with future fertility. It is important to consider that most research has evaluated these markers in relation to success of ovarian stimulation for IVF; therefore, caution must be used in counseling patients on the likelihood of spontaneous pregnancy or with other fertility treatments.
Ovarian Suppression during Chemotherapy.
It has been well documented that chemotherapeutic agents, particularly alkylating agents, have high levels of ovarian toxicity. Oocytes are contained in ovarian primordial follicles, and it is estimated that hundreds to thousands of these follicles initiate the maturation process each month and are susceptible to the gonadotoxic effects of chemotherapy. Primordial follicles are stimulated to initiate maturation through a complex process that is initiated by follicle stimulating hormone (FSH) release from the pituitary in response to hypothalamic gonadotropin releasing hormone (GnRH). Suppression of ovarian function through manipulation of GnRH has been evaluated as a mechanism to decrease the loss of primordial follicles.
Administration of GnRH analogs results in downregulation of pituitary receptors within 10-14 days of administration and subsequent suppression of FSH release. This has been studied in animal models with promising results but data regarding effectiveness in humans is limited to small retrospective reports. A recent systematic review [23] evaluating the utility of GnRH agonists in patients with breast cancer summarized data on the 5 available nonrandomized studies. The largest study of 100 women receiving 12 months of GnRH analogs during cancer treatment found that 67% of patients recovered normal menses with 100% return of menstrual function for women less than 40 years of age. However, only 3 pregnancies were reported [24]. Smaller studies have reported resumption of menses for 72-90% of patients with several pregnancies reported.
Unfortunately, it cannot be determined from these studies that the administration of GnRH agonists provided definitive ovarian protection. There are 4 reported ongoing prospective, randomized trials in women with hormone receptor-negative breast cancer to evaluate the effect on preserving fertility [25]. Outcome data from these studies will provide valuable information on the utility of this treatment in preserving ovarian function. It should be noted that there are some concerns regarding the use of GnRH agonists. It has been suggested that GnRH agonists may decrease the effect of tamoxifen if administered simultaneously, and until more data are available, ASCO recommends that women interested in this treatment receive it only as part of an approved clinical trial [6].
Embryo Cryopreservation.
The basic principle of cryopreservation is to store cells or tissue for future use. Damage to cells during the cryopreservation process has been a barrier to the general use of this technology. Cryopreservation is typically performed by incubation in a low concentration of cryoprotectant to minimize ice crystal formation during freezing; however, cells with a high osmotic content such as oocytes are particularly vulnerable to damage. Embryos are composed of multiple blastomere cells and are more stable for cryopreservation. Due to the difficulties with oocyte cryopreservation, embryo cryopreservation has been the primary modality for fertility preservation and has been available since the 1980s. The most recent available data have indicated that over 21,000 embryo transfers occurred in the USA in 2009 from frozen, thawed embryos, resulting in per-cycle pregnancy rates of 35. Embryo banking has several advantages for patients interested in preserving fertility. It provides reassurance to a patient that she will have some potential to conceive if the cancer treatments result in permanent amenorrhea. There is also over 20 years of outcome data for cryopreserved embryos showing no effect on miscarriage, implantation rates, or live birth [26]. A disadvantage of embryo banking is the need to administer ovarian stimulation medications to obtain oocytes for fertilization. Ovarian stimulation is a particular concern for patients with hormonally sensitive tumors such as breast cancer and will be addressed further in this review.
The American Society for Reproductive Medicine (ASRM) Ethics Committee published guidelines in 2005 on fertility preservation and reproduction in cancer patients. These guidelines state that the only established method of female fertility preservation is embryo cryopreservation and that experimental procedures such as oocyte or ovarian tissue cryopreservation should be offered only in a research setting with IRB oversight [27]. However, these recommendations may be revised with improvements in oocyte preservation technology and increasing numbers of live birth reported in the past several years.
Oocyte Cryopreservation.
Recent advances in oocyte cryopreservation technology have expanded the use of this technology for fertility preservation. Disadvantages are similar to those of embryo banking including the risk of ovarian stimulation for patients with hormonally responsive cancers and the potential delay in starting cancer treatments. Oocyte banking is preferable over embryo banking for patients that do not have a partner and/or are not interested in utilizing donor sperm or have ethical concerns regarding cryopreservation of embryos.
Until recently, the primary disadvantage of oocyte banking has been the lower success rate compared to embryo cryopreservation. The first pregnancy from oocyte cryopreservation was reported in 1986 [28], but few pregnancies were subsequently reported due to poor survival rates for oocytes. The poor survival rates for oocytes that have been cryopreserved and thawed are attributable to several factors. Oocytes have a relatively high volume compared to other cells and are susceptible to intracellular ice crystal formation. Cryopreservation of oocytes has also been shown to result in chromosome and DNA abnormalities as the meiotic spindle of oocytes is very sensitive to chilling. Oocytes are also more susceptible to damage from reactive oxygen species than other cells. Many of these parameters improve after fertilization, making embryos less susceptible to damage than oocytes [29].
The more recent development of oocyte vitrification incorporates several modifications to traditional cryopreservation that result in less toxicity to oocytes. Oocyte vitrification exposes oocytes to higher concentrations of cryoprotectants for shorter durations of time followed by very rapid cooling. There have been over 500 pregnancies reported worldwide since 2005 with improvements in oocyte cryopreservation techniques [30]. A meta-analysis of randomized controlled trials assessing efficacy of oocyte vitrification reported similar fertilization, embryogenesis, and pregnancy from oocytes derived from vitrified oocytes compared to fresh oocytes. The authors state that increasing reports of successful cryopreservation of oocytes warrant reexamination of whether oocyte vitrification should still be considered an experimental technique [31].
Ovarian Tissue Cryopreservation.
Ovarian tissue cryopreservation has also been evaluated as a modality to preserve future fertility. A portion of ovarian cortex is cryopreserved and then transplanted back to the pelvis, or other location (arm or abdominal wall has been reported) [32,33]. The first report of an ovarian transplant operation occurred in 2000 [34] with the first pregnancy reported in 2004 [35] in a patient with non-Hodgkin's lymphoma. There have been fewer than 15 reported pregnancies worldwide with this technique; however, the first report of a woman that gave birth to a second child by natural conception after ovarian tissue transplantation has recently been reported [36].
Advantages of ovarian tissue transplantation include that it can be performed in prepubertal girls and adolescents, can be performed at any point in the menstrual cycle, has the potential to save large numbers of oocytes, and may allow for spontaneous pregnancy in the future without in vitro fertilization or ovarian stimulation. Disadvantages include the need for surgery (typically by laparoscopy) to remove the tissue and the risk of graft failure. There is also some evidence that oocyte quality may be compromised, with lower than expected fertility rates even with IVF. Another very significant concern which may limit its usefulness for cancer patients includes the possibility of contamination of ovarian tissue by malignant cells, which has been reported with hematologic cancers and Ewing's sarcoma [21,22].
Patients undergoing ovarian tissue cryopreservation may still require future ovarian stimulation with gonadotropins and/or in vitro fertilization. Options that have been investigated to eliminate the risk of exposure to gonadotropins include in vitro maturation (IVM) or in-follicle maturation (IFM) of oocytes. These techniques require surgical removal of immature oocytes followed by in vitro exposure to gonadotropins to mature oocytes outside the body. There has been limited success with this approach utilizing immature oocytes aspirated during either the follicular or luteal phase of the menstrual cycle and matured in vitro. Although the survival rate is lower than with oocytes matured in vivo and vitrified, survival rates of 67.5% and clinical pregnancy rates of 20% have recently been reported [37]. Further data are needed to determine if this will be an effective treatment option for patients.
Recent Developments in IVF for Cancer Patients.
Until recently, preserving oocytes or embryos have required a delay in cancer treatment of up to 4-6 weeks to complete the IVF cycle. Traditional ovarian preparation for IVF required 10-14 days of ovarian stimulation with exogenous gonadotropins preceded by ovarian suppression with GnRH agonists for approximately 2 weeks to prevent premature ovulation. Medications were initiated in the luteal phase of the cycle which may add up to 3 additional weeks to the process depending on when the patient presents for treatment.
Recent advances that include the development of GnRH antagonists have significantly decreased the interval from patient presentation to gamete cryopreservation. In contrast to GnRH agonists, GnRH antagonists immediately suppress pituitary release of FSH and LH and do not require the 10-14 days of administration prior to gonadotropin initiation. GnRH antagonists are initiated at approximately day 6 of gonadotropin stimulation which begins on day 2-3 of a menstrual cycle. This approach still requires awaiting menses prior to initiating gonadotropins but decreases the interval to oocyte retrieval compared to traditional IVF stimulation protocols.
A recent report of 3 patients initiating "random start IVF" evaluated the effectiveness of initiating GnRH antagonists at the time of patient presentation (menstrual cycle day 11, 14, and 17) rather than waiting for menses. This was then followed by the standard 10-14 days of ovarian stimulation and subsequent oocyte retrieval. The goal was to decrease the time to oocyte retrieval for breast cancer patients and resulted in a reasonable ovarian response with 7-10 embryos cryopreserved per patient [38]. This approach provides a significant advantage by decreasing total time for the IVF cycle, but further data is needed to determine its effectiveness compared to traditional IVF stimulation regimens.
In addition to the delay in cancer treatment, ovarian stimulation for IVF poses another theoretical risk to patients with hormonally responsive cancers. Ovarian stimulation with gonadotropins for IVF often results in supraphysiologic estradiol levels of over 2000 pg/mL compared to normal physiologic peak estradiol levels of 200-350 pg/mL. The high estradiol levels sustained during IVF treatment are a particular concern in women with estrogen receptor positive breast cancer. In the initial nonrandomized studies, stimulation protocols that include the selective estrogen receptor modulator tamoxifen or aromatase inhibitors such as letrozole administered during gonadotropin treatment have been shown to decrease estradiol level production while not decreasing overall oocyte numbers. Initial reassuring data indicates that this approach has not been shown to increase short-term cancer recurrences for breast cancer patients [39,40].
Additionally, estradiol levels may be reduced after oocyte retrieval by the use of GnRH agonists to trigger ovulation instead of hCG. This has been shown to substantially reduce the risk of ovarian hyperstimulation in patients undergoing IVF by decreasing ovarian stimulation after retrieval. This has been evaluated in oocyte donors undergoing oocyte vitrification and has been shown in a retrospective study to result in similar numbers of oocytes retrieved. There was also no significant difference in the percentage of oocytes surviving thawing, oocyte fertilization, and pregnancy rates [41]. Further research is needed to determine if this will be beneficial in cancer patients undergoing oocyte or embryo banking but holds promise to further decrease any theoretical risks of breast cancer progression or recurrence as a result of ovarian stimulation.
Additional Considerations.
Counseling of patients for future fertility should also include a discussion of alternative options including third-party reproduction. Third-party reproduction includes the use of either oocytes donated by another individual (either known or anonymously) as well as gestational carriers ("surrogates") to carry a pregnancy.
Oocyte donation may be utilized for patients without residual ovarian function after cancer treatment. Oocyte donation enables a patient to conceive and carry a pregnancy if she is unable to conceive with her own oocytes. Gestational carriers are most commonly used for patients that do not have a functional uterus to carry a pregnancy. They may also be considered for patients that are concerned about the recurrence of hormonally responsive tumors during pregnancy or any increased risks of pregnancy complications after cancer treatment. Adoption is also a consideration for family building but may be more difficult for cancer survivors than patients without a history of cancer [42].
Unfortunately, a significant barrier for many cancer patients is the cost of fertility-preserving treatments. Insurance coverage is often not provided for these treatments as they are often considered "elective." It has been argued that insurance companies should provide coverage for iatrogenic infertility as a result of cancer treatments, similarly to coverage provided for other iatrogenic post-cancer treatment conditions such as breast reconstruction after mastectomy and wigs for alopecia [43]. The average cost of fertility preservation for female cancer patients pursuing either embryo or oocyte cryopreservation is $8655 [44] and remains a barrier to access. Resources such as Fertile Hope's Sharing Hope Program can help patients and clinicians find centers with fertility preservation services as well as programs to provide financial assistance (http://www.fertilehope.org/).
Despite the concerns for patients including costs and potential risks, a followup survey of 28 cancer survivors who attempted fertility preservation found that 92.3% felt positively about their decision to undergo fertility preservation with only one patient, diagnosed with metastatic cancer shortly after oocyte retrieval, expressing regret [45]. The fact that patients with cancer recurrence may die and leave a minor child with one parent is an ethical concern. It has been suggested that it may be unethical to enable a woman to reproduce if she is expected to have a shortened lifespan. A review of fertility preservation and reproduction in cancer patients by the Ethics Committee of the Society for Reproductive Medicine stated that this concern may not be persuasive given that the risk of recurrence for many patients may not be excessively high, and the child may have a meaningful life despite the death of a parent [27].
Conceiving after Cancer and the Risk of Pregnancy Complications.
The likelihood of conceiving after cancer treatments is dependent on the type of cancer, age at diagnosis, treatments with gonadotoxic agents including type and duration, and various other fertility factors. The chance for conception at best can only be estimated based on individual patient history and characteristics. It also appears that future fertility may be influenced by gender. Overall, the likelihood of future children was found to be lower for female cancer survivors than male survivors, either spontaneously or with fertility treatments [46]. When stratifying for age at diagnosis and estimating from probability charts, men with a cancer diagnosis prior to age 30 years had the highest overall chance of future parenthood (50%), followed by women diagnosed at age 30 years or younger (32%), then males diagnosed after age 30 years (12%), and then females diagnosed after age 30 years (<5%). For female patients the likelihood of pregnancy was dependent on the type of cancer and was highest for patients after uterine choriocarcinoma (65%), followed by lymphoma (23%) and malignant melanoma (22%), with all other cancers below 5%.
If pregnancy is established, there are several potential risks to a fetus conceived after cancer treatment. Both radiation and chemotherapy may induce chromosomal aberrations in oocytes that may theoretically increase the risk of birth defects and genetic disease in offspring. A review of studies evaluating the risk of malformations in offspring of breast cancer survivors did not report an increased risk of birth defects compared with controls [47]. It may be that any remaining pool of primordial follicles after treatments is unaffected by the prior treatment, and/or those oocytes that fertilize and develop into ongoing pregnancies are from a cohort of oocytes that do not demonstrate any carcinogenic effect. However, when considering the half-life of treatments and the duration of time for oocyte maturation, it has been recommended to delay pregnancy for at least 6 months [47] after treatment with chemotherapy and 12 months following completion of radiotherapy to minimize risks to offspring [48].
Pregnancy complications and the subsequent risk to the fetus are another concern for cancer patients. A review of pregnancies in patients previously treated for breast cancer reported variable outcomes [47]. This meta-analysis evaluated 6 studies reporting birth outcome data after breast cancer compared to women without breast cancer. Four studies found no increased risk of any pregnancy complications; however, one study reported a higher risk of miscarriage and another reported no higher risk of miscarriage but a higher risk of cesarean section, preterm birth, low birth weight, delivery complications, and congenital abnormalities. The authors of the review suggest that although the large majority of births from women previously treated for breast cancer had no adverse effects, these women are at higher risk and may need careful monitoring until additional studies resolve the discrepancy in the data.
A recent report of birth outcomes obtained from a childhood and adolescent cancer registry from 4 US regions has reported that infants born to female childhood cancer survivors were more likely to be preterm (RR 1.54) and to weigh less than 2500 g (RR 1.31). Although there appeared to be a higher risk during the pregnancy, there were no increased risks to the offspring of malformations, infant death, or altered sex ratio, indicating no increased risk of germ cell mutagenicity [49]. A review of pregnancies post-cancer diagnosis in adults indicated that subsequent pregnancy did not represent a major health risk for the mothers or children. In 678 pregnancies there was no increased risk of congenital malformations (OR 0.6), though pregnancies more often resulted in preterm delivery (OR 2.8), low birth weight (odds ratio 2.5), and cesarean section (OR 2.3) and were delivered on average 6 days earlier, even after controlling for multiple births from patients utilizing fertility treatments to conceive [46].
It is not clear whether the increased risks in pregnancy are related to the malignancy itself or the result of treatments such as radiation or chemotherapy. Several considerations exist in cancer patients that may affect the risk to a developing fetus including altered metabolism, nutrition deficiencies from malabsorption of nutrients, increased stress, and general overall decreased health. It is also possible that these patients might be subjectively viewed as higher risk by their physicians and are electively delivered earlier. In summary, it does appear that there may be an increased risk of preterm birth and associated neonatal complications for female cancer survivors, but the outcomes of the majority of pregnancies appear similar to noncancer patients.
Risk of Transmission of Genetically Linked Cancers to Offspring.
Although there does not appear to be a definitive increased risk of congenital abnormalities for the offspring of female cancer patients, there is a concern over the transmission of genetically linked cancers. Hereditary cancers account for about 5% of all malignancies [50]. Most hereditary cancers follow an autosomal dominant mode of inheritance with the most common being hereditary nonpolyposis colorectal cancer, familial breast and ovarian cancer, neurofibromatosis type 1, familial retinoblastoma, multiple endocrine neoplasia type 2, and familial adenomatous polyposis. Fewer hereditary cancers have an autosomal recessive inheritance, such as ataxia telangiectasia and Fanconi anemia [51].
Preimplantation genetic diagnosis (PGD) is a technique to screen embryos for genetically transmissible diseases prior to implantation. PGD involves removing one or more cells from an embryo after IVF and testing for predisposing mutations. PGD may be performed for genetic diseases where the gene has been identified and tested. PGD has been performed for all of the cancer predisposition syndromes mentioned previously, in addition to several other less common susceptibility syndromes [52].
Significant controversy exists over the ethical aspects of screening embryos for disease. A survey of 4,834 Americans in 2004 found that approximately 52% of women and 62% of men reported that they approved of PGD for screening embryos that had a tendency to develop a disease such as adult-onset cancer [53]. A more recent survey of attendees at a national conference for individuals and families affected by hereditary breast and ovarian cancer reported that only 32% of participants had ever heard of PGD; however, 57% believed that it was an acceptable option for high-risk individuals and that patients should be given this information by their health-care provider [54]. The ASRM Ethics Committee guidelines state that concerns about the welfare of resulting offspring should not be cause for denying cancer patients assistance, and that preimplantation genetic diagnosis to avoid the birth of offspring with a high risk of inherited cancer is ethically acceptable. However, selection to avoid a genetic disease may not always be appropriate, and factors such as the severity of the disease, the probability of its occurrence, and the age at onset should be considered.
Pregnancy and Cancer Recurrence.
Cancer is diagnosed in one of every 118 pregnant women each year. There are several concerns for patients pursuing pregnancy after cancer treatment that may be dependent on the type of cancer and treatments. Concerns include the risk of cancer recurrence either during or after treatment, the possible increased risk of cancer recurrence secondary to pregnancy itself (breast cancer, endometrial cancer, and malignant melanoma), and the difficulty in detecting cancer during pregnancy (breast cancer and endometrial cancer).
For most cancers, future pregnancy does not negatively impact the likelihood of recurrence. However, concern exists for several hormonally mediated cancers, given that the hormonal milieu of pregnancy may increase the risk of recurrence. The most common female tumors in reproductive-age women that have been associated with hormonal mediators include breast cancer, endometrial cancer, and malignant melanoma. The most common cancer in women of childbearing age is breast cancer, which is particularly concerning due to its clear association with hormonal markers. Estrogen and/or progesterone receptor-positive tumors pose a particular challenge in counseling patients regarding recurrence risks during pregnancy and long-term overall recurrence risk.
Approximately 2% of all breast cancers occur in women between 20 and 34 years of age and 11% in women between 35 and 45 years. Given the relatively young age at diagnosis and initial treatment, there is a risk of recurrence during the reproductive years. The overall risk of recurrence and timing of recurrence in the context of pregnancy is difficult to evaluate due to the complex associations with other predisposing factors such as age at diagnosis, prior pregnancy history, age at menarche, and family history. Additional considerations include whether or not pregnancy itself affects the long term survival for patients with breast cancer, and whether or not the timing of pregnancy affects any risk of recurrence.
Initial studies in the 1980s and early 1990s indicated that there did not appear to be a difference in survival across time intervals from diagnosis to pregnancy [55][56][57]. In contrast, follow-up data indicated that patients who become pregnant within the first 3 months [58] or the first 6 months [59] after the initial breast cancer diagnosis may have an increased mortality. Clark et al. compared women who conceived within 6 months after a diagnosis of breast cancer to those patients who became pregnant between 6 and 24 months and more than 5 years after a diagnosis and found 5-year survival rates of 54%, 78%, and 100%, respectively. Another population-based study in 2006 showed a statistically nonsignificant increased mortality risk (RR 2.20, P = 0.58) for women diagnosed with breast cancer less than 6 months before pregnancy. However, if the interval was more than 2 years, the risk of death was reduced significantly (RR 0.48, P = 0.009) [60].
A recent 2011 meta-analysis by Azim et al. addressed the optimal timing of pregnancy for breast cancer patients. Five studies compared 353 patients who became pregnant within 6-24 months of a breast cancer diagnosis with those who became pregnant after 2 years and found that pregnancy within 6-24 months or beyond 2 years did not have an effect on overall outcome. In summary, the data are controversial, but it would appear prudent to advise waiting a minimum of 6 months after diagnosis to attempt pregnancy; waiting more than 2 years is perhaps advisable and will depend on individual patient characteristics. For patients at higher risk of recurrence, a delay of 5 years or more may also be recommended.
Interestingly, several studies have suggested that pregnancy is actually associated with a better long-term prognosis for breast cancer patients. The Azim et al. meta-analysis reviewed 14 studies of women who became pregnant after breast cancer and reported that 8 studies demonstrated a significant survival advantage, while the remaining 6 showed a trend favoring pregnancy that did not reach statistical significance [61]. One criticism of studies reporting an improved survival for patients with breast cancer is that they may have included a selection bias referred to as the "healthy mother effect." The "healthy mother effect" infers that women who become pregnant represent an overall healthier group of patients with perhaps a lower risk of disease relapse. The Azim et al. meta-analysis incorporated several sensitivity analyses to attempt to control for the "healthy mother effect" but still reported that women who got pregnant following a breast cancer diagnosis had a 41% reduced risk of death compared to women who did not get pregnant, an effect most notable in patients with a history of node-negative disease. In a subgroup analysis, they compared the outcome of women with a history of breast cancer who became pregnant to breast cancer patients who did not get pregnant and did not find a difference in survival between the groups. A separate meta-analysis also controlling for the "healthy mother effect" found similar results, with survival that was higher among early-stage breast cancer patients compared to controls (hazard ratio 0.51) for pregnancy that occurred at least 10 months after the diagnosis [62].
The mechanism by which pregnancy may provide a protective effect is not clearly understood. It has been found that parous women have changes in the expression of markers of disease recurrence, including estrogen receptor alpha and beta (ERα, ERβ) and human epidermal growth factor receptor 2 (HER2), for up to 10 years after pregnancy, which may provide protection from cancer recurrence [63]. Patterns of breast cancer recurrence have been evaluated with regard to estrogen receptor status in two randomized trials with 25 years of median follow-up. It was reported that most breast cancer recurrences in ER-negative patients occurred within the first 5-7 years after randomization, while ER-positive patients had events spread through 10 years. Patients with ER-positive breast cancer generally receive 5 years of adjuvant hormonal therapy and are recommended to delay childbearing, although some women may elect to interrupt hormonal therapy to conceive. Even for patients who may not be receiving adjuvant hormonal therapy, it is recommended that patients wait a minimum of 2 years following diagnosis to conceive due to a generally higher incidence of recurrence in the first 2 years after diagnosis [64]. Overall, available data support that pregnancy after breast cancer is safe for women at low risk of recurrence, but the timing of pregnancy will depend on individual patient characteristics and estrogen receptor status.
Although less common than breast cancer, malignant melanoma is a cancer with a peak incidence in the 30s and 40s resulting in a substantial number of women in their reproductive years interested in pursuing pregnancy after treatment. In contrast to breast cancer, there is limited evidence that hormonal mediators significantly influence this cancer. It has been noted that patients diagnosed during pregnancy often have a more rapid progression of their cancer, and estrogen-receptor proteins have been detected in tumor specimens. However, most studies have not found a statistical difference in 5-year survival rate for pregnant or nonpregnant patients. A study investigating endocrine ablation with procedures such as oophorectomy on patients with melanoma showed no benefit [65]. However, this author does suggest advising against future pregnancies for patients with nodal metastases or those who experienced tumor activation during a prior pregnancy. Others have recommended that all women with a history of melanoma avoid pregnancy for 3-5 years after treatment [66].
Endometrial cancer is another hormonally mediated cancer, as evidenced by the fact that exposure to unopposed estrogen is a significant predisposing factor. Progesterone offers a protective effect on the endometrium, but both estrogen and progesterone are elevated during gestation. Limited data are available regarding the absolute risk of cancer progression or recurrence for patients with a history of endometrial cancer, as most patients are treated with a hysterectomy. A small case series and literature review of 50 women reported data on women with early stage endometrial cancer treated with conservative hormonal treatment in lieu of a hysterectomy. There were 65 deliveries reported with 77 live births. No neonatal morbidity was noted, but one of the 50 women died of her disease after delivery [67]. Another study found that 40% of patients treated with conservative progestin therapy conceived but had a 36% relapse rate of their cancer [68].
Conclusions
Given the relatively high incidence of cancer in reproductive age women and improvements in 5-year survival, an increasing number of women are presenting for discussion of fertility preservation and pregnancy after cancer treatment. The ASCO published recommendations in 2006 on fertility preservation in cancer patients. These guidelines state that oncologists should address the possibility of infertility with cancer patients and be prepared to discuss possible fertility preservation options or refer the patient to a reproductive specialist.
Part of the difficulty in counseling patients regarding the risk of infertility and/or subsequent pregnancy complications is that the risks are dependent on several factors. These risks include the dose and duration of treatment, other risk factors for infertility, the age of the patient, and the patient's baseline ovarian reserve at the time of initiation of treatment. Advancements in ovarian reserve testing may help counsel patients about the impact of their cancer treatments on fertility and chances for future pregnancy.
Fertility preservation is a rapidly evolving field that includes medical and surgical treatments to decrease the impact of cancer treatments on future fertility. Ongoing trials will address the effectiveness of GnRH agonists in protecting ovarian reserve. Several technologies exist to help preserve future fertility, including embryo, oocyte, and ovarian tissue cryopreservation. Embryo cryopreservation is currently the only recommended method of gamete preservation, but recent advances in oocyte vitrification may increase the utility of this treatment for cancer patients. Additionally, PGD may decrease the risk of transmission of hereditary cancer syndromes. The risk to the patient of IVF may also be decreased with recent advances in IVF stimulation protocols.
There may be an increased risk of preterm birth and associated neonatal complications for female cancer survivors, but the outcomes of the majority of pregnancies appear similar to noncancer patients. It is not clear whether the increased risks in pregnancy are related to the malignancy itself or the result of treatments such as radiation or chemotherapy. Also, the risk of disease recurrence will depend on several factors, but for most cancers the risk of recurrence is not increased secondary to pregnancy. Overall, pregnancy appears safe for most patients after cancer treatment but will depend on individual patient characteristics. | 2018-04-03T03:38:59.563Z | 2012-03-18T00:00:00.000 | {
"year": 2012,
"sha1": "7a57336376527e888ed97f566ba39fb30076a53f",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/ogi/2012/953937.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7a57336376527e888ed97f566ba39fb30076a53f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
218921599 | pes2o/s2orc | v3-fos-license | Determinants of Output Growth in Africa’s Non-Oil Producing Countries (ANOPCs)
This study examined the determinants of output growth in the 14 selected non-oil producing countries in Africa, using annual time series data spanning from 1980 to 2016 sourced from the World Bank, World Development Indicator (WDI) and IMF International Financial Statistics (IFS). Error-correction based panel cointegration test was employed to test for the panel cointegration between output growth and some selected macro-economic variables. Results revealed that there is a long-term relationship between output growth and the selected macroeconomic variables; that the responses of output growth to the shocks from world oil price are positive and significant in some of the countries which were able to explore alternative sources of energy; that the responses of output growth to the shocks from Federal Fund Rate (FFR) are significant in all the selected countries, among others. Based on the findings, it is recommended that over reliance on oil can be reduced by diversifying into non-oil sources of energy such as natural gas and renewable sources of electricity such as hydro, geothermal, solar and wind. The study also recommends that stable exchange rate policy should be adopted across all African non-oil producing countries as this will go a long way in creating a predictable climate for investment, enhance more proceeds from exports and appreciate domestic currency.
Introduction
There is no country in the world, either developed or developing, that is not concerned about what actually determines its output growth. The main question, then, is why some countries are poor while others are rich, and what determines their output growth. Nkurunziza and Bates (2003) noted that economic growth rates are still not high enough to make a real dent in pervasive poverty and enable developing countries to catch up with developed nations, since investments have remained subdued, limiting efforts to diversify economic structures and boost growth. Mallick and Kumar (2002) specifically noted a continued drop in capital formation, suggesting that, of the components of GDP, investment has been one of the slowest growing, a symptom of a looming crisis. Therefore, it is necessary to understand the various factors and circumstances prevailing in African countries which influence their current GDP per capita growth. Consequently, the aim of this study is to investigate the actual determinants and the major driving forces behind the output growth of ANOPCs. Crude oil is arguably one of the most important commodities in today's industrialized economy, as it represents a crucial energy source for many countries. Being a global commodity, crude oil has multifaceted effects on economies across the world. Virtually all the ANOPCs are predominantly producers of primary products, hence many of them depend largely on importation, making them import-dependent economies. This is another reason why the macroeconomic policies of many ANOPCs are highly prone to external influences. To cope with these external influences, macroeconomic policies are subject to frequent changes in response to the prevailing situation presented by external forces at a given period of time (AFDB 2014).
Literature and Empirical Review
The neoclassical Solow-Swan (1956) economic growth theory, also known as the exogenous growth model, advocates for the accumulation of physical capital as an important driver of economic growth in the short run, while technological advancement is the key determinant of economic growth in the long run. An important extension of the neoclassical growth model was the inclusion of human capital stock as one of the key factors driving economic growth, to complement physical capital accumulation (Mankiw, Romer, and Weil 1992; Islam 1995). Olawale (2017) investigated the determinants of economic growth in 18 Sub-Saharan African countries by decomposing exports and imports over the period 1996-2015, using a neoclassical economic growth model containing GDP, export components, import components, the export concentration index, capital and labour force as variables of analysis. The results of fixed effects estimations revealed that both exports and imports contribute significantly to economic growth. His findings also showed that capital formation has a more significant influence on economic growth than the labour force. Lumengo B. and Ferdinand (2015) assessed the determinants of economic growth in Sub-Saharan African countries using Ghana as a case study during the period 1970-2012, applying Bayesian Model Averaging (BMA) in order to address the issue of model uncertainty. Making use of Markov Chain Monte Carlo Model composition (MC3) for model selection, the results of the empirical analysis show the importance of variables such as the current account balance, inflation rate and population growth, as well as the role of the dual economy, in driving economic growth in Ghana. These results show that economic growth policy in Ghana should not be confined within a specific growth theory, be it neoclassical or Keynesian. The results are robust to changes of model priors in the context of the BMA analysis. Ndambiri and Ritho (2012) investigated the determinants of economic growth in 19 Sub-Saharan African countries for the years 1982-2000. Viewing economic growth as a dynamic phenomenon, their study employs the Generalized Method of Moments (GMM) to explain the factors that determine the growth of economies in the region. The results of the study reveal that physical capital formation, a vibrant export sector and human capital formation significantly contribute to economic growth among sub-Saharan African countries. However, government expenditure, the nominal discount rate and foreign aid significantly lead to negative economic growth. Based on their findings, it is recommended that relevant policies be formulated to promote those sectors that enhance economic growth in the region. Barro (1999) investigated the determinants of economic growth using an extended neoclassical growth model for 100 countries covering the period 1960-1995. Based on a panel regression and the three-stage least squares method, the study results showed that the investment share, growth rate of the terms of trade, years of schooling, rule of law index, democracy index and international openness were positively and significantly associated with economic growth, while government consumption, the total fertility rate, and inflation were negatively and significantly associated with economic growth. Romer (2006), in his modification of Arrow's seminal work on the economics of learning by doing, pointed out that investment in knowledge (experience) has a strong linkage with increases in productivity.
According to him, indexing experience by cumulative investment yields the following production function.
Theoretical Framework
where Y_i(t) is the output of country i, A(t) is the stock of knowledge of country i at period t, and K_i(t) and L_i(t) are the capital and labour of the country at period t. Romer pointed out that labour is more productive due to the accumulation of knowledge, which in turn depends on experience; experience, however, is a function of past investment. Consequently, the growth rate of output can be written as a function of the index of experience given by cumulative investment, G(t) = f(∫ I(v)dv) (2), where G(t) is the growth rate of output and ∫ I(v)dv is the index of cumulative investment, which is equal to the capital stock k(t). However, the growth rate of output of the country, according to Romer (2006), is equal to the per-capita production function (real output or income), i.e.
G(t) = y = f(k(t))   (3)
Substituting equation 3 into equation 2 expresses the growth rate in terms of the real output y (equation 4). Again, in the definition of the money demand function, Romer (1996) postulated a relationship between inflation, money growth and the interest rate such that the demand for real money balances is a decreasing function of the interest rate and an increasing function of real income (equation 5), which can be written in linear form (equation 6). Therefore:
aY = M/P + bi   (7)
Dividing both sides by a leads to Y = (1/a)(M/P) + (b/a)i (8), where 1/a and b/a are the elasticities of the real money balance and the interest rate, respectively.
Substituting equation 8 into equation 4 leads to equation 9. Thus the growth rate can be presented as a function of the real money balance and the interest rate, which determine capital stock investment, with labour held constant.
Our model is a modification of equation 9. In our attempt to study the determinants of output growth in the economies of ANOPCs, apart from monetary policy instruments like the interest rate and money supply, we also included in the model, as explanatory variables, policy variables such as the exchange rate and the inflation rate. All these variables are identified as having direct linkages with monetary policy dynamics (Ngalawa and Viegi, 2012; Omolade and Ngalawa, 2014). The GDP growth rate is used as the measure of economic growth of the countries, and capital formation K is added as an additional control variable because of its important role in the growth process.
Again, the linkages between energy and growth are explained from the neoclassical perspective of the production function. A general form of the production function describes the relationship between oil, as a form of energy, and economic activity. In this function, the various outputs (economic growth) are produced from various inputs such as capital and labour, among others,
together with the energy inputs used in the production process, i.e. oil, gas, coal, etc.
According to the neoclassical economists, the relationship between energy and the growth of gross domestic product (GDP) can be affected by the following: (i) substitution between energy and other inputs; (ii) technological change; (iii) shifts in the composition of the energy input; (iv) shifts in the composition of outputs. Other factors can be a shift in the mix of the inputs used in production, for instance towards either capital intensity or labour intensity. Basically, the linkage between oil as an input and growth is explained by mainstream economists through their growth theories with natural resources as a form of energy.
Model Specification
Following equations 9 and 10, our model relates the growth rate of output to the monetary policy instruments, the policy variables, capital and the crude oil price (equation 11): GDPgr_{i,t} is the growth rate of output of country i at time t; the monetary policy instruments comprise the real money balance, measured by the real money supply, and the real interest rate; the policy variables are the real exchange rate and the inflation rate in country i at time t; K_{i,t} is the capital of country i at time t, measured as Gross Capital Formation; WOP_t is the crude oil price at time t; and μ_{i,t} represents the country-specific stochastic term. Note that i = 1, 2, ..., 14 and t = 1, 2, ..., 36. More explicitly, for the panel analysis the model is presented as GDPgr_{i,t} = β0 + β1 MS_{i,t} + β2 IR_{i,t} + β3 EXR_{i,t} + β4 INF_{i,t} + β5 K_{i,t} + β6 WOP_t + μ_{i,t} (12), where MS, IR, EXR and INF denote the real money supply, real interest rate, real exchange rate and inflation rate. The study used quarterly data spanning a period of thirty-six years, from 1980:Q1 to 2016:Q4.
The study period is dictated by data availability and an effort to stay current. The data were obtained from the World Development Indicators (WDI) and the IMF International Financial Statistics (IFS).
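To illustrate the estimation step, a minimal sketch of the within (fixed-effects) estimator for a panel model of the form of equation 12 is given below. The column names (GDPgr, MS, IR, EXR, INF, GCF, WOP) are hypothetical placeholders for the WDI/IFS series, and the demean-and-OLS shortcut omits the standard-error adjustments a full econometrics package would apply.

```python
import numpy as np
import pandas as pd

def within_fe(panel, y, regressors, entity="country"):
    """Fixed-effects 'within' estimator: demean every variable by country,
    then run pooled OLS on the demeaned data, which sweeps out the
    country-specific intercepts of a model like equation 12."""
    cols = [y] + regressors
    demeaned = panel[cols] - panel.groupby(entity)[cols].transform("mean")
    Y = demeaned[y].to_numpy()
    X = demeaned[regressors].to_numpy()
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return pd.Series(beta, index=regressors)

# usage sketch with hypothetical column names for the WDI/IFS series:
# beta = within_fe(panel, "GDPgr", ["MS", "IR", "EXR", "INF", "GCF", "WOP"])
```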
Error-Correction Based Panel Cointegration Test
In this aspect, four basic types of tests are designed for the purpose of testing for panel cointegration. The tests are conducted based on both the asymptotic distribution and, to allow for cross-sectional dependence, bootstrapping. Results based on the asymptotic distribution for the four tests are shown in table 4.1. The results in table 4.1 revealed that two of the four basic tests indicate a rejection of the null hypothesis of no long-run relationship between output growth and the selected macroeconomic variables. This implies that there is a long-term correlation between output growth and macroeconomic variables in the selected non-oil producing countries in Africa. This study therefore proceeds to estimate the Error Correction Model using the fixed effect within regression. The results are presented in table 4.3 below. This is a strong indication that macroeconomic variables appear to have a more significant influence on output growth in the short run than in the long run in the selected non-oil producing countries during the period under review.
The implication of this finding is that macroeconomic variables appear to predict output growth more in the short run than in the long run. The possible reason behind this finding might not be unconnected with the nature of the emerging economies in Africa, particularly the non-oil producing countries, where, according to Al-Fayomi (2009), the economies are not fully efficient and therefore do not incorporate all available information so as to allow long-term co-movement between macroeconomic variables and output growth. Moreover, the results of both the long-run and short-run segments of the fixed effect regression showed that the domestic interest rate has a negative and significant impact on output growth in non-oil producing countries in Africa. The implication of this finding is that when the interest rate, which is the cost of borrowing, is high, it acts as a disincentive to borrowing, which eventually discourages investment and thus lowers the output growth rate. This finding aligns with the work of Irfan and Ume (2011) and Hameed Gul et al. (2012). Also, the results of both the long-run and short-run segments of the fixed effect regression revealed that the money supply growth rate (MSGR) exerted positive and significant impacts on output growth in non-oil producing countries in Africa. This finding conforms with economic theory, which holds that output growth is an increasing function of the money supply, meaning that as the money supply increases, output growth also increases. It also agrees with the works of Ahmad and Suleiman (2011) and Mishra (2012), who posited that expansionary monetary policy, through an increase in the money supply, lowers the interest rate, which eventually triggers investment and leads to an increase in output growth. In addition, the results from the short-run segment of the fixed effect regression exhibited that gross capital formation (GCF) has a positive and significant impact on output growth in non-oil producing countries in Africa. This particular finding indicates that there can be no significant output growth without investment in fixed and productive capital. It also corroborates the assertions of Adjasi and Biekpe (2009) and Gutierrez (2005), who posited that as private capital increases in productive areas, output growth will equally increase.
The results of the cross-sectional dependence test, which is based on the correlation matrix of the residuals and the Breusch-Pagan LM test of independence, are presented in table 4.3 below. This result indicates that non-oil producing countries in Africa respond differently to their common factor shocks. In this regard, the presence of cross-sectional dependence in this research work justifies the use of the bootstrapping option as a means of obtaining robust p-values even in the presence of cross-sectional dependence.
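The logic of the error-correction-based panel cointegration tests discussed above can be sketched as follows: estimate a simple error-correction regression country by country and collect the t-statistics of the error-correction coefficient, whose group mean underlies the Gt-type statistic. The sketch below is only a schematic illustration under assumed column names and a "period" time index; it omits lag-length selection, the pooled panel statistics, and the bootstrap needed for robust p-values under cross-sectional dependence.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def group_mean_ec_tstat(panel, y, xvars, entity="country", time="period"):
    """Estimate, country by country, the error-correction regression
    d(y_t) = a + alpha*y_{t-1} + gamma'*x_{t-1} + b'*d(x_t) + e_t and return
    the average t-statistic of the error-correction coefficient alpha."""
    tstats = []
    for _, d in panel.groupby(entity):
        d = d.sort_values(time)
        frame = pd.DataFrame({"dy": d[y].diff(), "y_lag": d[y].shift(1)})
        for x in xvars:
            frame[f"{x}_lag"] = d[x].shift(1)   # lagged levels
            frame[f"d_{x}"] = d[x].diff()       # short-run differences
        frame = frame.dropna()
        ols = sm.OLS(frame["dy"], sm.add_constant(frame.drop(columns="dy"))).fit()
        tstats.append(ols.tvalues["y_lag"])     # t-stat of the error-correction term
    return float(np.mean(tstats))
```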
Structural Vector Autoregressive (S-VAR) Model
The presence of cross-sectional dependence in the correlation matrix of residuals necessitates the use of an alternative estimation technique to examine the responses of the selected non-oil producing countries to their common factors separately. The reason is that cross-sectional dependence is mainly caused by the presence of common factors among the units, thereby leading to cross-member correlation. For instance, non-oil producing countries in Africa are known to share the common characteristic of being emerging and dependent economies. This gives room for the tendency to share similar factors among themselves. However, the responses of these African countries to the common factor shocks might be of varying degrees due to the different social norms, economic effects and independent preferences that characterize each of the African countries (Pesaran, 2013). A Structural Vector Autoregressive (S-VAR) model is therefore employed to examine how each non-oil producing country in Africa responds to their common factor shocks.
Structural VAR: Impulse Response Function and Variance Decomposition.
The impulse response function assesses the effect of a one standard deviation shock to one of the innovations on the endogenous variables of a model within a given period of time. In other words, it is used to predict or forecast the response of each endogenous variable to a one standard deviation change in each of the other variables. Moreover, the variance decomposition examines the proportion of the variation of the dependent variable explained by each of the independent variables; it shows which of the independent variables is stronger in explaining the variability of the dependent variable over time. The S-VAR impulse response functions and variance decompositions are shown in the figures and tables below, respectively.
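As an illustration of how such impulse responses and variance decompositions are computed in practice, the sketch below fits a reduced-form VAR with Cholesky-orthogonalized shocks using statsmodels. It is a simplified stand-in for the structural VAR used in the study, run here on synthetic data with hypothetical column names (WOP, FFR, GDPgr) in place of one country's actual quarterly series.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# toy stationary data standing in for one country's 148 quarterly observations
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(148, 3)), columns=["WOP", "FFR", "GDPgr"])

res = VAR(df).fit(4)   # lag order 4 (in practice chosen by an information criterion)

irf = res.irf(12)                                      # 12-quarter horizon
irf.plot(impulse="WOP", response="GDPgr", orth=True)   # response of GDPgr to a WOP shock

fevd = res.fevd(12)    # share of forecast-error variance attributed to each shock
fevd.summary()
```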
Figure 1a. Response of GDPgr to WOP in each of the selected countries (panels by country, e.g. Benin).
Figure 1a displays the response of output growth to WOP in each of the selected non-oil producing countries in Africa. Likewise, in order to complement the result of the impulse response function in Figure 1a, table 4.4 shows the variance decomposition of GDPgr with respect to WOP in the selected non-oil producing countries in Africa. The results from Figure 1a revealed that the response of output growth (GDPgr) to a standard deviation shock from WOP is positive and significant in Senegal, Kenya, Swaziland, Botswana and Zambia. The shock from WOP produced a similar effect on GDPgr in Senegal and Swaziland, as the shock from WOP was huge on GDPgr at the initial stage but later tended towards equilibrium as time increased. This result was supported by the variance decomposition depicted in table 4.2a, in which the WOP shock in Senegal and Swaziland explained about 15% and 31% of the variation in GDPgr in the third quarter respectively, but the proportionate explanatory power increased significantly as the quarters progressed, to about 60% and 64% respectively in the 12th quarter. Also, the positive impact of the shock from WOP on GDPgr was largely significant and even grew as time increased in Kenya, Botswana and Zambia. These results equally align with the results of the variance decomposition shown in table 4.2a, in which the WOP shock explained about 10%, 12% and 20% of the variance in GDPgr in quarter 3 in Kenya, Botswana and Zambia respectively, but the innovative power increased rapidly and significantly to about 4%, 41% and 53% in the 12th quarter in these countries respectively.
Meanwhile, the findings from both the S-VAR impulse response function and the variance decomposition produced results that actually contradict theoretical expectations, as the standard deviation shock from WOP exerted a positive and significant impact on output growth in Senegal, Kenya, Swaziland, Botswana and Zambia. The outcome of these results might be linked to the fact that these countries might have identified and explored alternative sources of energy that helped them adopt measures to reduce their level of dependency on oil in the international market. This might have gone a long way in protecting these countries from the negative impacts of a world oil price increase. An example of this can be seen in Senegal, Kenya and Swaziland, which have implemented various biofuels production initiatives over the past few years to improve their energy sectors. Through these initiatives, several biofuels projects were carried out in these countries, including the plantation of Jatropha oil seeds grown on thousands of hectares of land (Mitchell, 2011). Figure 1b shows the response of output growth (GDPgr) to a Federal Fund Rate (FFR) shock in the selected non-oil producing countries in Africa, while Table 4.5 exhibits the variance decomposition of GDPgr with respect to FFR in the selected non-oil producing countries in Africa. The results from Figure 1b revealed that the response of GDPgr to a shock from FFR was positive initially, moved toward equilibrium in the 3.8th, 3rd, 4th and 6th period and eventually fell to the negative axis in Mauritius, Senegal, Benin and Togo respectively. This result is consistent with the results of the variance decomposition in Table 4.2b, in which the shock from FFR explained about 25%, 10%, 40% and 50% of the variance in GDPgr during the 3rd quarter in Mauritius, Senegal, Benin and Togo respectively, but the proportionate explanatory power decreased significantly to about 8%, 1%, 10% and 15% in those countries respectively during the 12th quarter. In addition, Figure 1b equally revealed that a standard deviation shock coming from FFR exerted a positive and significant impact on GDPgr in Kenya, Zambia, Mozambique, Uganda, Sierra Leone, Gambia, Botswana, Namibia, Swaziland and Lesotho, with these positive and significant impacts most pronounced in Swaziland and Lesotho during the period under review. The result of this impulse response function agreed with the result of the variance decomposition in table 4.2b, in which the FFR shock explained about 8%, 3%, 10%, 10%, 8%, 7%, 10%, 11%, 10% and 15% of the variation in GDPgr during the 3rd quarter in Kenya, Zambia, Mozambique, Uganda, Sierra Leone, Gambia, Botswana, Namibia, Swaziland and Lesotho respectively, but the proportionate explanatory power of the FFR shock increased significantly as the quarters progressed, to 25%, 30%, 14%, 42%, 41%, 25%, 45%, 45%, 40% and 45% in those countries respectively in the 12th quarter.
Conclusion and Recommendations
This study examined the determinants of output growth in the selected non-oil producing countries in Africa. Results from the study revealed that the responses of output growth to shocks from the world oil price are positive and significant in some African non-oil producing countries which were able to explore alternative sources of energy. Findings from the study also revealed that the responses of output growth to shocks from the Federal Fund Rate (FFR) are significant in all the selected African non-oil producing countries. Based on this finding, this study concludes that the economies of African non-oil producing countries are exposed and sensitive to the US Federal Fund Rate, which represents the foreign interest rate.
Based on this finding, this study concludes that the adoption of a stable exchange rate is sufficient to shield the economies of African non-oil producing countries from the negative effects of an increase in the global oil price. Therefore, this study concludes that expansionary monetary policy, through a reduction in the interest rate to enhance investment, is more effective in compensating for and offsetting the negative effects of an increase in the global oil price in the selected African non-oil producing countries. Based on the findings of this study, the following recommendations were raised: over-reliance on oil can be reduced by diversifying into non-oil sources of energy; the most common alternatives are natural gas and renewable sources of electricity such as hydro, geothermal, solar and wind, while biofuels have also been adjudged to be a common substitute for liquid transportation fuels. The high response and sensitivity of output growth in the African non-oil producing countries to the US Federal Fund Rate, which represents the foreign interest rate, is a signal to the governments, policy analysts, investors and central banks of non-oil producing countries in Africa that U.S. monetary policy shocks should be properly monitored. Governments of non-oil producing countries in Africa should try to tighten fiscal policy in the face of rising capital inflows, as this will dampen spending, put downward pressure on domestic interest rates and stimulate investment and output growth. A stable exchange rate policy should be adopted across all African non-oil producing countries, as this will go a long way in creating a predictable climate for investment, enhance proceeds from exports and appreciate the domestic currency; this will be capable of mitigating the negative impacts of a high global oil price. Finally, an effective expansionary monetary policy, through a reduction in the interest rate, should be adopted by all African non-oil producing countries. | 2020-05-07T09:09:59.336Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "763cf4c97b259d2c2895a201f6bec9d693176fd3",
"oa_license": "CCBY",
"oa_url": "https://iiste.org/Journals/index.php/JEDS/article/download/52524/54256",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "dc0f5ad518beab115b190c6b69612067319fab80",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
250885911 | pes2o/s2orc | v3-fos-license | SUSY effects in Higgs production at high energy e+ e− colliders
Considering the constraints from collider experiments and dark matter detection, we investigate the SUSY effects in the Higgs production channels e+ e− → Zh at an e+ e− collider with a center-of-mass energy above 240 GeV and γγ → h → bb̄ at a photon collider with a center-of-mass energy above 125 GeV. In the parameter space allowed by current experiments, we find that the SUSY corrections to e+ e− → Zh can reach a few percent and the production rate of γγ → h → bb̄ can be enhanced by a factor of 1.2 over the SM prediction. We also calculate the exotic Higgs production e+ e−→ Zh1 in the next-to-minimal supersymmetric model (NMSSM) (h is the SM-like Higgs, h1 is the CP-even Higgs bosons which can be much lighter than h). We find that at a 250 GeV e+ e− collider the production rates of e+ e−→ Zh1 can reach 60 fb.
Introduction
The LHC has discovered a scalar with mass around 125 GeV which resembles the Standard Model (SM) Higgs boson [1]. Since the minimal supersymmetric model (MSSM) predicts a light Higgs boson below 130 GeV [2], the discovery of such a 125 GeV Higgs boson may be the first hint of low energy supersymmetry (SUSY). However, the LHC measurements of the properties of this new boson are so far consistent with the SM predictions, which squeezes the SUSY effects in the Higgs couplings to a decoupling region [3][4][5]. Besides, after the LHC Run-1, the null results of direct searches for SUSY particles (sparticles) have excluded the first two generations of squarks and gluinos with masses below about 1 TeV [6]. Third generation squarks and non-colored sparticles as light as hundreds of GeV are still allowed but have also been constrained by the LHC searches [7]. All these indicate that the SUSY scale may be much higher than the electroweak scale. So it will be a challenge for the LHC to directly observe any SUSY particles except for the light Higgs boson alone. In such a situation, an alternative way for probing SUSY is to search for the indirect SUSY loop effects from some precision measurements of the Higgs boson. Since the precision measurements of the Higgs boson are rather challenging at hadron colliders like the LHC, some high energy e + e − colliders with center-of-mass energy above 240 GeV are being proposed.
At an e + e − collider, the Higgs-strahlung process e + e − → Zh is the dominant production channel for the Higgs boson, for which the Zh events can be inclusively detected by tagging a leptonic Z decay without assuming the Higgs decay mode. For a center-of-mass energy of 240−250 GeV and an integrated luminosity of 500 fb −1 , an e + e − collider can produce about O(10 5 ) Higgs bosons per year and allow for measuring the Higgs couplings at percent level [8,9], which may be able to unravel the SUSY effects in this production. For this process, the leading order rate, the one-loop electroweak corrections and the SUSY corrections were calculated in Refs. [10], [11][12][13][14] and [15], respectively.
As a feasible option, γγ collisions can be achieved through the backward Compton scattering of laser light against high-energy electrons at a linear e + e − collider. At such a γγ collider the Higgs boson can be singly produced via the loop process γγ → h. This process is sensitive to new charged SUSY particles, so the photon collider will be an ideal place to investigate the anomalous hγγ coupling. At the γγ collider, the Higgs partial width Γ γγ can be measured with an accuracy of about 2%. Besides, the CP property of the Higgs boson can be measured using the photon polarizations. The single production of SUSY Higgs bosons through γγ fusion has been calculated in Refs. [16,17].
Note that at a high energy e + e − collider the production of some exotic Higgs bosons will be possible. If the center-of-mass energy is designed at 240-250 GeV, the production e + e − → hA in the MSSM, which is complementary to the production e + e − → Zh and was searched for at LEP2, will not be open because the CP -odd Higgs A is now much heavier than the SM-like Higgs h. However, in the NMSSM the lightest CP -even Higgs h 1 and CP -odd Higgs A 1 can be singlet-dominant and much lighter than the SM-like Higgs h [18]. So in the NMSSM the exotic Higgs productions e + e − → Zh 1 and e + e − → hA 1 may occur at a 240-250 GeV e + e − collider. These exotic Higgs productions could be a good probe for non-minimal SUSY like the NMSSM.
In this work we systematically examine all the above processes in SUSY. We will not only calculate the NMSSM process e + e − → Zh 1 which has not been intensively studied in the literature, but also re-examine the SUSY effects in e + e − → Zh and γγ → h by considering current experimental constraints, such as the LHC Higgs data and the dark matter detection limits.
This work is organized as follows. In Section 2 we describe the parameter scan and the calculation details for the processes e + e − → Zh, γγ → h → bb, e + e − → Zh 1 . In Section 3, we show the numerical results. Finally, we draw some conclusions in Section 4.
A description of calculations
There are about 120 free parameters in a general R-parity conserving weak-scale MSSM. However, most of these parameters are related to flavor changing neutral currents (FCNC) and/or CP-violating phases, which are highly constrained by experimental measurements. So in our work we only discuss the pMSSM and CMSSM, where the number of free parameters is reduced and the models are more predictive.
The pMSSM is considered as the most general version of the R-parity conserving MSSM with the following considerations: (i) CP conservation; (ii) the principle of minimal flavor violation (MFV) at the weak scale; (iii) degenerate masses of the first and second generation sfermions; (iv) negligible Yukawa couplings and trilinear terms for the first two generations, but keeping the third generation parameters At, Ab, Aτ; (v) the lightest neutralino as the LSP. Finally, only 19 parameters can be independently varied in the pMSSM, which are (a) tan β, the ratio of the vevs of the two Higgs doublet fields; (b) the higgsino mass parameter µ and the pseudoscalar Higgs mass mA; (c) the gaugino mass parameters M1, M2, M3; (d) the first/second generation sfermion mass parameters mq, mũR, mdR, ml, mẽR; (e) the third generation sfermion mass parameters mQ, mtR, mbR, mL, mτR; (f) the third generation trilinear couplings At, Ab, Aτ. To further simplify the parameter space we assume TeV-scale first/second generation squark and gluino masses to avoid the constraints from the first-two generation squark and gluino direct searches at the LHC. In addition, considering the current bounds on the stop and stau masses in the MSSM, we conservatively require the lighter stop mass mt1 > 300 GeV and the lighter stau mass mτ > 150 GeV. We scan the remaining pMSSM parameters over wide ranges. Different from the general MSSM, where all soft breaking parameters are independent [19], the CMSSM [20] assumes universal soft breaking parameters at the SUSY breaking scale (usually chosen as the Grand Unification scale) as the fundamental ones, with M1/2, M0 and A0 denoting the gaugino mass, scalar mass and trilinear interaction coefficient, respectively. When evolving these parameters down to the weak scale, we obtain all the soft breaking parameters of the low energy MSSM. These CMSSM parameters are likewise scanned over wide ranges.
For the NMSSM, we scan the model parameters over wide ranges, while other parameters such as the first/second generation squark masses and the gluino mass are set to 2 TeV. In our scan, we impose the following constraints: (1) A SM-like Higgs mass in the range of 123-127 GeV. We use FeynHiggs-2.8.9 [21] to calculate the Higgs mass and impose the experimental constraints from LEP, Tevatron and LHC with HiggsBounds-3.8.0 [22]. We do not perform a Higgs couplings fit to the LHC data because of the current poor precision.
(2) Various B-physics bounds at the 2σ level. We implement these constraints by using the package SuperIso v3.3 [23], including B → Xs γ and the latest related measurements. (3) The thermal relic density of the lightest neutralino in the 2σ range of the Planck data [24] and the dark matter spin-independent cross section below the upper limit from the LUX data [25]. The code MicrOmega v2.4 [26] is used to calculate the relic density.
(4) The constraints from the electroweak observables such as ρl, sin²θ_eff^l, mW and Rb [27] at the 2σ level. (5) We require the MSSM and NMSSM to explain at the 2σ level the discrepancy between the measured value of the muon anomalous magnetic moment and its SM prediction, i.e., a_µ^exp − a_µ^SM = (28.7 ± 8.0) × 10^−10 [28]. For the CMSSM, since there is a tension between the muon g−2 and the Higgs mass [29], we just require the CMSSM prediction not to be worse than the SM value. (6) Since large mixing terms in the stop/stau sector will affect the vacuum stability, we require SUSY to comply with the vacuum meta-stability condition by using the formulas in Refs. [30,31].
We also impose the multi-jet direct search limits [32] on the (m0, m1/2) plane, based on the search by the ATLAS collaboration for squarks and gluinos in final states that contain missing ET, jets and 0-1 leptons in 20.1-20.7 fb^−1 of integrated luminosity at √s = 8 TeV collision energy. While these exclusion limits were obtained in the MSUGRA/CMSSM framework for fixed values of tan β and A0 = −2m0, it was shown [33] that the result is fairly insensitive to tan β and A0, so we can use the limits directly.
Besides, to keep the SM-like Higgs from deviating too much from the Higgs data, we also impose constraints on the properties of the SM-like Higgs. In the calculations, we generate and simplify the amplitudes by using the packages FeynArts-3.9 [34] and FormCalc-8.2 [35]. All the loop functions are numerically calculated with the package LoopTools-2.8 [36].
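The constrained scan described above is, in outline, an accept/reject loop wrapped around external spectrum and observable calculators. The skeleton below is only a schematic sketch: the parameter ranges are hypothetical, and the toy higgs_mass function merely stands in for the FeynHiggs/HiggsBounds/SuperIso/MicrOmega calls so that the loop runs end to end.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical scan ranges, for illustration only (not the paper's exact values)
RANGES = {"tanb": (2.0, 60.0), "mu": (100.0, 1000.0),
          "MA": (200.0, 2000.0), "M2": (100.0, 1000.0)}

def sample_point():
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in RANGES.items()}

def higgs_mass(p):
    # toy stand-in for a real spectrum calculator such as FeynHiggs
    return 120.0 + 0.08 * p["tanb"] + 0.003 * p["mu"]

def passes_constraints(p):
    # constraint (1): SM-like Higgs mass window; the B-physics, relic density,
    # direct detection, electroweak, muon g-2 and vacuum stability requirements
    # would be further boolean checks wrapping the external codes
    return 123.0 <= higgs_mass(p) <= 127.0

accepted = [p for p in (sample_point() for _ in range(50000)) if passes_constraints(p)]
print(f"accepted {len(accepted)} of 50000 sampled points")
```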
Calculations for e + e − → Zh
The one-loop corrections for e + e − → Zh in the SM and MSSM have been studied in Ref. [2]. In Fig. 1 we show the typical Feynman diagrams for the e + e − → Zh production in the MSSM. The complete one-loop corrections to the process e + e − → Zh include two parts: virtual corrections and real photon radiation. The virtual corrections include a set of self-energy corrections, the vertex corrections of eeZ, ZZh and ZAh, and the box diagrams. We adopt dimensional regularization and constrained differential renormalization (CDR) [37] to regulate the ultraviolet (UV) divergences in the loop amplitudes for the SM and MSSM, respectively. These UV singularities are removed by using the on-shell renormalization scheme. We take the definitions of the scalar and tensor two-, three- and four-point integral functions presented in [38] and use the Passarino-Veltman method to reduce the N-point tensor functions to scalar integrals [39].
Due to the infrared (IR) singularities in the vertex corrections to e + e − → Zh, the real photon radiation corrections should be taken into account. These IR divergences can be canceled against the real photon bremsstrahlung corrections in the soft photon limit by the Kinoshita-Lee-Nauenberg theorem [40]. According to the energy of the photon E γ, we split the phase space into a soft region (E γ < ∆E γ √s/2) and a hard region (E γ > ∆E γ √s/2), where ∆E γ is the energy cut-off of the soft photon. We use the soft photon approximation formula to obtain the soft part of the cross section [41] and give a fictitious mass m γ to the photon to eliminate the IR divergence. It should be noted that the dependence of the real corrections on m γ is exactly canceled by the corresponding virtual corrections. In the hard region, we use the well-known VEGAS [42] routine to evaluate the cross section. We checked that our results are independent of m γ and ∆E γ.
Calculations for γγ → h
The leading order γγ → h occurs at the one-loop level, where the photon beam is generated by the backward Compton scattering of the incident electron and laser beams. The number of events is obtained by convoluting the cross section of the γγ collision with the photon beam luminosity distribution, where dL γγ /d√s γγ is the photon-beam luminosity distribution and σ γγ→h (s) (s is the squared center-of-mass energy of the e + e − collision) is defined as the effective cross section of γγ → h. In the optimal case, it can be written as a convolution involving F γ/e, the energy spectrum of the backscattered photon for unpolarized initial electron and laser photon beams [43]. Here ξ = 4E e E 0 /m 2 e (E e is the incident electron energy and E 0 is the initial laser photon energy) and x = E/E e, with E being the energy of the scattered photon moving along the initial electron direction.
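For orientation, the ideal unpolarized Compton backscattering spectrum (the Ginzburg et al. form) can be evaluated numerically as below. This is a sketch under the usual idealized assumptions (no nonlinear, angular or polarization effects), taking x as the backscattered photon energy in units of the electron beam energy and ξ ≃ 4.8 as a typical choice.

```python
import numpy as np

def compton_spectrum(x, xi):
    """Ideal energy spectrum F_{gamma/e}(x) of the backscattered photon for
    unpolarized electron and laser beams, with x = E_gamma/E_e and
    xi = 4*E_e*E_0/m_e^2; the spectrum vanishes above x_max = xi/(1+xi)."""
    x = np.asarray(x, dtype=float)
    norm = ((1 - 4/xi - 8/xi**2) * np.log(1 + xi)
            + 0.5 + 8/xi - 1/(2*(1 + xi)**2))
    f = (1 - x + 1/(1 - x)
         - 4*x/(xi*(1 - x)) + 4*x**2/(xi**2*(1 - x)**2))
    return np.where(x < xi/(1 + xi), f / norm, 0.0)

xi = 4.8                      # a typical choice, close to 2*(1+sqrt(2))
xmax = xi / (1 + xi)
x = np.linspace(1e-6, xmax - 1e-6, 5000)
print("x_max =", round(xmax, 3))                                                  # ~0.83
print("integral of spectrum =", round(np.trapz(compton_spectrum(x, xi), x), 3))   # ~1.0
```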
In the calculations of e + e − → Zh and γγ → h → bb, we use the package FeynHiggs to obtain the masses of the Higgs bosons in the MSSM. By evaluating loop corrections to the h, H and hH-mixing propagators, we can determine the masses of the two CP-even Higgs bosons m h and m H as the poles of this propagator matrix, obtained from the solution of the corresponding pole equation involving Σ̂ hh (q²), Σ̂ HH (q²) and Σ̂ hH (q²), the renormalized Higgs boson self-energies. It should be noted that, since the Higgs field renormalization constants are given in the DR scheme [44] in FeynArts-3.9, we adopt the finite wave function normalization factors Ẑ ij to ensure the correct on-shell properties of the external particles in the S-matrix elements. The values of Ẑ ij can be numerically obtained by using the package FeynHiggs. Also, it should be noted that normally one has to use the tree-level Higgs masses throughout the loop calculation to keep gauge invariance, while for the phase space integration we need to express the matrix element in terms of the physical masses of the external final states. In our study, we take the loop-corrected Higgs boson mass as the physical mass and adopt the approach proposed in Ref. [45] to technically deal with this problem. To be specific, since the tree-level process e + e − → Zh does not involve the exchange of the light Higgs boson h, we only need to use tree-level Higgs masses in the loop integral calculation but keep the loop-corrected mass in the phase space integration.
We define the ratio ∆σ SUSY /σ SM = (σ SUSY − σ SM )/σ SM to quantitatively show the SUSY effect in the Higgs productions, where σ SUSY and σ SM are the one-loop cross sections in the MSSM and SM, respectively.
Results for e + e − →Zh in MSSM and CMSSM
In Fig. 2 we show the dependence of the SUSY corrections to the process e + e − → Zh on the chargino mass mχ+ 1 for the samples allowed by constraints (1)-(6) at the 2σ level, for an e + e − collider with √ s = 250, 350, 500 GeV. From this figure we can see that the SUSY corrections can be negative or positive, depending on the masses of the sparticles in the loops and the collider energies. For √ s = 250 GeV, the SUSY corrections can maximally reach −2.5% with mχ+ 1 ∼ √ s/2, which is caused by the resonant effects in the chargino loops. Note that the bounds on the chargino mass from direct electroweakino searches are still weak for our samples, since most of the surviving points are dominated by wino-higgsino mixtures. Given the expected sensitivity of a 250 GeV e + e − collider like CEPC, the residual SUSY effects in e + e − → Zh can still be detected if mχ+ 1 < 400 GeV when the luminosity reaches about 10,000 fb −1 [46,47]. Since the hZZ coupling directly affects the Zh production, we survey the deviation of this coupling from the SM value and find it to be at most 0.05%. So in the MSSM the difference of the Zh production from the SM arises largely from the sparticle loop contributions.
In Fig. 3 we present the dependence of the SUSY corrections to the process e + e − → Zh on the neutralino mass mχ0 1 in the CMSSM with √ s = 250, 350, 500 GeV. We find that the SUSY corrections for most samples are less than 0.5% because the sparticle masses have been pushed up to the multi-hundred GeV region by the inclusive sparticle searches for the CMSSM at the LHC. So it is difficult to observe these indirect CMSSM loop effects through e + e − → Zh production at future e + e − colliders.
Results for γγ → h → bb in MSSM
At an ILC-based photon collider, the Higgs boson can be singly produced through the photon-photon fusion mechanism [48]. Since the cross section of γγ → h is proportional to the decay width of h → γγ, the ratio ∆σ SUSY (γγ → h)/σ SM (γγ → h) is independent of the energy of the ILC. Given the large branching ratio of h → bb, we calculate the SUSY corrections to the observable σ(γγ → h)·Br(h → bb) in the MSSM and display its dependence on the mass of pseudo-scalar m A in Fig. 4.
From Fig. 4 we see that the SUSY corrections can maximally reach 20% for the allowed samples in the small m A region with a large tan β due to the enhancement of Br(h → bb). With the increase of m A , the SUSY corrections drop. Note that a light stau can make a sizable loop contribution to γγ → h, which, after considering the vacuum stability, can enhance the cross section by a factor of 1.2. So, if photon-photon collisions can be realized at the ILC, it will be a good place for probing SUSY effects.
Of course, it should be mentioned that such sizable effects may be detected or further constrained at the LHC Run-2.
For the single Higgs production, the largest background is from γγ → bb production and is about 1.8 pb at leading order for √ s = 250 GeV. For the signal, the cross section is 0.18 fb. With optimistic cuts to reduce the background, the statistical error of the signal can be estimated; for a relative error ε ∼ 10%, we need a luminosity around 500 fb −1 . Assuming a b-tag efficiency around 80%, the luminosity should be around 800 fb −1 .
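The quoted luminosities can be reproduced to rough accuracy with a simple counting estimate. The sketch below assumes the statistical error is sqrt(S)/S with the bb background beaten down to a negligible level by the cuts, a simplifying assumption rather than the full analysis; the 0.18 fb signal cross section and the 80% b-tag efficiency (applied per b jet) are the numbers quoted above.

```python
def required_lumi_fb(sigma_signal_fb, rel_err, eff=1.0):
    """Integrated luminosity (fb^-1) at which sqrt(S)/S reaches rel_err,
    with S = sigma * L * eff and the background assumed negligible."""
    return 1.0 / (rel_err**2 * sigma_signal_fb * eff)

print(round(required_lumi_fb(0.18, 0.10)))                # ~560 fb^-1, no tagging loss
print(round(required_lumi_fb(0.18, 0.10, eff=0.8 ** 2)))  # ~870 fb^-1 with two tagged b jets
```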
Results for e + e − → Zh 1 in NMSSM
In our scan of the NMSSM parameter space we choose h 2 as the SM-like Higgs h. In this case the lightest CP-even Higgs h 1 and CP-odd Higgs A 1 are singlet-dominant and can be much lighter than h.
In Fig. 5 we show the leading-order cross sections for e + e − → Zh 1 and e + e − → A 1 h in the NMSSM for √ s = 250 GeV. From the left panel we see that the e + e − → Zh 1 production rate varies over a rather large range and most samples give a cross section larger than 1 fb. It is interesting that the largest production rate increases with the mass of h 1 . The reason is that the e + e − → Zh 1 production rate largely depends on the h 1 ZZ coupling, which comes from the mixing between the singlet and the doublet Higgs fields. When the two masses of h 1 and h get closer, the mixing generally becomes larger and thus e + e − → Zh 1 increases with the h 1 mass. The right panel of Fig. 5 shows the leading-order cross sections of three production channels versus the hZZ coupling normalized to the SM value. We see that the cross section of e + e − → Zh 1 is sensitive to the deviation of the hZZ coupling from the SM value. When the hZZ coupling approaches the SM value, the cross section of e + e − → Zh 1 drops sharply. Since the dominant decay mode of the light Higgs bosons is bb, the exotic Higgs productions e + e − → Zh 1 will lead to the signal of Z + 2b. These signals can be efficiently detected at an e + e − collider. Also note that at such an e + e − collider the loop-induced Higgs production e + e − → hγ can be enhanced in SUSY [49]. All these processes can jointly serve as a good probe for SUSY models.
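Since Zh 1 production proceeds entirely through the h 1 ZZ coupling, the behavior seen in Fig. 5 can be summarized by the approximate leading-order relation (our paraphrase of the scaling, not an equation quoted from the text): σ(e + e − → Zh 1 ) ≈ (g h1ZZ /g SM hZZ )² × σ SM (Zh) evaluated at m h → m h1 . Because the ZZ couplings of the CP-even Higgs states obey a sum rule, a hZZ coupling close to its SM value forces g h1ZZ toward zero, which is why the Zh 1 rate drops sharply in that limit.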
Conclusion
In this work we examined the SUSY residual effects in the process e + e − → Zh at an e + e − collider with center-of-mass energy above 250 GeV and γγ → h → bb at a photon collider with center-of-mass energy above 125 GeV. We found that the SUSY corrections to e + e − → Zh can reach a few percent in the parameter space allowed by current experiments. The production rate of γγ → h → bb can be enhanced by a factor of 1.2 over the SM prediction. We also calculated the exotic Higgs productions e + e − → Zh 1 and e + e − → A 1 h in the NMSSM. We found that for an e + e − collider with center-of-mass energy of 250 GeV the e + e − → Zh 1 production rates can reach 60 fb. Since in the NMSSM model the h 1 mass could be lighter than the SM Higgs and also Zh 1 could have a sizable cross section, it is possible to observe h 1 at a center-of-mass energy just above M h 1 + M Z before the CEPC reaches the designed 250 GeV center-of-mass energy. These processes will jointly serve as a probe of SUSY in the proposed e + e − colliders like CEPC, TLEP or ILC. | 2022-07-22T07:08:20.855Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "11b165082b6648242fcc9d6df12ff7913d88b10e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1674-1137/40/11/113104",
"oa_status": "HYBRID",
"pdf_src": "IOP",
"pdf_hash": "11b165082b6648242fcc9d6df12ff7913d88b10e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
263966700 | pes2o/s2orc | v3-fos-license | Reactive nitrogen species inhibit branched chain alpha-ketoacid dehydrogenase complex and impact muscle cell metabolism
Branched chain α-ketoacid dehydrogenase complex (BCKDC) is the rate-limiting enzyme in branched chain amino acid (BCAA) catabolism, a metabolic pathway with great importance for human health. BCKDC belongs to the mitochondrial α-ketoacid dehydrogenase complex family, which also includes pyruvate dehydrogenase complex and oxoglutarate dehydrogenase complex. Here, we revealed that BCKDC can be substantially inhibited by reactive nitrogen species (RNS) via a mechanism similar to what we recently discovered with pyruvate dehydrogenase complex and oxoglutarate dehydrogenase complex—RNS can cause inactivating covalent modifications of the lipoic arm on its E2 subunit. In addition, we showed that such reaction between RNS and the lipoic arm of the E2 subunit can further promote inhibition of the E3 subunits of α-ketoacid dehydrogenase complexes. We examined the impacts of this RNS-mediated BCKDC inhibition in muscle cells, an important site of BCAA metabolism, and demonstrated that the nitric oxide production induced by cytokine stimulation leads to a strong inhibition of BCKDC activity and BCAA oxidation in myotubes and myoblasts. More broadly, nitric oxide production reduced the level of functional lipoic arms across the multiple α-ketoacid dehydrogenases and led to intracellular accumulation of their substrates (α-ketoacids), decrease of their products (acyl-CoAs), and a lower cellular energy charge. In sum, this work revealed a new mechanism for BCKDC regulation, demonstrated that RNS can generally inhibit all α-ketoacid dehydrogenases, which has broad physiological implications across multiple cell types, and elucidated the mechanistic connection between RNS-driven inhibitory modifications on the E2 and E3 subunits of α-ketoacid dehydrogenases.
The family of α-ketoacid dehydrogenase complexes, which includes pyruvate dehydrogenase complex (PDHC), oxoglutarate dehydrogenase complex (OGDC), and branched chain α-ketoacid dehydrogenase complex (BCKDC), plays key roles in mitochondrial metabolism. These enzyme complexes comprise three subunits and share a similar catalytic mechanism (Fig. 1A). The E1 subunit, PDH, OGDH, and BCKDH for the three enzyme complexes, respectively, is a thiamin-dependent decarboxylase that decarboxylates its substrate (α-ketoacids) to its corresponding acyl group. The E2 subunits, dihydrolipoamide S-acetyltransferase (DLAT), dihydrolipoamide S-succinyltransferase (DLST), and dihydrolipoamide branched chain transacylase (DBT), respectively, contain a covalently attached lipoic arm that mediates the transfer of the acyl group from the E1 subunit to CoA to produce acyl-CoA. With this transfer, the lipoic arm converts from its oxidized form (lipoamide) to its reduced form (dihydrolipoamide). Finally, the E3 subunit, dihydrolipoamide dehydrogenase (DLD), encoded by the same gene for all three α-ketoacid dehydrogenase complexes, re-oxidizes the dihydrolipoamide to lipoamide, coupled to NAD reduction to NADH. Together, the coupled action of the three subunits allows for the oxidation of α-ketoacids and the production of acyl-CoA and NADH. These enzymes catalyze reactions that are key steps in carbohydrate and amino acid catabolism (1).
As BCKDC, PDHC, and OGDC control important crossroads of the metabolic network, their activities are dynamically regulated by layers of molecular mechanisms. Particularly, multiple mechanisms acting through posttranslational modifications of E1, E2, or E3 subunits of these enzymes have been found to play key roles in their regulation (13-23). Recently, we discovered that reactive nitrogen species (RNS) produced by inducible nitric oxide synthase (iNOS), encoded by the gene Nos2, in classically activated macrophages led to profound inhibition of PDHC and OGDC, via a previously unknown posttranslational modification (24). This inhibition is caused by loss of functional lipoic arm on their E2 subunit, as can be detected by immunoblotting. However, mass spectrometry-based analysis showed this loss of functional lipoic arm is largely not due to absence of bound lipoic cofactor; rather, RNS can cause a series of covalent modifications of the bound
lipoic arm, preventing it from cycling between its reduced and oxidized forms to perform its catalytic activity (Fig. 1A). Furthermore, we demonstrated that the RNS-driven inhibition acts through specifically modifying the active thiols of the lipoic arm, by showing that inhibition of purified PDHC by RNS depends on the presence of its substrates to generate reactive thiols. Without the addition of substrates (pyruvate and CoA), the E2 subunit's lipoic arm is mainly in its oxidized form. In this condition, incubating purified PDHC with NO donor alone does not cause substantial inhibition of PDHC activity. In contrast, incubating purified PDHC with its substrates, pyruvate and CoA, or its product NADH allows the lipoic arm to convert to its reduced form via E1 and E2 subunit activity or reversed E3 subunit activity, respectively, exposing the reactive thiols. Consistently, incubating purified PDHC with NO donor in the presence of substrates causes substantial inactivation of PDHC. Given the mechanistic similarity among α-ketoacid dehydrogenase complexes, it is conceivable that a similar mechanism could also regulate BCKDC. Additionally, many other cell types beyond macrophages also produce nitric oxide (NO); therefore it is also likely that the RNS-driven regulation of these complexes is important in these cell types. However, these hypotheses have not been tested.
Here, through a series of in vitro and in-cell experiments, we show that RNS can indeed inhibit BCKDC by modifying the lipoic arm on its E2 subunit. This mechanism inhibits BCKDC alongside PDHC and OGDC not only in classically activated macrophages but also in muscle cells, where BCKDC has an important role. Upon exposure to tumor necrosis factor-alpha (TNFα) and interferon-gamma (IFNγ), muscle cells express iNOS and produce NO (25). This NO production has been implicated in muscle cachexia and altered mitochondrial metabolism (26). We found that TNF-α- and IFN-γ-stimulated NO production in myotubes and myoblasts leads to significant rewiring of their mitochondrial metabolism and altered energy charge.
In addition to targeting the E2 subunit's lipoic arm, RNS have also been shown to inhibit α-ketoacid dehydrogenase complexes through mechanisms acting on the E3 subunit (DLD). In classically activated macrophages, RNS can cause inhibitory cysteine nitrosylation of DLD (21). The normal catalytic function of DLD in α-ketoacid dehydrogenase complexes is to reoxidize the reduced lipoic arm on the E2 subunit using a cysteine-cysteine active site. Based on this close interaction between the subunits, we reasoned that the RNS-driven modifications on the E2 subunit's lipoic arm could promote the modification of E3 subunits. Indeed, here we find the RNS-driven inhibition of the E3 subunit largely depends on lipoic arm modification of the E2 subunit, providing a mechanistic link between the two recently discovered RNS-driven inhibitory mechanisms.
Together, these data demonstrate a common mechanism which allows RNS to inhibit important enzymes across the lipoic arm-dependent dehydrogenase family, including BCKDC. Such regulation by RNS has significant biological impacts in multiple cell types that are capable of producing RNS. It is likely to have broader relevance in other cell types that are influenced by RNS in the microenvironment as well.
RNS cause strong inhibition of BCKDC
To test the hypothesis that NO can inhibit BCKDC, we first incubated mitochondrial lysate from RAW 264.7 cells with NO donor, PAPA-NONOate. Indeed, in vitro treatment with NO donor led to a profound inhibition of BCKDC activity (Fig. 1B).
We hypothesized this inhibition was mediated mainly through a mechanism similar to what we recently found with PDHC and OGDC (24): RNS cause a series of inactivating S-modifications of the lipoic arm on their E2 subunit, and such a mechanism is highly specific and efficient because in cells, RNS can react with CoA to form SNO-CoA, which, via binding to the E2 subunit at the CoA-binding site, can deliver the modifications to the lipoic arm in a targeted manner. If the hypothesis that RNS inhibit BCKDC via a similar mechanism is true, we would predict that BCKDC can also be directly inhibited by SNO-CoA in the presence of NADH (to generate reduced thiol on the lipoic arm) with a high potency. We therefore incubated mitochondrial lysate with varied doses of SNO-CoA in the presence of NADH and measured BCKDC activity. Indeed, SNO-CoA inhibited BCKDC in a dose-dependent manner, with a concentration as low as 0.1 μM causing over 50% activity reduction in 3 h and 10 μM nearly completely inactivating BCKDC (Fig. 1C). These results provided in vitro evidence that BCKDC can be efficiently inhibited by RNS, similarly to PDHC and OGDC.
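The paper reports this dose series qualitatively; for readers who want to extract a half-maximal inhibitory concentration from such data, a minimal curve-fitting sketch is shown below. The concentrations and activity values here are placeholders for illustration, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def remaining_activity(conc_uM, ic50_uM, hill_n):
    """Fraction of remaining BCKDC activity under a simple inhibitory Hill model."""
    return 1.0 / (1.0 + (conc_uM / ic50_uM) ** hill_n)

# Hypothetical dose series (uM SNO-CoA) and remaining activity fractions -- placeholders only.
conc = np.array([0.01, 0.1, 1.0, 10.0])
activity = np.array([0.95, 0.45, 0.15, 0.02])

(ic50, hill_n), _ = curve_fit(remaining_activity, conc, activity, p0=[0.1, 1.0])
print(f"fitted IC50 ~ {ic50:.3g} uM, Hill coefficient ~ {hill_n:.2f}")
```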
NO production leads to BCKDC inhibition in activated macrophages
We next tested the hypothesis that such RNS-driven inhibition of BCKDC occurs in cells. In macrophages, classical activation by lipopolysaccharide (LPS) and IFNγ induces the expression of iNOS (Fig. 1D), resulting in the production of NO, as indicated by the accumulation of intracellular citrulline (Fig. 1E) and nitrite (a final product of RNS) in media (Fig. 1F). In the macrophage cell line RAW 264.7, BCKDC activity is reduced by 80% upon classical activation, and such activation-induced BCKDC inhibition is significantly rescued in Nos2−/− cells (Fig. 1G), showing NO as an important driver of the BCKDC inhibition in activated macrophages. To test whether this NO-dependent inhibition of BCKDC is mediated by changes in the lipoic arm, we probed for the level of functional lipoic arm on BCKDC's E2 subunit, DBT, by immunoprecipitation. Although BCKDC activity decreased substantially upon activation, the total level of DBT in the input is slightly higher upon activation in both WT and Nos2−/− macrophages (Fig. 1, H and I), possibly due to compensation, suggesting the activation-induced inhibition is not due to a reduced DBT level but to strong inactivation of its catalytic activity. Correlating with overall BCKDC activity, the level of functional lipoic arm on DBT decreased substantially upon activation in WT macrophages, but such a decrease was prevented by Nos2 knock out (Fig. 1, H and I), suggesting that NO causes changes to the lipoic arm which then mediate the BCKDC inhibition upon activation.
BCKDC is the rate-limiting step in BCAA catabolism. To examine the impact of NO production on BCAA metabolism in macrophages, we applied isotopic tracing with U-[13C]-L-leucine. Oxidation of U-[13C]-L-leucine by BCKDC produces 5-labeled isovaleryl-CoA, which can be further metabolized to labeled acetyl-CoA, whereas the unlabeled fraction of acetyl-CoA originates from other sources, including citrate, pyruvate, and β-oxidation of fatty acids (Fig. 1J). In WT RAW264.7 cells, stimulation by LPS and IFNγ greatly reduced the abundance of 5-labeled isovaleryl-CoA (Fig. 1K) and reduced the contribution of U-[13C]-L-leucine to acetyl-CoA production to a near-background level (here, the background was the fraction of M+2 acetyl-CoA measured in cells cultured in fully unlabeled media; the M+2 arises from the natural abundance of 13C and 34S) (Fig. 1L). Both the stimulation-induced reduction in isovaleryl-CoA abundance and in acetyl-CoA labeling from leucine were rescued by Nos2 knock out (Fig. 1, K and L). Together, these results show NO production causes lipoic arm alteration and inhibition of BCKDC, and reduction in BCAA oxidation, in macrophages upon classical activation.
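A minimal sketch of how an isotopologue fraction and its natural-abundance background correction can be computed from raw ion counts (the ion counts below are hypothetical, and the study's own correction may differ in detail):

```python
def isotopologue_fractions(ion_counts):
    """Convert raw ion counts for M+0, M+1, M+2, ... into fractional abundances."""
    total = sum(ion_counts)
    return [count / total for count in ion_counts]

# Hypothetical acetyl-CoA ion counts (M+0, M+1, M+2) in labeled and unlabeled (blank) cultures.
labeled_sample = [9.0e5, 1.1e5, 2.4e5]
blank_sample = [9.5e5, 1.0e5, 0.6e5]   # M+2 here reflects natural 13C/34S abundance only

m2_labeled = isotopologue_fractions(labeled_sample)[2]
m2_blank = isotopologue_fractions(blank_sample)[2]
print(f"M+2 fraction: {m2_labeled:.3f}; background-corrected: {m2_labeled - m2_blank:.3f}")
```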
NO inhibits α-ketoacid dehydrogenase complexes in myotubes and myoblasts
Many tissues and cell types have the capability of producing NO by nitric oxide synthases for functions including signaling, pathogen killing, and regulation of angiogenesis, vasodilation, and neural functions (27-30). Additionally, cells that do not actively produce NO themselves can be impacted by RNS in the microenvironment generated by neighboring cells. Therefore, this mechanism by which RNS inhibit α-ketoacid dehydrogenase complexes is likely to have broad biological significance in a variety of cellular systems beyond macrophages. Skeletal muscle is known to express iNOS to produce NO and have elevated RNS upon stimulation with cytokines (25, 31-33). We hypothesized that α-ketoacid dehydrogenase complexes are inhibited by RNS in muscle cells upon cytokine stimulation and tested this hypothesis using a widely used cell model, C2C12 myoblast cells, which can be differentiated to myotubes (34-36).
In differentiated C2C12 myotubes stimulated with TNF-α and IFN-γ for 48 h, iNOS expression is induced, and the levels of functional lipoic arm in both the bands corresponding to the molecular weight of DLAT (E2 subunit of PDHC, 70 kDa) and DLST or DBT (E2 subunit of OGDC and BCKDC, respectively, both 50 kDa) are reduced relative to total level of their corresponding E2 subunits (Fig. 2A). Treating cells with a selective inhibitor of iNOS, N-(3-(aminomethyl) benzyl)-acetamidine (1400W), rescued the stimulation-induced decrease in the functional lipoic arm level relative to E2 subunits (Fig. 2A). Consistent with the changes in lipoic arm, BCKDC activity, as measured in isolated mitochondria lysate, is reduced by 50% upon stimulation, and this inhibition is rescued by iNOS inhibition (Fig. 2B).
We further tested the effect of NO on the changes in the lipoic arm of α-ketoacid dehydrogenase complexes using a genetic knock out of Nos2. Like WT C2C12 cells, Nos2 KO cells can be sufficiently differentiated to myotubes, as indicated by the great increase of myosin expression after differentiation (Fig. 2C). Similar to what is observed with pharmacological inhibition of iNOS, the stimulation-induced decrease of functional lipoic arm relative to total DLAT, DLST, and DBT level was prevented in Nos2−/− myotubes (Fig. 2D).
To further probe BCKDC activity in cells, we performed U-[13C]-L-leucine tracing. Upon stimulation with TNF-α and IFNγ, cellular level of labeled α-ketoisocaproate, the substrate of BCKDC, accumulated in WT myotubes but not Nos2−/− myotubes (Fig. 2E). The fraction of 2-labeled acetyl-CoA decreased to blank level upon stimulation in WT myotubes, and this loss of labeling from U-[13C]-L-leucine is significantly reversed by Nos2−/− (Fig. 2F). This NO-dependent buildup of substrate and decrease in labeling incorporation into downstream metabolite indicated NO inhibits BCKDC activity in cells.
Based on the observed changes in the lipoic arm (Fig. 2, A and D), we also expected PDHC and OGDC activity to be inhibited in an NO-dependent manner in TNF-α- and IFN-γ-stimulated myotubes. To examine PDHC activity and glucose oxidation in myotubes, we performed kinetic labeling with U-[13C]-glucose tracer. U-[13C]-glucose is quickly converted to U-[13C]-pyruvate via glycolysis in both stimulated and unstimulated conditions (Fig. 3A). The labeled pyruvate can be metabolized by PDHC to 2-labeled acetyl-CoA, which is then converted to 2-labeled citrate. In WT myotubes, TNF-α and IFN-γ stimulation caused the labeling incorporation from glucose into citrate to be much slower (Fig. 3B), even though the labeling incorporation into pyruvate is higher (Fig. 3A), suggesting greatly reduced flux through PDHC. In contrast, this reduced rate of labeling incorporation from pyruvate into citrate was not observed when Nos2−/− myotubes were activated (Fig. 3, C and D). These results suggest NO production inhibits intracellular PDHC activity and glucose oxidation in activated myotubes. Consistently, we observed substantial accumulation of pyruvate, the substrate of PDHC, and depletion of acetyl-CoA, the product of PDHC, in stimulated WT myotubes, and these stimulation-induced changes are largely prevented by Nos2−/− (Fig. 3, E and F). Similarly, the substrate for OGDC, α-ketoglutarate, accumulated significantly, and the product of OGDC, succinyl-CoA, was depleted upon stimulation in WT myotubes, but these changes are significantly rescued in Nos2−/− myotubes (Fig. 3, G and H).
Similar to genetic Nos2 knock out, treating cells with iNOS inhibitor 1400W also partially reversed the stimulation-induced accumulation of α-ketoacids and depletion of succinyl-CoA and acetyl-CoA (Fig. 3, I-L). However, the rescue was weaker than with the genetic knock out. This correlates with the fact that genetic Nos2 KO completely ablated NO production but 1400W treatment did so incompletely, as evidenced by measurements of intracellular citrulline, a product of iNOS (Fig. 3, M and N), and extracellular nitrite (Fig. 3, O and P), suggesting a possible dose-dependent effect of NO.
As the results above suggested NO production causes inhibition of all three α-ketoacid dehydrogenase complexes, PDHC, OGDC and BCKDC, which control the mitochondrial oxidation of important nutrients (glucose, glutamine, and BCAA, respectively), such inhibition can have a significant impact on cellular bioenergetics. Indeed, consistent with a previous report (26), we observed that the cellular energy charge significantly decreased upon stimulation of myotubes, and this decrease is prevented by Nos2−/− (Fig. 3Q).
Like myotubes, undifferentiated myoblasts stimulated with TNF-α and IFN-γ express iNOS (Fig. 4A) and produce NO as well, as indicated by the accumulation of intracellular citrulline (Fig. 4B) and nitrite in media (Fig. 4C). Consistent with NO driving inhibition of α-ketoacid dehydrogenases, we found that α-ketoacids accumulated and acetyl-CoA decreased upon TNF-α and IFN-γ stimulation in myoblasts, and such changes are rescued by Nos2 knock out (Fig. 4, D-F). Incorporation of labeled glucose into TCA cycle intermediates is also reduced upon stimulation, which is rescued by Nos2 knock out (Fig. 4, G-J). Similarly, iNOS expression and activity can also be induced in myoblasts by LPS and IFN-γ stimulation (Fig. 4, K and L). We observed similar metabolic alterations, including accumulation of the substrates of α-ketoacid dehydrogenases (Fig. 4, M-O), decrease of their corresponding acyl-CoAs (Fig. 4, P and Q), and reduced labeling from glucose into TCA cycle intermediates (Fig. 4, R-U), upon LPS and IFN-γ stimulation. In summary, in all the tested conditions in which NO production is activated, cellular metabolism is significantly altered by NO-driven inhibition of α-ketoacid dehydrogenases, in both myotubes and myoblasts.
RNS-driven lipoic modification on E2 subunit promotes E3 subunit inhibition
The α-ketoacid dehydrogenase complexes are subject to regulation by many mechanisms. Besides the inhibitory modifications of the lipoic arm on the E2 subunit, it has been recently discovered that in macrophages, RNS can also inhibit PDHC via another mechanism, cysteine nitrosylation on its E3 subunit (DLD) (21). The normal catalytic function of DLD is to use a cysteine-cysteine active site to re-oxidize the reduced lipoic arm on the E2 subunit, then transfer the electron to FAD and then to NAD, to produce NADH (Fig. 1A). Based on this close interaction between E2 and E3 subunits, we hypothesized that the RNS-driven lipoic modification on the E2 subunit can further promote the cysteine modification on the E3 subunit through mechanisms such as trans-nitrosylation.
To test this hypothesis, we took advantage of the fact that, as demonstrated in our previous work (24), the modification of the E2 subunit's lipoic arm by RNS depends on the presence of substrates (pyruvate and CoA) to generate reduced thiols that are susceptible to modification and to deliver RNS modifications to the lipoic arm via SNO-CoA (Fig. 1A). If the modification and inhibition of DLD results from the interaction with the E2 subunit's modified lipoic arm, it too would be dependent on the presence of substrates. We therefore incubated purified PDHC with the NO donor, PAPA-NONOate, in the presence or absence of pyruvate and CoA. Only when both pyruvate and CoA were present did the NO donor cause a large reduction in overall PDHC activity (Fig. 5A) and in specific DLD activity as measured by spectrometric assay (Fig. 5B) and in-gel activity assay (Fig. 5C), performed as previously described (21, 37-39). The observation that the NO donor can cause significant inhibition of DLD activity without changing the total DLD level in the presence of pyruvate and CoA (Fig. 5, B-D) confirmed that in a normal cellular environment, where pyruvate and CoA are present, production of NO would be capable of modifying and inhibiting DLD, as recently reported in activated macrophages (21). However, the inhibition of DLD activity (18%, Fig. 5B) is relatively small compared to the inhibition of overall PDHC activity (82%, Fig. 5A), suggesting the contribution of E3 inhibition is minor and E2 subunit inhibition is the major driver of overall PDHC inhibition by RNS. The fact that without pyruvate or CoA, the NO donor alone does not cause significant DLD inhibition demonstrated that NO does not directly cause inhibitory modifications of DLD; instead, the inhibition is mediated by the interaction between the E3 subunit and the E2 subunit with an RNS-modified lipoic arm.
To further test the dependence of E3 subunit inhibition on the E2 subunit's lipoic arm in cells, we used a cell model that lacks the lipoic arm on the α-ketoacid dehydrogenase complexes' E2 subunits. Mitochondrial trans-2-enoyl-CoA reductase (MECR), a required enzyme in the lipoic acid biosynthetic pathway (Fig. 5E), was knocked out in the HAP1 cell line. As expected, MECR-null cells have no detectable level of lipoic arm on DLAT, DLST, or DBT, while WT HAP1 cells do (Fig. 5F). We then treated WT or MECR KO cells with NO donor. The level of functional lipoic arm on the α-ketoacid dehydrogenase complexes' E2 subunits was reduced upon NO donor treatment (Fig. 5F), and consistently, overall PDHC activity is reduced in WT HAP1 cells (Fig. 5G). MECR KO cells have no overall PDHC activity (Fig. 5G), as expected from the lack of lipoic arm. Importantly, when we specifically measured the activity of DLD, we found that DLD activity was only significantly decreased after NO donor treatment in WT, but not MECR KO, cells (Fig. 5H). This result provided in-cell evidence that the inhibition of DLD by RNS depends on the lipoic arm on the E2 subunit.
Another well-known posttranslational modification mechanism regulating the activity of some α-ketoacid dehydrogenase complexes acts through their E1 subunits. The E1 subunits of PDHC and BCKDC can be phosphorylated at specific sites. Phosphorylation at these sites inhibits PDHC and BCKDC activity and is under the regulation of specific kinases and phosphatases (13, 16, 40, 41). OGDC, however, has not been shown to be regulated by similar E1 phosphorylation. Interestingly, we found that in myotubes, BCKDH phosphorylation decreases upon stimulation with cytokines (Fig. 5, I and J), which can counteract the NO-dependent E2 inactivation and potentially reduce the overall activity loss. When stimulated cells were treated with iNOS inhibitor or when Nos2 was knocked out, in addition to the rescue of functional lipoic arm (Fig. 2, A and D), we observed that the stimulation-induced decrease in E1 subunit phosphorylation was also reversed (Fig. 5, I and J). These results suggested that changes in BCKDC E1 phosphorylation and E2 lipoic modification have a compensatory effect during myotube activation by cytokines. However, this apparent compensatory relationship between E2 subunit modification by RNS and E1 subunit phosphorylation is condition specific: we have observed that in macrophages upon classical activation, inhibitory PDHC E1 subunit phosphorylation is increased at the same time the E2 subunit's lipoic arm is inactivated by RNS (42). This condition-specific relationship reflects that the phosphorylation of the E1 subunit and the RNS-driven E2 subunit modifications occur through independent mechanisms, in contrast to the inhibitory modifications of the E2 and E3 subunits by RNS, which are mechanistically connected.
Discussion
Mitochondrial α-ketoacid dehydrogenase complexes catalyze crucial reactions at the crossroads within the metabolic network.Here we demonstrate that NO can strongly inhibit BCKDC.This study, together with our recently published work, revealed that RNS are capable of broadly inhibiting all the α-ketoacid dehydrogenase complexes through a common mechanism-modifying and inactivating the catalytic lipoic arm of their E2 subunits.We demonstrated that this mechanism drives significant alterations in the metabolism of carbohydrates and amino acids across multiple cell types, including macrophages, myotubes, and myoblasts, under conditions in which NO production is induced.It has been previously reported that NO production in cytokine-activated muscle cells has important physiological effects, including impairment of myoblast proliferation and differentiation (43) and induction of apoptosis in aging-induced sarcopenia (44).The inhibition of α-ketoacid dehydrogenase complexes, and the resulting alterations in mitochondrial metabolism, can play a role in mediating these effects.Given that purposeful production of NO by iNOS, eNOS (primarily expressed in endothelial cells), and nNOS (primarily expressed in neurons), as well as the generation of RNS as metabolic by-products, occurs in many physiological and pathological contexts (28,45); this mechanism is likely to have broad significance in regulating metabolism.Alterations of metabolism by RNS have the potential to have numerous downstream impacts via mechanisms including affecting protein acetylation and succinylation by altering acetyl-CoA and succinyl-CoA availability or changing cellular energetic status.The broad downstream effects and their mechanisms remain to be further examined.
The α-ketoacid dehydrogenase complexes are subject to tight regulation by a variety of mechanisms. Here we investigated the relationship between the RNS-driven inhibitory modifications on the E2 subunit and other regulatory posttranslational modifications targeting the α-ketoacid dehydrogenase complexes' E3 or E1 subunits. We found that RNS-driven modification of the E2 subunit's lipoic arm promotes inhibition of the E3 subunit. This molecular connection has important implications for the specificity of RNS-driven DLD inhibition and for the extent and reversibility of overall α-ketoacid dehydrogenase inhibition by RNS. Under high-RNS conditions, such as in classically activated macrophages, we found that α-ketoacid dehydrogenases are specifically and profoundly inhibited, while cell viability and the activity of many other mitochondrial enzymes remain high (42). If RNS caused inhibitory DLD nitrosylation by direct, nonenzymatic interaction with DLD, it would be mechanistically unclear why DLD would be preferentially modified and inhibited over many other mitochondrial proteins that have cysteine residues which can potentially be modified. This specificity question is explained by our model of multistep modification transfer: in cells, RNS can react with CoA, a relatively abundant thiol-containing metabolite, and generate SNO-CoA. Through the specific binding of SNO-CoA to the E2 subunit, the modification is efficiently delivered to the thiol of the lipoic arm; and through the local interaction between the E2 and E3 subunits, the E3 subunit is further modified. Through this mechanism, both E2 and E3 subunits can be inactivated by RNS, causing a greater effect on overall α-ketoacid dehydrogenase inhibition. As a result, to recover the overall enzyme activity, the inhibitory modifications on both E2 and E3 subunits need to be removed. The reversibility under specific cellular conditions is an important direction for future studies.
Overall, this study extended our knowledge about the mechanisms impacting the activity of α-ketoacid dehydrogenase complexes. This work showed that strong inhibition of α-ketoacid dehydrogenase complexes by RNS can have significant effects in cellular metabolism across various cell types. These findings merit future investigation to examine the broader physiological or pathological effects of this mechanism in vivo and explore the translational implications in conditions where elevated RNS play a key role, such as inflammatory disorders and cardiovascular diseases.
The human chronic myeloid leukemia haploid cell line HAP1, WT or MECR KO (generated by Horizon, HZGHC006857c007), was cultured in Improved modified Eagle's medium with 10% FBS and 1% penicillin-streptomycin at 37 °C with 5% CO2. For NO donor treatment, 250 μM of DETA-NONOate was added to the media for 48 h with media and donor replacement every 24 h.
All cell lines were tested for mycoplasma contamination.
For leucine tracing experiments, media without L-leucine was supplemented with U-[13C]-L-leucine (Cambridge Isotope Laboratories, CLM-2262-H) at formulation concentration and was used in the place of chemically identical regular unlabeled media. Both RAW264.7 and C2C12 cells were cultured with stable isotope for 48 h with media changes at 24 and 2 h prior to metabolite extraction. For kinetic glucose tracing, media without D-glucose supplemented with U-[13C]-D-glucose (Cambridge Isotope Laboratories, CLM-1396-1) at formulation concentration was used.
For iNOS inhibitor treatment of C2C12 myotubes, 200 μM 1400W (Cayman Chemical, 81520) was added to the media 24 h prior to experiment start as pretreatment.The inhibitor was maintained in the media at the same concentration throughout the experiment duration.
CRISPR-Cas9-based genetic knockout of Nos2
Nos2−/− KO cells were generated as previously described (24). Briefly, 2 × 10^6 C2C12 myoblast cells were transfected via electroporation with 1 μM fluorescent trans-activating CRISPR RNA (tracrRNA, IDT, catalog no. ATTO550), 1 μM RNA targeting mouse Nos2 (crRNA, GTGACGGCAAACATGACTTC, IDT Design ID: Mm.Cas9.NOS2.1.AA), and 1 μM HiFi Cas9 enzyme (IDT, catalog no. 1081060) in 100 μl Nucleofector solution V plus supplement (Lonza, catalog no. VCA-1003), using the preprogrammed electroporation protocol B-032 on a Nucleofector II/2b. Immediately, cells were plated on a 35-mm plate with DMEM media with 10% FBS without penicillin/streptomycin. Eighteen hours after transfection, cells positive for fluorescent tracrRNA were single-cell sorted by FACS (BD FACSAria III) onto a 96-well plate in DMEM media with 10% FBS and 1% penicillin/streptomycin. Single-cell colonies were expanded and subsequently screened via Western blot for the lack of iNOS expression after 48 h stimulation with TNFα and IFNγ. Further validation of positive hits (myoblasts without iNOS expression after cytokine treatment) was performed by differentiating the selected myoblast clones into myotubes followed by 48-h treatment with TNFα and IFNγ and subsequent Western blot for iNOS expression, measurement of nitrite concentration in the media using the Griess Reagent System (Promega, G2930), and measurement of intracellular citrulline abundance by LCMS.
Protein extraction, SDS-PAGE, and immunoblotting
Whole cell lysate was collected using RIPA buffer (150 mM NaCl, 1% NP-40 substitute, 50 mM Tris, 0.4 mM EDTA, 0.1% SDS, 0.5% sodium deoxycholate, 10% (v/v) glycerol, pH = 8.0).Lysate was incubated on ice for 15 min and spun at 12,000g for 5 min at 4 C. Total soluble protein concentration in supernatant was determined with bicinchoninic acid (BCA) assay (Thermo Fisher Scientific, Pierce 23225).Denatured gel was run using a Thermofisher Scientific Mini Gel Tank system with Bolt Bis-Tris 8% or 4 to 12% gels and Bolt MES or MOPS running buffer.Proteins were then transferred to nitrocellulose membrane using Bolt Transfer Buffer.Total protein stain was used (Li-COR, 926-11011) to visualize loading.Membranes were blocked in 5% nonfat dairy milk in Tris-Buffer Saline with 0.01% Tween-20 (0.01% TBS-T) for 1 h at room temperature.Primary antibodies were diluted 1:1000 in 5% bovine serum albumin in 0.01% TBS-T and incubated with membrane overnight at 4 C. Membranes were washed with TBS-T and placed with secondary antibody diluted 1:10,000 in 5% bovine serum albumin in TBS-T for 1 h at room temperature.Blots were imaged on Odyssey CLx Imaging System (LI-COR Biosciences).
Immunoprecipitation
WT or Nos2−/− RAW264.7 cells were collected into centrifuge tubes after washing twice with Dulbecco's phosphate buffer saline (D-PBS) and scrapping off the culture plates.Cell suspensions were spun at 500g for 10 min at 4 C and supernatant was discarded.Protein was extracted from the cell pellets using extraction buffer (20 mM Tris HCl, pH 8.0, 150 mM NaCl, 1 mM EDTA, 1 mM EGTA, 1% NP-40, and Pierce protease and phosphatase inhibitors (Thermo Fisher Scientific, A32957 and A32953)) at a volume of 0.350 ml per 3.00 × 10 7 cells.Cell mixtures were incubated on ice for 30 min, then centrifuged at 5000g for 10 min at 4 C. Supernatant was transferred to a tube, and total soluble protein concentration was determined with BCA assay.Dynabeads Protein G (Thermo Fisher Scientific, 10003D) (200 μl per isolation) were washed three times with citrate-phosphate buffer (470 mg citric acid, 920 mg dibasic sodium phosphate dihydrate, pH = 8.0), then incubated with 2 μl of anti-lipoic acid antibody, or nonspecific IgG control, in 200 μl D-PBS with gentle mixing for 40 min at room temperature and washed three times with citrate-phosphate buffer.Antibody was cross-linked by washing twice with 0.2 M triethanolamine (pH = 8.2) and resuspending in fresh DMP solution (20 mM dimethyl pimelimidate dihydrochloride (Thermo Fisher Scientific, P08925G) in 0.2 M triethanolamine, pH = 8.2 (DOT Scientific Inc, DST23040-0.1)) and incubating for 30 min at room temperature.The reaction was terminated by resuspending beads in 50 mM Tris buffer (pH = 7.5) and incubating for 15 min.Cross-linked beads were washed three times with PBS with 0.1% Tween-20.To immunoprecipitate protein containing lipoic moiety, 2 mg of whole cell lysate were mixed with cross-linked bead-Ig complex and the mixture was rotated end-over-end overnight at 4 C. Then the beads were washed three times with PBS and protein was eluted with 50 μl of glycine elution buffer (50 mM glycine, pH = 2.8).To ensure complete elution, an additional 30 μl of elution buffer was added to the beads, and the eluants were pooled.The eluant was then neutralized with 1:1 (v/v) neutralization buffer (1 M Tris, pH = 7.5).Total protein concentration of eluant was determined with BCA assay, then the same amount was loaded for immunoblotting for DBT.
Metabolite extraction and LCMS analysis
Cells were washed three times with D-PBS, and intracellular metabolites were extracted with cold 80:20 methanol:H2O (v/v, LCMS grade). Samples were dried under nitrogen gas and resuspended in LCMS-grade H2O.
The metabolites reported were identified based on exact m/z and retention times determined with chemical standards. Data were analyzed with MAVEN (46, 47).
To quantify changes in relative metabolite levels, metabolite abundances measured by ion count in LCMS analysis were normalized to total protein content. To quantify absolute abundance for AMP, ADP, and ATP to calculate cellular energy charge, the ion count measured by LCMS was converted to molar quantity based on calibration curves obtained by running various concentrations of purified AMP, ADP, and ATP chemical standards on LCMS using the same method. Energy charge is calculated based on Equation 1.
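The referenced Equation 1 does not survive in this extraction; the standard Atkinson adenylate energy charge, which is presumably what is meant here, is: Energy charge = ([ATP] + 0.5 × [ADP]) / ([ATP] + [ADP] + [AMP]).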
Crude mitochondria isolation and matrix enrichment
Mitochondria isolation and matrix enrichment was performed as previously described (21, 48, 49) with adaptations.
Briefly, cells were harvested from tissue culture plate after washing three times with D-PBS.Cells were pelleted by spinning at 1000g for 5 min at 4 C. Cell pellet was resuspended in isolation buffer (10 mM Tris-MOPS, pH = 7.4, 1 mM EGTA/ Tris, and 200 mM sucrose) and then homogenized with Teflon pestle operating at 1600 rpm with 50 passes.Homogenate was spun at 600g for 10 min at 4 C. Supernatant was transferred to a new tube and spun at 7000g for 10 min at 4 C. Pellet was resuspended in isolation buffer then spun at 7000g.Mitochondria pellet was lysed with hypotonic lysis buffer (20 mM Tris, pH 7.4, 1 mM EDTA, and Pierce protease and phosphatase inhibitors) and incubated on ice for 15 min.For crude mitochondria use, mitochondria protein concentration is determined with BCA assay.
For matrix lysate enrichment, pelleted crude mitochondria were lysed with 150 μl hypotonic lysis buffer, sonicated on ice for 5 s followed by 30 s rest for a total of four times at 40% amplitude with a probe sonicator.After the addition of 30 μl 1 M NaCl and 20 μl 50% glycerol (v/v) to reach final concentration of 150 mM NaCl and 5% glycerol, lysate was spun at 20,000g for 30 min at 4 C.For DLD activity assays, matrix lysate was dialyzed (3.5 K MWCO membrane) overnight at 4 C rocking in 20 mM sodium phosphate buffer.The protein concentration in supernatant matrix lysate was determined with BCA assay.
Branched chain α-ketoacid dehydrogenase activity assay
The BCKDC activity assay in lysate was performed as previously described with adaptations (31). To initiate the reaction, 200 μl of assay buffer with substrate mixture (30 mM K3PO4, 2 mM MgSO4, 2 mM DTT (Thermo Fisher Scientific, AAJ1539706), 0.56 mM TPP (Sigma-Aldrich, C8754-1G), 0.56 mM CoA (Cayman Chemical, 16147), 1 mM NAD+ (Cayman Chemical, 16077), Triton X-100, 0.2 mM alpha-ketoisocaproate (Cayman Chemical, 21052-5), and 5 μM rotenone (VWR, 10189-314) at pH = 7.0), which was prewarmed to 30 °C, and 20 μg of crude isolated mitochondria or matrix lysate, as specified in figure legends, were mixed in each well of a 96-well plate. The reaction was allowed to proceed at 30 °C. At designated time points (typically every 20 min from reaction start to 1 h), 50 μl or 60 μl of reaction mixture sample was collected and quenched in 4× volume (200 μl or 240 μl) of LCMS-grade methanol. Samples were spun and supernatants were dried under nitrogen gas, then resuspended in LCMS-grade H2O and analyzed by LCMS. The reaction rate was quantified by the production of isovaleryl-CoA over time. The slope was fitted by linear regression. As a blank control, the same amount of mitochondria lysate was mixed with assay buffer without alpha-ketoisocaproate.
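A minimal sketch of the slope fit described above (assumed workflow; the time points match the sampling scheme in the text, but the ion counts are placeholders):

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical time course: sampling times (min) and isovaleryl-CoA ion counts.
time_min = np.array([0, 20, 40, 60])
isovaleryl_coa = np.array([1.0e4, 3.2e4, 5.5e4, 7.6e4])

fit = linregress(time_min, isovaleryl_coa)
print(f"BCKDC activity ~ {fit.slope:.1f} ion counts/min (R^2 = {fit.rvalue ** 2:.3f})")
# The blank run without alpha-ketoisocaproate would be fitted the same way and subtracted.
```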
For experiments involving in vitro treatment of mitochondria lysates with NO donor or SNO-CoA, lysate was diluted to 1.5 μg/μl in assay buffer (30 mM K 3 PO 4 , pH 7.0); PAPA-NONOate or SNO-CoA were added and incubated at room temperature for 3 to 5 h, as specified in figure legend.SNO-CoA was prepared as previously described (24, 50, 51) by combining 100 mM CoA in 1 M HCl with 100 mM NaNO 2 in 100 μM EDTA and 100 μM DPTA in a 1:1 (v/v) ratio.
Cell lysate PDHC activity assay
PDHC activity in cell lysate from HAP1 cells was analyzed using a pyruvate dehydrogenase enzyme activity microplate assay kit (Abcam, ab109902) per the manufacturer's instructions.This kit measures PDHC activity by monitoring pyruvate-dependent NADH production.The NADH level was measured by absorbance of NADH-coupled dye (450 nm) using a BioTek Epoch2 plate reader.Data were analyzed using Gen5 TS v.2.09 software (BioTek Instruments, Inc).
Blue native-PAGE and DLD activity assay
For native gel analysis, the Novex Native Bis-Tris Gel System was used (Thermo Fisher Scientific).Samples were loaded in 4 to 16% Bis-Tris NativePAGE gel and run at 150 V for 1 h at 4 C with anode buffer in outer chamber and light blue cathode buffer in inner chamber.After 1 h, the light blue cathode buffer was replaced with anode buffer.The gel was run for an additional 1 h at 250 V on ice at 4 C.
The in-gel DLD activity assay was performed as previously described (21, 37, 38). Briefly, the native gel was immediately removed from the cassette and incubated in activity assay buffer (50 mM potassium phosphate, pH = 7.0, 0.2 mg/ml nitro blue tetrazolium (NBT) chloride (Alfa Aesar, B23792.02), and 0.1 mg/ml NADH (Cayman Chemical, 16078)) for 40 to 50 min until purple bands appeared. The gel was then imaged on an EPSON Scan V700. After the image was obtained, the gel was fixed, stained with Coomassie R-250, and destained for visualization of the protein standard (Thermo Fisher Scientific, LC0725). The product of the diaphorase activity of DLD is NBT-formazan, which has a maximum absorbance between 500 and 600 nm (39). Therefore, DLD activity was quantified by the rate of diaphorase activity as measured by the increasing NBT-formazan absorbance at 568 nm over time using a Bio-Tek Epoch2 microplate reader. Absorbance was measured continuously, and the mean velocity was determined from the linear portion of the curve. As a blank control, the same amount of mitochondria lysate was mixed with assay buffer without NADH. Mean velocity was normalized to relative DLD protein expression by corresponding Western blot. Data were analyzed using Gen5 TS v.2.09 software (BioTek Instruments, Inc).
Measurement of nitrite concentration
To measure nitrite production by cells, 2 ml media was incubated with each well of cells in 6-well plate (macrophages or myotubes) or 10 ml media was incubated with each 10 cm plate of cells (macrophages), for 48 h, then spent media was collected.Nitrite concentration in spent media was measured using Griess Reagent System (Promega, G2930) per manufacturer's instructions.
Statistical analysis
Unless otherwise stated in the figure legend, for comparisons between two groups, an unpaired Student's t test was performed. For comparisons between three groups or more, one-way ANOVA followed by Tukey's post hoc test for multiple comparisons was performed.
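A minimal sketch of these two analyses in Python (group labels and values are placeholders, not data from the study):

```python
import numpy as np
from scipy.stats import ttest_ind, f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder measurements for illustration only.
wt_unstim = np.array([1.00, 0.95, 1.05])
wt_stim = np.array([0.20, 0.25, 0.18])
ko_stim = np.array([0.80, 0.85, 0.78])

# Two groups: unpaired Student's t test.
print(ttest_ind(wt_unstim, wt_stim))

# Three or more groups: one-way ANOVA followed by Tukey's post hoc test.
print(f_oneway(wt_unstim, wt_stim, ko_stim))
values = np.concatenate([wt_unstim, wt_stim, ko_stim])
groups = ["WT"] * 3 + ["WT+stim"] * 3 + ["KO+stim"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```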
Software
LCMS data analysis was performed with Maven Version 6.
Figure 1.Reactive nitrogen species inhibits BCKDC in activated macrophages.A, model schematic.RNS can cause inhibitory modification of the E2 subunit's catalytic lipoic arm, through the targeted delivery via SNO-CoA.B, relative BCKDC activity in mitochondria lysate from WT RAW264.7 cells, after in vitro incubation with PAPA-NONOate (1 mM) for 5 h at room temperature.C, relative BCKDC activity in matrix lysate from WT RAW264.7 cells after in vitro incubation with varied doses of SNO-CoA in the presence of NADH (200 μM) for 3 h at room temperature.D-F, immunoblot for iNOS in whole cell lysate (D), relative intracellular level of citrulline (E), and concentration of nitrite in the spent media (F), of WT and Nos2−/− RAW264.7 cells with or without 48-h LPS/ IFNγ stimulation.G, percent BCKDC activity in WT or Nos2−/− RAW264.7 cells after 48-h LPS/IFNγ stimulation normalized to unstimulated.H and I, the level of functional lipoic arm on DBT in WT or Nos2−/− RAW264.7 cells with or without 48-h LPS/IFNγ stimulation.H, representative immunoblots of DBT in RAW264.7 whole cell lysates (input) or after immunoprecipitation of lipoic acid or IgG control.I, relative fraction of DBT with functional lipoic arm in stimulated state compared to unstimulated state, in WT or Nos2−/− cells, quantified based on blots, N = 2. J, U-[ 13 C]-L-leucine isotopic tracing schematic.Red dots represent 13 C conversion through the leucine catabolic pathway.K, relative abundance of isovaleryl-CoA (m+5 labeled from U-[ 13 C]-L-leucine) in WT or Nos2−/− RAW264.7 cells with or without 48-h LPS/IFNγ stimulation.L, the fraction of m+2-labeled acetyl-CoA after 48 h labeling in media containing U-[ 13 C]-L-leucine (N = 3) or unlabeled control (N = 4) in WT or Nos2−/− RAW264.7 cells with or without 48-h LPS/IFNγ stimulation.B, G, and I, statistical analysis was performed with unpaired student's t test with p-values reported.C, E, F, K, and L, statistical analysis was performed using one-way ANOVA followed by post hoc Tukey's test.Bars with different lower-case letters (a, b, c, or d) indicate a statistically significant difference with p < 0.05, with exact pvalues reported in Supporting Information.B, C, E-G, I, K, and L, all bars and error bars represent mean ± SD, N = 3, unless otherwise noted.BCKDC, branched chain α-ketoacid dehydrogenase complex; IFNγ, interferon-gamma; iNOS, inducible nitric oxide synthase; LPS, lipopolysaccharide; RNS, reactive nitrogen species.
Figure 2. Nitric oxide production inhibits BCKDC in C2C12 myotubes.A, representative immunoblots for iNOS, DLAT, DLST, DBT, and lipoic moiety in whole cell lysates of WT C2C12 myotubes with or without 48-h TNFα/IFNγ stimulation and with or without treatment with 1400W (200 μM).This experiment was repeated four times.To compare the changes in functional lipoic arm upon stimulation, the relative ratio of lipoic band to its corresponding total E2 subunit band (DLAT for the lipoic band at 70 kDa and DLST or DBT for the lipoic band at 50 kDa) was quantified and normalized to unstimulated, untreated condition.Each dot represents quantified results from one independent experiment.Statistical analysis for significance was performed with paired student's t test with p-values reported.ns indicates p > 0.05.B, relative BCKDC activity as measured by isovaleryl-CoA production for α-ketoisocaproate over time in crude mitochondria isolation from WT C2C12 myotubes treated with or without 1400W (200 μM) and with or without 48-h TNFα/IFNγ stimulation.C, representative immunoblot for myosin heavy chain (MHC) and beta-actin over a time course as WT or Nos2−/− C2C12 cells differentiation.GM, undifferentiated C2C12 cells cultured in growth media (GM); D1, first day in differentiation media; D5, after 5 days in differentiation media.This experiment was repeated with three replicates, with quantification of relative MHC expression normalized to beta-actin expression presented in bar graph.D, the same as (A) but with genetic Nos2 KO instead of 1400W treatment.E, relative abundance of α-ketoisocaproate (6-labeled from U-[ 13 C]-L-leucine) in WT or Nos2−/− C2C12 myotubes with or without 48-h TNFα/IFNγ stimulation.F, the fraction of m+2-labeled acetyl-CoA after 48 h labeling with U-[ 13 C]-L-leucine in WT or Nos2−/− C2C12 myotubes with or without 48-h TNFα/IFNγ stimulation.B and C, statistical analysis for significance was performed with unpaired student's t test with p-values reported.ns indicates p > 0.05.E and F, statistical analysis for significance was performed with one-way ANOVA followed by a post hoc Tukey's test.Bars with different lower-case letters (a, b, or c) indicate a statistically significant difference with p < 0.05, with exact p-values reported in Supporting Information.A-F, all bars and error bars represent mean ± SD.N = 3, unless otherwise noted.BCKDC, branched chain α-ketoacid dehydrogenase complex; DBT, dihydrolipoamide branched chain transacylase; DLAT, dihydrolipoamide S-acetyltransferase; DLST, dihydrolipoamide S-succinyltransferase; IFNγ, interferon-gamma; iNOS, inducible nitric oxide synthase; TNFα, tumor necrosis factor-alpha.
Figure 5. Reactive nitrogen species-driven lipoic modification on E2 subunit promotes E3 subunit inhibition.A, relative enzymatic activity of purified porcine PDHC after incubation at room temperature with or without PAPA-NONOate (1 mM), in the presence or absence of pyruvate (1 mM) and CoA (100 μM).B, DLD activity measured by spectrometric assay after purified porcine PDHC was incubated with indicated combination of PAPA-NONOate (1 mM), pyruvate (1 mM), and CoA (100 μM).C, in-gel assay after purified porcine PDHC was incubated with indicated combination of PAPA-NONOate (1 mM), pyruvate (1 mM), and CoA (100 μM).DLD activity is indicated by purple dye intensity at the molecular weight of DLD dimer (146 kDa) in native gel.Samples loaded on gel in paired technical duplicates D, immunoblot for total DLD level in the same samples used in the PDHC, spectrometric, and in-gel activity assays in (A), (B), and (C), respectively.E, schematic of WT and MECR KO HAP1 cells.F, immunoblot for lipoic acid, DLAT, and DLD from WT and MECR (KO) HAP1 cells with or without 250 μM DETA-NONOate treatment.G, relative enzymatic activity of PDHC from WT or MECR KO HAP1 cells with or without 250 μM DETA-NONOate treatment.H, percent DLD enzymatic activity in WT or MECR KO HAP1 cells treated with 250 μM DETA-NONOate normalized to respective untreated condition.I, immunoblots for phosphorylated (Ser293) and total BCKDH in unstimulated or stimulated (TNFα and IFNγ, 48 h) WT C2C12 myotubes with or without treatment of 1400W.J, immunoblots for phosphorylated (Ser293) and total BCKDH in unstimulated or stimulated (TNFα and IFNγ, 48 h) WT (WT) or Nos2−/− C2C12 myotubes.I and J, relative ratio of phospho-BCKDH (Ser293) to total-BCKDH (normalized to WT unstimulated) reported above respective blot.A, B, G, and H, statistical analysis for significance was performed with unpaired student's t test with p-values reported.All bars and error bars represent mean ± SD, N = 3. DLAT, dihydrolipoamide S-acetyltransferase; DLD, dihydrolipoamide dehydrogenase; IFNγ, interferon-gamma; MECR, mitochondrial trans-2-enoyl-CoA reductase; PDHC, pyruvate dehydrogenase complex; TNFα, tumor necrosis factor-alpha. | 2023-10-14T06:17:44.931Z | 2023-10-01T00:00:00.000 | {
"year": 2023,
"sha1": "e7e9bf416ebc2672732c9de1d0e6adaf0879646c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1101/2023.07.31.551364",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "ab42efb7f123eecf4c8764b055f182278616b880",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
264918494 | pes2o/s2orc | v3-fos-license | MON-442 Tall Cell Variant of Papillary Thyroid Carcinoma Presenting Acutely as a Right Thyroid Mass and Thyroiditis - an Unusual Presentation
Abstract INTRODUCTION Aggressive malignant thyroid tumors (AMTT) can rarely present with signs and symptoms mimicking those of acute suppurative thyroiditis (AST). In its early course, correct diagnosis may be difficult but a prompt recognition is crucial. We report an interesting case of an aggressive papillary thyroid cancer with presenting acute symptoms masquerading as AST. CLINICAL CASE A 50-year-old Caucasian male was admitted for a 2-day history of sore throat, odynophagia, low-grade fever with a tender and enlarging right neck swelling, not improving with oral antibiotics. CT neck showed a 5.8 x 4.5 cm mass in the right thyroid lobe with shallow retropharyngeal effusion. Labs showed elevated WBC 10.7 x 10^9/L (4–10.5), ESR 20 mm/hr (0–12), CRP 92.93 mg/L (0.2–10), with negative rapid strep and IgM for viruses EBV and CMV. Thyroid function tests were normal. Ultrasound neck showed an ill-defined 6.5 cm hypoechoic mass with cystic component and increased vascularity and calcification, replacing the right thyroid lobe. Due to concern for thyroid abscess he underwent urgent aspiration. Fluid analysis and cultures were negative for bacterial source. FNA (fine needle aspiration) of one specimen was suspicious for follicular neoplasm with predominant Hürthle cells and suspicious for thyroid neoplasm with papillary architecture in the other. Core biopsy was suspicious for papillary carcinoma but limited due to scant amount of tissue. Afirma was unavailable due to insufficient RNA yield. Flow cytometry was negative for abnormal lymphocyte population. Repeat CT neck 3 weeks later was concerning for metastatic lymphadenopathy. He underwent total thyroidectomy with lymph node (LN) dissection and pathology showed papillary thyroid cancer, tall cell variant, tumor size 6 cm, predominantly hemorrhagic, extending to perithyroidal strap muscles and involving 5/6 LN in the right tracheoesophageal groove. It was a stage I, pT3bN1aMx cancer. Subsequently, he underwent Thyrogen-stimulated I-131 ablation of residual thyroid carcinoma with pretherapy scan showing increased uptake in the thyroid bed and a satellite lesion suspicious for LN metastasis. Post-ablative scan showed no other new lesions. He is now on levothyroxine suppressive therapy. CONCLUSION Early course of AMTT can rarely present as AST, both being progressive and fatal diseases. Our case had a typical presentation of AST, causing delay in diagnosis of his very aggressive form of thyroid cancer. Gross pathology showed predominantly hemorrhagic areas in the tumor with central necrosis, which was likely the cause of the acute mass with systemic signs suspicious for infection. One should consider an alternative diagnosis in suspected cases of AST in the absence of positive cultures or lack of improvement with parenteral antibiotics. It is imperative to make a diagnosis without any delay as prognosis for either condition depends on prompt recognition and treatment.
made the case more fascinating, especially since its management with steroids caused an improvement in the patient's PTH status. Regardless of etiology, primary hypoparathyroidism is treated with lifelong supplementation of calcium and calcitriol to a goal serum calcium level at the lower limit of normal. Reference: Abate EG, Clarke BL. Review of Hypoparathyroidism. Front Endocrinol (Lausanne). 2017; 7:172. Published 2017 Jan 16
Tall Cell Variant of Papillary Thyroid Carcinoma Presenting Acutely as a Right Thyroid Mass and Thyroiditis -an Unusual Presentation
Itivrita Goyal, MD, Salman Zahoor Bhat, MD, Manav Batra, MBBS, Abhijana Karunakaran, MD.University at Buffalo, Buffalo, NY, USA.
MON-442 INTRODUCTION
Aggressive malignant thyroid tumors (AMTT) can rarely present with signs and symptoms mimicking those of acute suppurative thyroiditis (AST). In its early course, correct diagnosis may be difficult but prompt recognition is crucial. We report an interesting case of an aggressive papillary thyroid cancer presenting with acute symptoms masquerading as AST. CLINICAL CASE A 50-year-old Caucasian male was admitted for a 2-day history of sore throat, odynophagia, and low-grade fever with a tender and enlarging right neck swelling, not improving with oral antibiotics. CT neck showed a 5.8 x 4.5 cm mass in the right thyroid lobe with shallow retropharyngeal effusion. Labs showed elevated WBC 10.7 x 10^9/L (4-10.5), ESR 20 mm/hr (0-12), CRP 92.93 mg/L (0.2-10), with negative rapid strep and IgM for the viruses EBV and CMV. Thyroid function tests were normal. Ultrasound neck showed an ill-defined 6.5 cm hypoechoic mass with cystic component and increased vascularity and calcification, replacing the right thyroid lobe. Due to concern for thyroid abscess he underwent urgent aspiration. Fluid analysis and cultures were negative for a bacterial source. FNA (fine needle aspiration) of one specimen was suspicious for follicular neoplasm with predominant Hürthle cells and suspicious for thyroid neoplasm with papillary architecture in the other. Core biopsy was suspicious for papillary carcinoma but limited due to scant amount of tissue. Afirma testing was unavailable due to insufficient RNA yield. Flow cytometry was negative for an abnormal lymphocyte population. Repeat CT neck 3 weeks later was concerning for metastatic lymphadenopathy. He underwent total thyroidectomy with lymph node (LN) dissection and pathology showed papillary thyroid cancer, tall cell variant, tumor size 6 cm, predominantly hemorrhagic, extending to perithyroidal strap muscles and involving 5/6 LN in the right tracheoesophageal groove. It was a stage I, pT3bN1aMx cancer. Subsequently, he underwent Thyrogen-stimulated I-131 ablation of residual thyroid carcinoma with a pretherapy scan showing increased uptake in the thyroid bed and a satellite lesion suspicious for LN metastasis. The post-ablative scan showed no other new lesions. He is now on levothyroxine suppressive therapy.
CONCLUSION
Early course of AMTT can rarely present as AST, both being progressive and fatal diseases. Our case had a typical presentation of AST, causing delay in diagnosis of his very aggressive form of thyroid cancer. Gross pathology showed predominantly hemorrhagic areas in the tumor with central necrosis, which was likely the cause of acute mass with systemic signs suspicious for infection. One should consider an alternative diagnosis in suspected cases of AST in the absence of positive cultures or lack of improvement with parenteral antibiotics. It is imperative to make a diagnosis without any delay as prognosis for either condition depends on prompt recognition and treatment.
SUN-474
Background: Poorly differentiated thyroid carcinoma (PDTC) is a rare and aggressive subtype with morphological/behavioral features between differentiated thyroid carcinoma (DTC) and anaplastic thyroid carcinoma (ATC). Clinical case: A 43-year-old female presented with a 3 cm right thyroid mass noted on US neck. FNA biopsy showed undifferentiated carcinoma, large cell type. Additional immunostains were suggestive of ATC. Pre-surgery nonstimulated thyroglobulin (NSTG) was 311 (RR 0-55 ng/dl). Pathology post-total thyroidectomy with bilateral level VI lymph node dissection showed a 3.2 x 2.5 x 2.5 cm carcinoma with vascular and capsular invasion. Most of the mass consisted of very atypical pleomorphic cells; mitosis was difficult to find. The tumor did not show the widely invasive-destructive pattern commonly seen in ATC. An adjacent differentiated component showed predominantly follicular pattern and was described as dedifferentiated follicular carcinoma. All lymph nodes were negative for metastatic disease. Post-surgery NSTG was <0.2 (RR <0.1 ng/ml as athyreotic), stimulated TG was 2.22 with negative TG antibodies. Four months later, she received 193.5 mCi radioactive iodine (RAI) therapy. The post-therapy scan showed no Iodine-131 avid uptake in neck or distant metastasis. Neck imaging and TG levels done periodically showed no structural or biochemical evidence of recurrence. Currently the patient is cancer-free for 14 years since diagnosis with no need for additional therapies. Discussion: PDTC accounts for 1-15% of all thyroid cancers. Although PDTC is rare, it is a clinically significant histological diagnosis as it represents the main cause of death from non-anaplastic follicular cell-derived thyroid carcinoma. The Turin proposal published in 2007 suggested three criteria for the diagnosis of PDTC which included the pattern of growth and high-grade features. PDTC presents more frequently with locally invasive extra-thyroidal disease, metastasis to regional lymph nodes and distant organs compared to DTC. Despite the capacity to have RAI uptake, there has been no evidence of significant improvement in survival due to tumor heterogeneity in differentiation. Recent data suggest that age more than 45 years, tumor size more than 4 cm, extra-thyroidal extension, higher pathological T stage, positive margins, and distant metastasis predict worse prognosis. Conclusion: Our patient showed an excellent response to therapy in spite of having PDTC with positive margins. We hypothesize that this could be likely due to young age at the time of diagnosis, early detection of tumor while it was localized in the thyroid without distant metastasis, as well as heterogeneity in the tumor with differentiated cells that are responsive to RAI. We conclude that with early detection, timely surgery, and adjuvant therapy, excellent prognosis can be achieved in patients with PDTC.
SUN-205
Background: Pseudo-Cushing Syndrome (PCS) is an underrecognized clinical entity that is a reversible consequence of alterations in cortisol production. We present a case of a patient with presumed type 1 Diabetes Mellitus (DM) who was found to have PCS secondary to malnutrition. Once the nutritional status normalized, the patient's glycemic control remarkably improved and became well-controlled on metformin alone. Clinical case: A 54-year-old female with poorly controlled insulin-dependent DM for 10 years was referred for concern for adrenal insufficiency after an ACTH came back elevated in the setting of intractable nausea, vomiting and considerable weight loss over 1 year. Prior HbA1c was 16.2% (RR 4.4-6.7). On exam her vitals were normal, body mass index (BMI) was 15 kg/m^2. Workup confirmed an elevated ACTH of 100 pg/ml (RR 6-50 pg/ml); however, random PM cortisol was unexpectedly elevated at 26.58 ug/dL. 8 AM labs for ACTH and cortisol were similarly elevated at 91 pg/ml and 28.33 ug/dl, respectively. She had no evidence of classic Cushingoid features. Subsequent low dose dexamethasone suppression test and 24-hour urine free cortisol were negative. Over 18 months, with optimization of her insulin therapy, BMI improved to 19 kg/m^2, and ACTH and cortisol started to downtrend spontaneously. After 30 months, her BMI improved to 20 kg/m^2. Repeat blood work showed A1C 6.5%, ACTH and cortisol completely normalized to 42 pg/dl and 8 ug/dL, respectively. After being adherent to insulin for a few years, her gluco-toxicity state resolved. A month prior to following up, she self-discontinued insulin due to hypoglycemia but continued on metformin. Currently she continues to remain off insulin. Discussion: PCS is a challenging diagnosis to recognize and differentiate from Cushing Disease (CD) especially due to overlap in biochemical profile. It is important to be aware of 1 Rush University Medical Center & John H. Stroger Jr. Hospital of Cook County, Chicago, IL, USA, 2 J Stroger Hosp-Cook County, Burr Ridge, IL, USA.
1 Rush University Medical Center & John H. Stroger Jr. Hospital of Cook County, Chicago, IL, USA, 2 Stroger Hospital of Cook County, Chicago, IL, USA. | 2020-05-13T18:08:03.608Z | 2020-04-01T00:00:00.000 | {
"year": 2020,
"sha1": "673a48490a20da62ea0d68abe0e3fa9f17732e78",
"oa_license": "CCBYNCND",
"oa_url": "https://academic.oup.com/jes/article-pdf/4/Supplement_1/MON-442/33188589/bvaa046.341.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "673a48490a20da62ea0d68abe0e3fa9f17732e78",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
119702575 | pes2o/s2orc | v3-fos-license | Solvability of the asymmetric Bingham fluid equations
In this work, we investigate the asymmetric Bingham fluid equations. The asymmetric fluid of Bingham includes symmetric and antisymmetric stresses with such stresses appearing as an elastic response to the micro-rotational deformations of grains in a complex fluid. We show the global-in-time solvability of a weak solution for three dimensional boundary value problem with Navier boundary conditions of the asymmetric Bingham fluid equations.
Introduction
A Newtonian fluid obeys the constitutive relation that the deviatoric stress tensor is a linear function of the rate-of-strain tensor, and if the fluid is isotropic then the stress and rate-of-strain tensors are symmetric. Many fluids cannot be described by the Newtonian constitutive relations, such as slurries, animal blood, mud (mixtures of water and clay), viscous polymers, and polymeric suspensions. These fluids do not commence to flow until the applied stress attains a certain threshold magnitude, called the yield stress τ * , after which they behave as a Newtonian fluid. An example is toothpaste, which will not be extruded until a certain pressure is applied to the tube; it is then pushed out as a solid plug. The physical reason for such behaviour is that the fluid contains particles (as in toothpaste, paints, or clay) or large molecules (as in polymers or animal blood), which interact and create a weak rigid structure. Therefore a certain stress is required to break this weak rigid structure. As soon as the structure has been broken, the particles move with the fluid under viscous forces. The particles associate again if the stress is removed. Such behaviour was first reported in an experimental study by Bingham [3], where he proposed its mathematical model. Later on, fluids of this type came to be called Bingham plastic fluids. A Bingham plastic fluid behaves as a rigid body at low stresses but flows as a viscous fluid at high stress. Nowadays it is used as a mathematical model of mud flows in drilling engineering, heavy oil, lava (a mix with melting snow and stones), the handling of slurries, and waxy crude oils. A recent example concerns propane flow within a hydro-fracture [20]. Significant contributions to the study of Bingham plastic fluids have been made by Oldroyd [14], Mossolov and Miasnikov [13], Glowinski and Wachs [11], Papanastasiou [15], and many other scientists.
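For orientation, the classical symmetric Bingham constitutive law alluded to here can be written as follows; this is the textbook form (not a formula taken from this paper), with µ the plastic viscosity, τ * the yield stress, S the viscous part of the stress, and B^s the rate-of-strain tensor:

```latex
\[
\begin{cases}
S = 2\mu\, B^{s} + \tau_{*}\, \dfrac{B^{s}}{|B^{s}|}, & B^{s} \neq 0,\\[6pt]
|S| \le \tau_{*}, & B^{s} = 0 .
\end{cases}
\]
```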
The Newtonian flow regime in the Bingham fluid (once the load exceeds the yield stress τ * ) has a drastic limitation, since it does not account for the behaviour of fluids that contain particles. Most of the above-mentioned fluid systems contain rigid, randomly oriented particles, irregularly shaped particles (drops in emulsions), branched and entangled molecules in the case of polymeric systems, or loosely formed clusters of particles in suspensions. The particles may shrink, expand, or change their shape, and they may rotate independently of the rotation of the fluid. To describe the behaviour of such fluids accurately, a so-called asymmetric continuum theory [22] (or micropolar theory [7], [9]) has been developed that ignores the deformation of the particles but takes into account the geometry and intrinsic motion of material particles. This theory is a significant yet simple generalization of the classical Navier-Stokes model that describes Newtonian fluids. Only one new vector field, the angular velocity field of rotation of the particles, is introduced. As a consequence, only one equation is added, representing the conservation of angular momentum. Asymmetric/micropolar fluids belong to the class of fluids with a non-symmetric stress tensor, which is more general than the class of classical Newtonian fluids.
Shelukhin and Růžička [21] suggested a mathematical model that describes the behaviour of an asymmetric/micropolar viscous Bingham fluid. In the paper [19] the global solvability of the mathematical model of [21] was demonstrated in the special case of one-dimensional flow. Later, in [18], the authors obtained the solvability of the stationary solution of this model in the three-dimensional case, but only in the particular case when the stress tensor has no non-symmetric part.
The main objective of the current work is to correct the model suggested in [21] and to show the well-posedness of the modified model. The paper is organized as follows: • First, in Section 2 we describe the model proposed in [21] and modify it.
• In Section 3 we explain the main idea of the modification of the Shelukhin-Růžička model and collect some technical results used in our main result, the proof of global-in-time solvability for the modified model. In particular, we introduce a potential for the viscous part of the stress tensor and completely characterize its sub-differential (see Proposition 3.7).
• In section 4 we formulate boundary-value problem and a global-in-time existence theorem 4.1 for the modified model.
• In Section 5 we introduce an approximated problem (27), depending on a regularization index n ∈ N, and show the solvability of this approximated problem by the Schauder fixed point theorem. We also derive a priori estimates for the solution of (27) that are independent of n; • Finally, in Section 6 we prove Theorem 4.1, applying the Lions-Aubin compactness theorem and the a priori estimates of Section 5.
Model of asymmetric Bingham fluids
In what follows we explain the mathematical model of asymmetric Bingham fluid that was proposed in the article [21]. For simplicity of consideration in this article we consider a particular case when the angular velocity field is zero. For any matrix X ∈ R 3×3 we define the symmetric and asymmetric parts with the adjoint matrix X T having the property X T ij = x ji . Also we denote For any matrices X, Y ∈ R 3×3 the scalar product X : Y and the modulus |X| of X are defined by Let v = v(x, t) be the velocity of the mass center of the material point (ξ, t) for an asymmetric Bingham fluid. We denote the rate of strain tensor and introduce the matrix are the symmetric and asymmetric parts of B = B(v), respectively. The positive constants µ i are viscosities of the asymmetric Bingham fluid. An instant stress state of the fluid is described by the Cauchy stress tensor T = −p I + S, where p and S are the pressure and the viscous part of the stress tensor. In [19] the viscous part S of the stress tensor of the fluid was suggested to be expressed as for some tensor S plug ∈ R 3×3 , such that |S plug | τ * . The positive constant τ * is the yield stress. Finally, we have the momentum balance law describing the motion of asymmetric fluid of Bingham where ρ is the density and f is the mass force vector.
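A plausible reconstruction of the notation (1)-(4) used above (standard definitions, consistent with Lemma 3.1 below; the displayed formulas are our reading, not recovered verbatim from the source) is the following. A natural reading of the text is that B(v) = ∇v, so that B^s(v) is the usual rate-of-strain tensor and B^a(v) its antisymmetric (spin) part:

```latex
\[
X^{s} = \tfrac{1}{2}\,\bigl(X + X^{T}\bigr), \qquad
X^{a} = \tfrac{1}{2}\,\bigl(X - X^{T}\bigr), \qquad
X : Y = \sum_{i,j=1}^{3} x_{ij}\, y_{ij}, \qquad
|X| = \sqrt{X : X}.
\]
```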
Remark 2.1. Let us introduce the potential V. Then the constitutive law (5) for the asymmetric Bingham fluid can be formulated as S ∈ ∂V (B 0 ). Let us recall that this inclusion is equivalent to the variational inequality (7). Even though relation (5) describes a plug zone in the Bingham fluid, there is a significant restriction in such modelling. As mentioned in the Introduction, in the articles [18], [19] the solvability of the model (5)-(6) was shown only in the case when the asymmetric part is not present in the tensor S. As is well known, one of the principal approaches to the study of problems with inclusions is the theory of monotone operators, developed by Duvaut and Lions [8]. The major difficulty in the study of the Shelukhin-Růžička model [21] lies in the presence of the term S : B 0 in the inequality (7). The asymmetric part B a of B 0 does not permit the application of the theory of monotone operators.
In the following we present our modification of the above-described model (5)-(6) in such a way that the theory of monotone operators can be applied. Moreover, in our model the viscous part S will be more general than in the mathematical model of [21]. For vector functions v ∈ R 3 we consider B = B(v) introduced in (3). For the tensor B defined in this way we introduce the following tensors, where p ≥ 2, µ 1 , µ 2 are the viscosities of the viscoplastic Bingham fluid and τ * is a so-called plug parameter. Let us define the viscous part S of the asymmetric Bingham fluid by (9), for some tensor S plug = S plug (x, t) ∈ R 3×3 which fulfils the restriction |S plug | ≤ τ * . The main justification of this modification is based on Proposition 3.7, proved in the following section.
Some useful results
The following algebraic result will be very useful.
Lemma 3.1. The space of the matrices endowed with the dot product (2) is the direct sum of the spaces of symmetric matrices and anti-symmetric matrices. More precisely, for any matrices X, Y ∈ R 3×3 , we have with the notation introduced in (1).
Proof. The coefficients of the matrices X s and X a are equal to x s ij = Moreover, we have Before proceeding let us recall two basic theorems on convex analysis. Then, for each y ∈ R n there exists the one-sided directional derivative of f at x with respect to the vector y, i.e.
In fact, the difference quotient (f (x + λy) − f (x))/λ is a non-decreasing function of λ > 0, so that For any function f : R n → R m positively homogeneous of order 1, we have Indeed, in this case one has Let us remember the concept of sub-differential.
The set of all sub-gradients of f at x is called sub-differential of f at x and is denoted by ∂f (x).
Let us also remark the following fact.
Remark 3.6. Let ‖·‖ p denote the l p norm in R n . Then the l p norm is decreasing with respect to p ∈ [1, ∞]. This is easy to prove; a standard computation is sketched below. The following results are used in a crucial way in the proof of our Theorem 4.1.
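A standard way to verify the monotonicity claimed in Remark 3.6, assuming x ≠ 0 and 1 ≤ p ≤ q, is the following normalisation argument (our sketch, not the original displayed computation):

```latex
\[
\left\| \frac{x}{\|x\|_{p}} \right\|_{q}^{q}
= \sum_{i=1}^{n} \left( \frac{|x_i|}{\|x\|_{p}} \right)^{q}
\le \sum_{i=1}^{n} \left( \frac{|x_i|}{\|x\|_{p}} \right)^{p}
= 1,
\qquad \text{hence } \|x\|_{q} \le \|x\|_{p} \quad (1 \le p \le q).
\]
```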
Proposition 3.7. For given p 2 let us introduce the potential for any matrix X ∈ R 3×3 . The potential V is convex, differentiable at any is the closed ball of a radius r at the center 0 and To see that V is convex, we first notice that the functions are convex since they are a composition of the convex function t ∈ R → |t| p 2 with quadratic functions X → |X s | 2 , X → |X a | 2 . In addition, we can check that the function W is convex using that any norm is convex and the fact that is also a norm. Thus, V is convex because it is a linear combination of convex functions. The function V is differentiable at any X ∈ R 3×3 \{0} due to the chain rule, and we can compute DV (X) differentiating directly U and W with respect to the standard variables x ij in R 3×3 , or, alternatively, using the chain rule, one has Now, the function U is differentiable also at X = 0 and DU (0) = 0. Then, the sub-differential ∂V (0) is equal to τ * ∂W (0). Let us show the items (a) and (b) in the statement of the Proposition).
By Remark 3.3,
Then, by Theorem 3.5, Taking in the inequality Y = S and using the Remark 3.6, we obtain of item (a). Let us consider arbitrary matrix S ∈ R 3×3 satisfying the property Then, by the Cauchy-Schwarz inequality, we have also By Theorem 3.5 we conclude that any S ∈ R 3×3 , satisfying the property (15), belongs to ∂W (0). On the other hand, by the positive homogeneity of W the property (15) is equivalent to Let us demonstrate that this minimum is equal to r p , that gives the claim (14). the exact value of r p is defined in the statement of item (a) in the Proposition.
By a straightforward computation, we obtain that the minimum of the function is (r p ) p , and it is attained at t * = 1/(1 + ν 4 p−2 ). Thus, we have proved the claim (a).
To show claim (b) we follow a similar reasoning as above, and with the help of the Hölder inequality. Let S ∈ ∂W (0). Then, by Theorem 3.5, we have If we take in this inequality Y, having This implies that then, for all Y ∈ R 3×3 , using the Hölder inequality, we have Then, again by Theorem 3.5, we obtain that S ∈ ∂W (0). Now we show the auxiliary result that explains the definition of τ * by (10) in (9).
Proof. As the derivative of the functional W (X) has been calculated in (12) and equals to Now we claim that DW (B) ∈ ∂W (0), and thus the estimate (16) shall follow from this fact, by (a) of Proposition 3.7, since Accounting (13) we have to show the claim By Theorem 3.2 we have that the directional derivative In addition, W is a norm by Remark 3.6, then Summing up, we obtain our above claim.
Remark 3.9. By Proposition 3.7, the relation (9) is equivalent to a variational inequality. Next we present two technical results we shall use later.
Lemma 3.10. Let W = W (X) be a positive convex function of X ∈ R 3×3 . Then, for any natural n, the approximated function W n is also convex with respect to the parameter X ∈ R 3×3 .
Proof. Note that the function ϕ(z) = (z p + n −1 ) 1/p is monotone increasing and convex for z ≥ 0. Therefore, applying the definition of a convex function, we easily derive that the composition W n (X) = ϕ(W (X)) is also convex with respect to the parameter X ∈ R 3×3 . Lemma 3.11. Let n be an arbitrary natural number. We consider the convex potential V n , defined for arbitrary X ∈ R 3×3 . Then, for any given B ∈ R 3×3 , we have ∂V n /∂X (B) = S n , i.e.
Statement of the problem
Let us consider the motion of an asymmetric Bingham fluid, assuming that the fluid is incompressible. For simplicity of considerations we admit that the density ρ is equal to 1 and neglect the mass force vector f . Then the flow equations (6) for the velocity v in a bounded domain Ω ⊂ R 3 with the boundary Γ are where T = −p I + S and S satisfies the constitutive law (9) with the relations (3), (4), (8). We add to this system the initial data The system (17) is mostly supplemented with the usual Dirichlet boundary condition. The Dirichlet condition implies the adherence of fluid particles to the boundary. For the motion of Bingham fluids (such as the extrusion of the toothpaste from the tube, the mud flows in drilling engineering, the propane flow within the hydro-fracture, etc.) it is more natural to study the system (17) where α is a positive friction coefficient. For the discussion of the Navier slip boundary conditions we refer to the articles [4]- [6].
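For readability, a plausible reconstruction of the system (17), the initial data (18), and the Navier slip boundary conditions (19), consistent with the approximate problem written out explicitly in Section 5, is:

```latex
\[
\begin{aligned}
&v_t + (v\cdot\nabla)\, v = \operatorname{div} T, \qquad \operatorname{div} v = 0 && \text{in } \Omega_T = (0,T)\times\Omega,\\
&v\cdot n = 0, \qquad [\,T\, n + \alpha v\,]\cdot \tau = 0 && \text{on } \Gamma_T = (0,T)\times\Gamma,\\
&v|_{t=0} = v_0 && \text{in } \Omega ,
\end{aligned}
\]
```

where T = −p I + S and α > 0 is the friction coefficient mentioned above.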
Let us introduce some notations to formulate our main result. We denote by (·, ·) the dot product in L 2 (Ω). Also we define the spaces The space V p is endowed with the norm v Vp = v L 2 (Ω) + ∇v L p (Ω) . The main objective of our article is to show the well-posedness of the system (17)- (19) for unknown functions v and S. This result of the wellposedness is formulated in the following theorem, in which we also define the concept of the weak solution for the system (17)- (19). This concept is a direct consequence of the equations (17), the boundary conditions (19) and the integral equality which is valid for any (3 × 3)-matrix function T ∈ H 1 (Ω) and any 3D-vector function ϕ ∈ H 1 (Ω).
Then there exists a function v and a (3 and the pair (v, S) is a weak solution of the system (17)- (19), satisfying the integral equality The (
Construction of the approximation problem
In this section we consider an approximated problem for the system (17)- (19) and solve this approximated problem applying the Faedo-Galerkin method and the Schauder fixed point argument (see for instance [1]).
Since the space V p is separable, it is the span of a countable set of linearly independent functions {e k } ∞ k=1 . More precisely, we can choose this set as the eigenfunctions for the following non-linear Stokes type equations with Navier boundary conditions: in Ω, e k · n = 0, [T(e k ) n + αe k ] · τ = 0 on Γ with T(e k ) = −p k I + |∇e k | p−2 ∇e k . The solvability of this problem follows from the spectral theory [10]. This theory permits to construct this set {e k } ∞ k=1 as an orthogonal basis for V p and an orthonormal basis for H. We can consider the subspace V n p = span {e 1 , . . . , e n } of V p , for any fixed natural n. Let us define the vector function v n (t) = n k=1 c (n) as the solution of the approximate system (27) The (3 × 3)−matrix functions T n , S n are prescribed by the relations and the matrix functions B n = B(v n ), B n µ , B n ν , B n ν 2 are calculated through the formulas (3), (4), (8). The function v n 0 is the orthogonal projection of v 0 ∈ H into the space V n p . Note that the system (26)-(28) is a weak formulation of the problem v n t + (v n ·∇) v n = div T n , div v n = 0, in Ω T , v n · n = 0, [T n n + αv n ] · τ = 0 on Γ T , v n | t=0 = v n 0 in Ω.
Next we prove that the approximated problem (27) is solvable.
Lemma 5.1. Let us assume that the data v 0 , α satisfy the conditions (22). Then there exists a solution v n ∈ L ∞ (0, T ; V p ) of the system (26)-(28), satisfying the a priori estimate (29), where A is a constant independent of n. More precisely, A depends only on the data v 0 , µ i , ν.
Proof. The system (27) is a system of n ordinary differential equations of the first order, which can be written in the form for the vector function c (n) (t) = (c Let c (n) be an unknown, such that the vector function and the matrix functions B n = B(v n ), B n µ , B n ν , B n ν 2 given by the formulas (3), (4) and (8). From the theory of ordinary differential equations, it follows that the linear system (30)-(31), of n ordinary linear differential equations, has an unique solution c (n) ∈ C 1 ([0, T ]). Therefore, we can consider the operator K : C([0, T ]) → C([0, T ]) defined as The solvability of the system (26)-(28) will be shown if we demonstrate that this operator K has a fixed point, which we shall do by the Schauder fixed point theorem. Thus, we have to prove that this operator is compact on a bounded convex subset M of C([0, T ]).
First, let us deduce a priori estimates for c (n) . We multiply (31) 1 by c (n) k and take the sum on the index k = 1, ..., n. Then the integration over the time interval (0, t) gives ∂v n ∂x = 2µ 1 |B n s | p + µ 2 |B n a | p , B n ν 2 : ∂v n ∂x = |B n s | p + ν 2 |B n a | p and S n : ∂v n ∂x = 2µ 1 |B n s | p + µ 2 |B n a | p + τ * ( c (n) ) |B n s | p + ν 2 |B n a | p .
Therefore, we deduce the inequality which gives the apriori estimate (29). Also, we have and using (34), we obtain where the constant C(n) depends only on n. Let us assume that c (n) ∈ M . Since {e j } ∞ j=1 is the orthonormal basis for the space H, then the equality (31) can be written as for any k = 1, . . . , n. Since V p ⊂ H 1 (Ω), then the Sobolev continuous embedding H 1 (Ω) ֒→ L 6 (Ω) ∩ L 2 (Γ) and the Holder inequality imply Let us recall the Gagliardo-Nirenberg-Sobolev inequality (see [23]) Therefore, the integration of the inequality (37) for each k = 1, . . . , n. Therefore the operator K : M → M is compact by the Arzela-Ascoli theorem. The continuity of K is a direct consequence of the theorem on continuous dependence of the solution of the Cauchy problem (30)-(31) with respect to the coefficients c (n) . Therefore, the operator K fulfils the conditions of the Schauder fixed point theorem, which implies the existence of a fixed point of K, and gives the solution of the system (26)-(28).
Lemma 5.2. We assume that the data v 0 , α fulfil the conditions (22). Then there exists a solution v n ∈ L ∞ (0, T ; H) ∩ L 2 (0, T ; V p ) of the system (26)-(28) that satisfies the following estimates and Here and below C is a positive constant that does not depend on n, but may depend on v 0 and α.
Let us consider the subspace V n p of V p defined in Lemma 5.1. Let P n be the orthogonal projection of V p onto V n p . Let ϕ ∈ H 1 (0, T ; V p ) be an arbitrary function. The first equality of (27) is linear with respect to the functions e k , k = 1, ..., n; then we have, since {e j } ∞ j=1 is the orthogonal basis for the space V p . As was done in (36), we obtain, since the norm ϕ Vp = ϕ L 2 (Ω) + ϕ L p (Ω) of the continuous operator P n on the space V p is less than or equal to 1. Therefore this last inequality (43), the a priori estimates (39)-(40) and the Gagliardo-Nirenberg-Sobolev inequality (38), written for v n , imply the estimate (41).
Lemma 6.1. Let X 0 , X and X 1 be three Banach spaces with X 0 ⊆ X ⊆ X 1 .
Suppose that X 0 , X 1 are reflexive, X 0 is compactly embedded in X and that X is continuously embedded in X 1 . Let Then the embedding of W into L 2 (0, T ; X) is compact.
Hence applying the convergences (44)-(45) in (42), we deduce that the limit functions v, S fulfil the integral equality In what follows we use the approach of the theory of variational inequalities [8], [10] to demonstrate the relation (9), that ends the proof of Theorem 4.1. For a fixed natural n we consider the convex potential introduced in Lemma 3.11. By this Lemma we have and V n (X) − V n (B n ) S n : (X − B n ) a.e. in Ω T , ∀ X ∈ R 3×3 .
are equivalent. Let us denote Ω r = (0, r) × Ω for arbitrary r ∈ (0, T ), the equality (33) can be written as (49) If we substitute (49) in (48), then the lower semi-continuity property of convex functional with respect of weak convergence gives by use the convergences (44). Hence for any matrix function X ∈ L 2 (Ω T ) we have the inequality Ωr S : X dxdt Let us take ϕ = v(1 − sgn ε + (t − r)) in the equality (24) for a fixed r ∈ (0, T ), where In the obtained equality the limit transition on ε → 0 implies Since the matrix function X ∈ L 2 (Ω T ) is arbitrary, we can choose in this inequality X = B + εZ for any positive ε and any matrix function Z∈ L 2 (Ω T ), which gives lim ε→0 + Ωr Ωr S : Z dxdt.
Now we are able to prove the uniqueness result. Let us denote the difference of two functions f 1 and f 2 by f, i.e. f = f 1 − f 2 . Let us admit the existence of two different solutions v 1 , v 2 with respective tensors S 1 , S 2 , satisfying the relation (9). By (24) the difference v fulfils the equality for any ϕ ∈ H 1 (0, T ; V p ), such that ϕ(·, T ) = 0 in Ω. It is easy to check that there exists a constant C > 0, depending only on µ 1 , µ 2 and p, such that C(|B s | p + |B a | p ) ≤ B µ : ∂v/∂x ≤ S : ∂v/∂x, being a consequence of the monotonicity of the second term in the relation (9) and the inequality (1.25) 1 of Lemma 1.19, shown in [12]. Let us fix an arbitrary r ∈ (0, T ) and take ϕ = v(1 − sgn ε + (t − r)) in (55). Then we pass to the limit ε → 0 in the obtained equality. In the sequel we follow the ideas presented in Theorem 3.2 of [23] and Theorem 4.29 of [12]. By Hölder's inequality and Lemma 6.3, the right-hand side of (56) is estimated, with the ε-version of Young's inequality used in the last step. Hence, taking ε = C, we obtain the desired estimate. Therefore, if p ≥ (5p − 6)/(5p − 9), that is p ≥ (7 + √19)/5 ≈ 2.272, then applying the Gronwall inequality we obtain z(t) = 0 a.e. in (0, T ), and we deduce the global-in-time uniqueness result.
"year": 2018,
"sha1": "6e226f8f9c672696e71b7151d1bd274677f52fb2",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "6e226f8f9c672696e71b7151d1bd274677f52fb2",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
11838105 | pes2o/s2orc | v3-fos-license | Arrhythmogenicity of the Coronary Sinus
The coronary sinus (CS) is the cardiac venous system that begins at its ostium in the right atrium and ends at the origin of the great cardiac vein. The major tributaries of the CS include the great cardiac vein (anterior cardiac vein), the left obtuse marginal vein, the posterior (or inferior) left ventricular vein, the middle cardiac vein, and the right coronary vein. In addition, atrial veins and, notably, the vein of Marshall (or oblique left atrial vein) also enter the coronary sinus. From the perspective of electrophysiologists, the CS represents an anatomical structure of particular interest. First, it provides access to epicardial atrioventricular pathways and arrhythmogenic foci of both atrial6 and ventricular arrhythmia. Second, it represents by itself a potential source of atrial arrhythmia. The arrhythmogenic potential of the thoracic veins in general has been recognised since the 1970s. Atrial arrhythmias can originate in the pulmonary veins, the superior vena cava, and the CS. Indeed, biatrial flutter,left atrial tachycardia, and atrial fibrillation, involving the distal CS have been well described. There is now evidence that the CS apart from participating in arrhythmia circuits, such as in the slow-slow form of atrioventricular nodal reentrant tachycardia and atrioventricular reentrant tachycardia due to accessory pathways,may itself be a source of apparently atrial arrhythmia. In patients with paroxysmal atrial fibrillation apparently originating from the left superior or inferior pulmonary vein, detailed epicardial mapping through the distal coronary sinus might identify epicardial location of the arrhythmogenic focus. Therefore, the search for foci of abnormal automaticity within the CS should be part of the electrophysiologic evaluation of left atrial arrhythmias.
Introduction
The coronary sinus (CS) is the cardiac venous system that begins at its ostium in the right atrium and ends at the origin of the great cardiac vein. 1,2,3 The major tributaries of the CS include the great cardiac vein (anterior cardiac vein), the left obtuse marginal vein, the posterior (or inferior) left ventricular vein, the middle cardiac vein, and the right coronary vein. In addition, atrial veins and, notably, the vein of Marshall (or oblique left atrial vein) also enter the coronary sinus. 1,2,3 From the perspective of electrophysiologists, the CS represents an anatomical structure of particular interest. First, it provides access to epicardial atrioventricular pathways. 4,5 and arrhythmogenic foci of both atrial 6 and ventricular arrhythmia. 7,8 Second, it represents by itself a potential source of atrial arrhythmia. 9,10,11,12,13,14 The arrhythmogenic potential of the thoracic veins in general has been recognised since the 1970s. 15,16 Atrial arrhythmias can originate in the pulmonary veins, 17 the superior vena cava, 18 and the CS. Indeed, biatrial flutter, 19,20 left atrial tachycardia, 9,10 and atrial fibrillation 12,14 involving the distal CS have been well described. There is now evidence that the CS apart from participating in arrhythmia circuits, such as in the slow-slow form of atrioventricular nodal reentrant tachycardia 21 and atrioventricular reentrant tachycardia due to accessory pathways, 22,23,24,25,26 may itself be a source of apparently atrial arrhythmia. In patients with paroxysmal atrial fibrillation apparently originating from the left superior or inferior pulmonary vein, detailed epicardial mapping through the distal coronary sinus might identify epicardial location of the arrhythmogenic focus. 6 Therefore, the search for foci of abnormal automaticity within the CS should be part of the electrophysiologic evaluation of left atrial arrhythmias. 13
The Ligament of Marshall
A potential source of arrhythmia at the CS is the area of the ligament of Marshall. The ligament of Marshall is a left atrial epicardial neuromuscular bundle that has been associated with the genesis of atrial tachyarrhythmias and AF. 27,28 In 1850, at The Royal Society in London, J. Marshall presented a description of a "vestigial fold of the pericardium" in the back of the auricle, running from the region of the left superior pulmonary vein to the coronary sinus. 2 During embryonic cardiac development the tributary veins of the left sinus horn are successively obliterated so that, at the 10th week, the distal part of the left sinus horn remains as the oblique vein of Marshall and the remaining proximal horn becomes the coronary sinus. 2, 29 Occasionally, a left superior caval (cardinal) vein persists and drains to the coronary sinus. Normally, there is only a remnant of the left superior vena cava that persists as a fibrous cord coursing along the left side of the parietal pericardium, overlying the left pulmonary artery and joining the heart at the roof of the left atrium. This fibrous structure is readily recognised during cardiac operations and is known as the ligament of Marshall. 30 The ligament courses in the AV groove at the base of the left atrial appendage and leads to the earliest tributaries of the coronary sinus. This transition from a ligamentous structure to a vein occurs in the region between the left superior pulmonary vein and the base of the left atrial appendage. This is the corresponding atrial endocardial location for the tip of an electrophysiology catheter advanced to the "wedge" position as far as distal as possible in the coronary sinus, and, consequently, to the origin of the ligament of Marshall (Fig. 1). The ability of this structure to produce automatic activity during sympathetic stimulation was described more than a century later than its initial description. Doshi et al, 28 in an animal study, clearly demonstrated that the ligament of Marshall is rich in sympathetic innervation and isoproterenol infusion may induce automatic activity from it; this sensitivity to isoproterenol is upregulated following long-term rapid pacing and may contribute to development of AF. Thus, this area may not only act as a trigger of AF paroxysms but may also result in sufficient electrical remodelling of the atria and persistent AF.
We and others 12,31 have demonstrated that electrical activity of ligament of Marshall tissue can be identified in the human by epicardial or endocardial recording of discrete potentials, and that these potentials can be selectively abolished by catheter ablation (Fig. 2). Our group has also shown that combined epicardial and endocardial ablation through the coronary sinus and the left atrial endocardium is necessary in order to abolish the electrical activity of the presumed extracardiac and intracardiac components of this tissue. Abolition of this activity may significantly reduce the frequency of paroxysms of adrenergic AF and confer considerable symptomatic improvement. Kim et al, 32 based on anatomical observations on postmortem human hearts, have recently suggested that the complex pattern of ligament of Marshall myocardial tract insertions into the left atrial free wall may necessitate an endocardial ablation approach with energy delivery to the lower part of the left atrium. Our observations to date indicate that although such an approach should be part of the ablation procedure, it may not be enough without ablation of the extracardiac component. This is in keeping with observations from the surgical Maze where total isolation of the pulmonary veins has not been found sufficient to prevent AF unless coronary sinus cryoablation is accomplished. 33 According to our experience, cannulation of the area of the ligament of Marshall is not always possible with conventional ablation catheters. Our method requires full engagement of the distal coronary sinus with the ablation catheter and this is usually possible in approximately half of the cases. Furthermore, delivery of radiofrequency current in this area is not always feasible due to impedance rises in the case of a wedged electrode. Thus, the epicardial approach may not be applicable to a considerable proportion of patients.
The Role of Coronary Sinus Musculature
Arrhythmias originating within the CS or cardiac veins have also been attributed to atrial musculature extending into these structures. Striated myocardial connections between the venous wall of the CS and the left atrium have been described both in animal and human necropsy studies. 34,35 The myocardial sleeve around the coronary sinus is composed of bands of muscle from the left atrial wall as well as from the right atrial wall. 36 The sleeve usually does not extend to other veins, although occasionally it may cover the adjacent 2 to 10 mm of the great cardiac vein. 37 Myocardial connections between left atrium, pulmonary vein and CS musculature can be identified epicardially, through the CS or following a pericardiocentesis, by recording double potentials or fractionated electrograms indicative of delayed conduction. 11,38,39 We have recently shown that recording of double potentials is possible within the CS, particularly at its distal, superoposterior part, near the left superior pulmonary vein (Fig. 3). Their prevalence is higher in patients with PAF than in subjects with other or no arrhythmias and their presence denotes possible sources or substrate for atrial arrhythmia. 11 The demonstration of double potentials with pacing-dependent interpotential delay clearly suggests the possibility of conduction delay at the distal CS-LA connection. This supports the reported association of this area with the initiation of paroxysms of AF as well as macroreentrant arrhythmias. Delayed conduction at this particular site might provide the substrate for local micro-reentry or serve as a component of a macro-reentrant circuit of biatrial flutter or atrial tachycardia. 9,10,19 Our results also showed that double or fractionated potentials within the CS, and in particular at the distal area, may be recorded in the human regardless of a clinical history of PAF. Thus, these potentials are not totally specific for PAF; their prevalence, however, is significantly higher in patients with PAF than in patients with other or no arrhythmias. 11 If the multicomponent electrogram pattern as well as the potential arrhythmogenicity of the CS are due to myocardial connections, recording of double or fractionated potentials should not be possible in venous structures such as the great cardiac vein and middle cardiac vein that lack a myocardial sleeve. We studied this hypothesis in our laboratory by subjecting 20 patients to catheter mapping of the CS, the middle cardiac vein, and the great cardiac vein. At conventional mapping during sinus rhythm and high right atrial pacing, discrete double potentials or fractionated electrograms were recorded during left, right atrial and CS pacing at the CS ostium, mid-CS, and distal CS-ligament of Marshall area, in 2 (10%), 1 (5%), and 9 (45%) patients, respectively, whereas no patient displayed such signals in the cardiac veins (p<0.001). The pattern of circumferential muscle activation within the proximal CS was also studied with a circular mapping catheter (Lasso 12 mm). Proximal CS mapping with the Lasso was accomplished in 10 patients, 7 of whom had no evidence of multicomponent potentials in the CS at conventional mapping. Specific CS potentials dissociated from the atrial electrograms were recorded in all patients with the use of circumferential mapping. The perimetric distribution of electrograms within the CS suggested an oblique course of conduction across the CS musculature (Fig. 4).
It seems therefore that employment of more sensitive mapping techniques such as perimetric mapping, can disclose the universal existence of muscular activation around the CS in all patients tested. In addition, the pattern of activation is similar to this described by Haissaguerre et al 17 in the pulmonary vein and by Goya et al 41 in the SVC, probably reflecting the peculiar anatomy of myocardial extensions surrounding venous structures in the heart. According to von Ludinghausen's 42,37 descriptions of the CS muscle coat as viewed from the epicardial side of the heart, the CS musculature courses across the AV groove in an oblique way. Thus an oblique rather than circular pattern of conduction should be responsible for the observed distribution of electrograms. We can speculate, therefore, that what we recorded is a combination of longitudinal conduction along the CS axis through the adjacent atrial endocardium with oblique conduction of CS musculature across the CS.
Extensive mapping of the distal, superoposterior part of the CS is technically difficult and at times impossible. Interestingly, despite conventional beliefs, in adults, venous luminal diameter is not a cause of obstruction to the passage of 6 or 7F catheters in the distal coronary sinus. 43 Our results, in keeping with post-mortem observations, 42 support this view. It has been shown, both in cadaveric hearts and in clinical studies, that in the majority of the cases a first obstacle is the presence of the valve of Vieussens, and, once this has been negotiated, half of the attempts fail due to acute bending of the great cardiac vein. We have been systematically trying to map the great cardiac vein in all our electrophysiology cases the last 3 years and, despite the fact that the tributaries of the coronary sinus and of the anterior cardiac veins are very variable, 42,44 our success rate now approaches 40%.
In conclusion, the discussed observations provide the rationale for the reported arrhythmogenicity of the CS itself. They support the view that atrial myocardial extensions into cardiac venous structures provide the substrate for potential arrhythmogenicity. In addition, they suggest that detailed mapping of venous structures such as the CS should be considered when assessing patients with tachyarrhythmias of apparently left atrial origin. | 2014-10-01T00:00:00.000Z | 2004-10-01T00:00:00.000 | {
"year": 2004,
"sha1": "d95f5e87fd9858001e7257cc6c49ac0f710b1f97",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "d95f5e87fd9858001e7257cc6c49ac0f710b1f97",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
215826717 | pes2o/s2orc | v3-fos-license | Paired Comparisons-based Interactive Differential Evolution
We propose Interactive Differential Evolution (IDE) based on paired comparisons for reducing user fatigue and evaluate its convergence speed in comparison with Interactive Genetic Algorithms (IGA) and tournament IGA. User interface and convergence performance are two big keys for reducing Interactive Evolutionary Computation (IEC) user fatigue. Unlike IGA and conventional IDE, users of the proposed IDE and tournament IGA do not need to compare whole individuals each other but compare pairs of individuals, which largely decreases user fatigue. In this paper, we design a pseudo-IEC user and evaluate another factor, IEC convergence performance, using IEC simulators and show that our proposed IDE converges significantly faster than IGA and tournament IGA, i.e. our proposed one is superior to others from both user interface and convergence performance points of view.
INTRODUCTION
There are many optimization tasks that it is not easy or almost impossible to design scales for evaluating target systems quantitatively but that we can evaluate subjectively. Some of these tasks include, for example, drawing montage based on witness's memory, fitting a hearing-aid to get satisfactory sounds, designing cute or lovely motions of home robots. Interactive Evolutionary Computation (IEC) has been applied to these tasks in a wide variety of application areas [1].
The biggest drawback of IEC is user fatigue due to human cooperation with a tireless computer. The IEC user has to evaluate generated individuals, which makes the user bored and tired. The population size and number of generations are limited because of this fatigue; 10-20 individuals and 10-20 generations are frequently used in IEC, but these are far fewer than in normal EC search and result in slower convergence. The slower convergence is another factor contributing to IEC user fatigue.
Several trials have been done to solve the fatigue problem [1]. Improving IEC user interface is one of them. Some of them are: improving display interface to help user to compare individuals easily, improving input interface by reducing evaluation levels, for example 5 evaluation levels rather than 100 ones, and data visualization by projecting the distribution of individuals in the n-D searching space onto 2-D or 3-D space, and others . Predicting IEC user's evaluation using trained prediction models is other trial. The model is used as a fitness function of normal EC search and is combined with IEC to accelerate IEC search. To train these models using IEC user's evaluations in past generations, a distance-based model [1], genetic programming [2], neural networks [1], and Support Vector Machine [3,4] has been used. References [5,6] propose to use an eye-tracker to avoid the input of fitness values and even to avoid the user for selecting best individuals: it could be done by correctly interpreting cognitive store data; for instance, by considering time spent on evaluating an individual.
Another possible solution is to use tournament IEC [7], which is based only on paired comparisons rather than comparing all individuals. With this technique it is easier for an IEC user to evaluate a pair of individuals than to compare and evaluate all individuals at once. However, a drawback is that it provides less information for assigning fitness values to all individuals, because not all individuals are compared; this means the fitness includes more noise and may result in slow convergence. Simulated breeding is an IEC method in which the IEC user just chooses the better individuals among all of them, and one-click selection with a pointing device is an easy IEC user interface. Although it compares all individuals, unlike tournament IEC, the 1-bit evaluation includes more quantization noise in fitness than other evaluation schemes and may make IEC convergence slower [8].
Introducing new type of EC and accelerating IEC convergence are other approaches. Differential Evolution (DE) [9] is an optimization technique come to be frequently used in this decade. DE has two possible advantages that completely fall into IEC conditions: first one is the use of comparisons between only two individuals (paired comparisons) and second one is its potential of faster convergence. The objective of this paper is to evaluate Interactive DE (IDE) [10][11][12] in comparison with conventional IEC approaches and show its potential.
Unfortunately, IDE in the references [10][11][12] did not use the first feature of paired comparisons. Particle Swarm Optimization (PSO) can be used as an EC part in IEC. As PSO is sensitive to quantization noise in IEC fitness, better performance of Interactive PSO than Interactive Genetic Algorithms (IGA: Interactive GA) is achieved by combining some methods reducing the effect of the quantization noise in fitness with Interactive PSO [8].
The objective of this paper is to show, through IEC simulation, that our proposed paired comparison-based IDE has better convergence performance than conventional IEC algorithms. Since the user-interface advantage of paired comparisons over the many comparisons required by conventional IGA is obvious, the proposed IDE is the best option if it is faster than tournament IGA or faster than or equal to IGA.
We explain EC algorithms used in our study including GA and DE in section II and how individuals are evaluated in interactive frameworks in section III. Section IV evaluates how IDE converges in comparison of three conventional IEC approaches.
II. EC ALGORITHMS
We compare four EC algorithms (DE, Genetic Algorithm (GA), tournament1-GA, and tournament2-GA) with/without an IEC framework. Let us first present what is tournament-GA and next what is DE.
A. Tournament Genetic Algorithms (TGA)
Reference [13] was the first to propose competitive fitness, which does not use absolute values of a fitness function but relative evaluation. Reference [7] was the first to apply tournament fitness to IEC. It also proposed a tournament IEC that uses not only which of a paired set of individuals is better but also how much better, when computing the final fitness values.
The tournament1-GA is a GA in that individuals are evaluated thanks to competitive fitness called singleelimination tournament in [13,14]. Individuals are paired at random, and play one game per pair. Losers of games are eliminated from the tournament… This process continues until the tournament has only one champion left. The fitness of an individual is the number of games played. In the interactive case, i.e. tournament1-IGA, IEC user just chooses one of two displayed individuals.
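A minimal sketch of how such single-elimination (tournament1) fitness could be computed is given below. The function name and the `beats` callback are illustrative choices of ours, not taken from the paper; in tournament1-IGA the callback is simply the user's choice between the two displayed individuals, and the population size is assumed to be a power of two, as noted later in Section IV.A.

```python
import random

def tournament1_fitness(pop, beats):
    """Single-elimination tournament fitness (competitive fitness).

    pop       : list of individuals (length should be a power of two)
    beats(a,b): returns True if individual a wins the game against b;
                in tournament1-IGA this is simply the user's choice.
    Returns a list of fitness values: the number of games each individual played.
    """
    games_played = [0] * len(pop)
    indices = list(range(len(pop)))
    random.shuffle(indices)                    # individuals are paired at random
    while len(indices) > 1:
        winners = []
        for a, b in zip(indices[0::2], indices[1::2]):
            games_played[a] += 1
            games_played[b] += 1
            winners.append(a if beats(pop[a], pop[b]) else b)
        indices = winners                      # losers are eliminated each round
    return games_played                        # champion played the most games
```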
The tournament2-GA is a GA in which individuals are evaluated thanks to another competitive fitness based on the single-elimination tournament, first proposed in [7]. The fitness is computed based not only on the number of games played but also on how far apart the paired individuals are. For instance, we start by giving a fitness of 10 to the champion of the tournament. The individual which has lost against the champion is given the champion's fitness, i.e. 10, minus the difference between it and the champion, and so on for all individuals. In the interactive case, the difference between both individuals is supposed to be given by the user, and the fitness values are also made discrete in n evaluation levels.
An individual that has fought against the tournament's champion in the first game, i.e. it lost the tournament at the first game, it will have a better fitness in tournament2 than in tournament1.
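A corresponding sketch of the tournament2 bookkeeping is shown below; the data structures (`bracket`, `diff`) are illustrative assumptions rather than details from the paper. Starting from a champion fitness of 10, each loser receives its conqueror's fitness minus the user-supplied difference between the two individuals.

```python
def tournament2_fitness(bracket, diff, champion_fitness=10):
    """Competitive fitness where each loser inherits its conqueror's fitness
    minus the user-given difference between the two individuals.

    bracket : list of (winner_idx, loser_idx) games, ordered from the final
              game back to the first round of the single-elimination tournament
    diff    : dict mapping (winner_idx, loser_idx) -> how much better the
              winner was judged to be (given by the user in tournament2-IGA)
    """
    fitness = {bracket[0][0]: champion_fitness}   # the overall champion
    for winner, loser in bracket:                 # winners are always scored first
        fitness[loser] = fitness[winner] - diff[(winner, loser)]
    return fitness
```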
B. Differential Evolution (DE)
The point is that comparing two vectors is the only evaluation DE requires. We believe that this paired comparison is the big advantage of DE for use in IEC. However, the work in [10][11][12], which seems to be the first work on Interactive DE, did not use this big potential to reduce IEC user fatigue and asked the IEC user to choose the better individuals among all eight displayed individuals.
DE is a population-based, stochastic, continuous function optimizer [9] in which distance and direction information from the current population is used to guide the search process [15]. DE is known to be able to handle non-differentiable, nonlinear and multimodal cost functions, to be parallelizable to cope with computation-intensive cost functions, to be easy to use, and to be well suited for rapid convergence, i.e. consistent convergence to the global minimum in consecutive independent trials.
Basically, for each individual of the population (parent or target vector), first generate a mutant vector by adding weighted difference (difference vector) between two randomly chosen vectors (parameter vectors) to the third chosen vector (base vector). Secondly, the trial vector is obtained from the mutant vector and the target vector using binomial or exponential crossover. Finally, target vector is replaced with a better vector of either of the trial one or the target one. There are some variations in how to determine the base vector. More details could be obtained in [9,15].
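A rough sketch of one generation of DE/best/1/bin driven purely by paired comparisons, as in the proposed IDE, is given below. The parameter values F and CR, the `prefer` callback, and the bookkeeping of the current best individual are illustrative assumptions, not details taken from the paper; in IDE the callback is the user's choice, while in the simulation it would be the pseudo-IEC user.

```python
import random

def de_best_1_bin_step(pop, best_idx, prefer, F=0.8, CR=0.9):
    """One DE/best/1/bin generation in which the only evaluation is a
    paired comparison between the trial vector and the target vector.

    pop        : list of parameter vectors (lists of floats), len(pop) >= 4
    best_idx   : index of the current best individual (tracked by the caller)
    prefer(a,b): True if the user (or pseudo-user) prefers trial a over target b
    """
    dim = len(pop[0])
    best = pop[best_idx]
    new_pop = []
    for i, target in enumerate(pop):
        # two distinct parameter vectors, different from the target vector
        r1, r2 = random.sample([k for k in range(len(pop)) if k != i], 2)
        # mutation: base (best) vector plus weighted difference vector
        mutant = [best[j] + F * (pop[r1][j] - pop[r2][j]) for j in range(dim)]
        # binomial crossover between target and mutant
        j_rand = random.randrange(dim)
        trial = [mutant[j] if (random.random() < CR or j == j_rand) else target[j]
                 for j in range(dim)]
        # paired comparison: keep whichever of trial/target is preferred
        new_pop.append(trial if prefer(trial, target) else target)
    return new_pop
```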
A. Pseudo-IEC User
A human cannot conduct thousands of evaluations under exactly the same conditions, so it is unreliable to evaluate the convergence of IDE by comparing it with that of conventional IEC methods using human subjects. We should therefore evaluate them with an IEC simulation by designing a pseudo-IEC user, even if we evaluate our proposed paired comparison-based IDE with a human IEC user later [8].
There are three IEC features that we must realize in the pseudo-IEC user: (1) evaluation characteristics with less complexity, (2) relative fitness in each generation, and (3) discrete fitness in n-evaluation levels. Furthermore, the evaluation characteristics of the pseudo-IEC user should be controlled parametrically.
We realized (1) using a Gaussian Mixture Model described in section III.B. (2) is realized thanks to the competitive fitness function [7,13,14] explained in section II.A, and (3) is realized by dividing the range between the best and the worst fitness obtained from the Gaussian Mixture Model into n evaluation levels in each generation (see the sketch below).
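One way to implement feature (3), mapping the continuous Gaussian-mixture fitness of one generation onto n discrete evaluation levels, is sketched here; the binning and rounding convention is our assumption.

```python
def discretize_fitness(raw_fitness, n_levels=5):
    """Map raw (continuous) fitness values of one generation onto n discrete
    evaluation levels, emulating the coarse ratings an IEC user can give.
    The best-to-worst range of the current generation is split into n bins.
    """
    lo, hi = min(raw_fitness), max(raw_fitness)
    if hi == lo:                              # all individuals judged equal
        return [n_levels] * len(raw_fitness)
    width = (hi - lo) / n_levels
    levels = []
    for f in raw_fitness:
        level = int((f - lo) / width) + 1
        levels.append(min(level, n_levels))   # clamp the best value into level n
    return levels
```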
The reason why we use Gaussian Mixture Models consisting of four Gaussian functions is to emulate the evaluation landscape in the human mind. An IEC task may not be unimodal, but it is not complex either, because IEC users can reach satisfactory solutions with a small population size and few generations (see Figure 1).
B. Gaussian Mixture Model
Our evaluation tasks are four Gaussian Mixture Models of different dimensionality: 3-D, 5-D, 7-D, and 10-D. They are designed so that all their function characteristics are the same, which lets us control the experimental complexity by changing only the dimensionality. They are expressed as a weighted sum of k n-dimensional Gaussian functions, where k and n are the number of Gaussian functions and the dimensionality, respectively; k = 4 and n = 3, 5, 7, and 10. The models have been chosen so that the Gaussian Mixture Model landscape has four more or less overlapping peaks, to emulate human decision making, as shown in Figure 1. In such a case, the boundaries between evaluation classes are not precise.
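The formula itself did not survive extraction. A k-component, n-dimensional Gaussian mixture landscape is commonly written as below; the weights c_i, centres mu_ij and widths sigma_ij are placeholders, since the paper's concrete parameter values are not recoverable from this text.

```latex
% General form of a k-component, n-dimensional Gaussian mixture landscape;
% c_i, \mu_{ij} and \sigma_{ij} are assumed placeholders.
F(\mathbf{x}) = \sum_{i=1}^{k} c_i \exp\!\left(-\sum_{j=1}^{n} \frac{(x_j - \mu_{ij})^2}{2\sigma_{ij}^{2}}\right),
\qquad k = 4,\; n \in \{3, 5, 7, 10\}.
```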
A. Experimental conditions
The genotype is a vector of float values; each float value is represented as an array of 12 bits. The vector's dimension is equal to the Gaussian Mixture Model's dimension (3, 5, 7 or 10). The crossover operator is a multipoint crossover with a 100% rate, and the mutation rate is 5%. We use a tournament selection operator with 2 individuals per tournament. For all experiments, 100 runs of 100 generations are done. For the IEC framework, fitness is discretized into 5 evaluation levels (as explained in section III.A).
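For illustration, a possible decoding of the 12-bits-per-gene genotype into a float vector is sketched below; the search bounds lo and hi are assumptions, as the paper does not state them in this excerpt.

```python
def decode_genotype(bits, dim, n_bits=12, lo=0.0, hi=1.0):
    """Decode a binary genotype into a float vector (12 bits per gene sketch).

    `bits` is a flat list of 0/1 of length dim * n_bits; each 12-bit chunk is
    read as an unsigned integer and scaled linearly into [lo, hi]. The bounds
    lo/hi are illustrative assumptions, not values from the paper.
    """
    assert len(bits) == dim * n_bits
    values = []
    for g in range(dim):
        chunk = bits[g * n_bits:(g + 1) * n_bits]
        integer = int("".join(map(str, chunk)), 2)          # 0 .. 2**n_bits - 1
        values.append(lo + (hi - lo) * integer / (2 ** n_bits - 1))
    return values

# Example: a 3-D genotype needs 36 bits; all-ones decodes to the upper bound.
print(decode_genotype([1] * 36, dim=3))
```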
Our experiments are also conducted with two different population sizes, 16 and 128. The former corresponds to the population size a human IEC user would use, while the latter corresponds to normal EC search and serves as a reference for comparing convergence characteristics with IEC. These population sizes were also chosen because of the implementation of the single-elimination tournament (cf. section II.A); the size needs to be a power of two, otherwise some individuals will not have the same number of wins.
As mentioned in section II.B, different strategies exist for DE; we used the DE/best/1/bin algorithm, which is a standard one.
All following remarks are based on the best fitness and not on the average of fitness.
B. Comparison of DE and three other EC algorithms as References
In general:
• EC with a smaller population size is harder than EC with a larger population size,
• higher-dimensional tasks are more difficult,
• the tournament GA's use less information for selection than the normal GA, which selects better parents by comparing all individuals, so the normal GA generally converges faster than the tournament GA's, and
• from observations of EC research in this decade, DE seems to converge faster than the normal GA.
Figure 3 to Figure 10 show all of these general tendencies. This is why we believe that our four EC algorithms run correctly, even though we have not applied sign tests to these results.
C. Comparison of IDE and three other IEC algorithms
From the practical IEC point of view, we should note that practical generation numbers would be 10-20 (maybe at most 30) and a practical population size would be up to 20. However, the convergence curves up to 100 generations in the graphs are useful for observing the general characteristics of IEC.
From the convergence up to 20-30 generations in Figure 11 to Figure 14, which corresponds to practical IEC conditions, it seems that there are no significant differences among the 4 EC methods except for the simplest task, the 3-D Gaussian Mixture Model. We should apply a sign test or Wilcoxon signed-rank test to confirm whether this view is correct; we do so in the next section.
Comparison of Figure 11 to Figure 14 with Figure 15 to Figure 18 shows that:
• the normal IGA with a big population size works well in early generations, and
• IDE runs better in later generations, i.e. IDE is a slower starter than IGA.
Although we cannot use IEC with a big population size, so Figure 15 to Figure 18 are not realistic, analyzing why IDE is a slow starter may give us a hint that can be applied to practical IEC conditions of small population sizes and few generations, and so improve IDE under practical conditions.
D. Results
Unlike DE and the GA, the two tournament GA's use rank-order fitness, i.e. relative fitness. All of IDE, IGA, and the tournament IGA's use relative fitness. If we plot these relative fitness values over the generations, we of course cannot observe convergence, and normalization of the fitness does not solve this either. For making comparisons, we must therefore observe convergence in the search space using the absolute function values of the individuals evolved based on the relative fitness.
V. DISCUSSION
We statistically tested whether DE or IDE is significantly better than the others, or worse than the best, at each generation. Results are shown in Figure 2. Our observations are:
• DE is always the best, or at least not significantly poorer than the others, in early generations. It becomes the best after 10-20 generations for the different task complexities with both small (16) and large (128) population sizes.
• IDE with a population size of 16 shows a similar tendency. However, IDE becomes significantly better than the others only after about the 10th generation for the simple task (the 3-D Gaussian Mixture Model) and after about the 40th generation for the complex tasks (the 5-D, 7-D, and 10-D models).
• IDE with a population size of 128 is not realistic as IEC, so its results themselves are not important.
VI. CONCLUSION
A better IEC user interface and fast convergence are necessary to reduce IEC user fatigue. We proposed a paired comparison-based IDE that can reduce IEC user fatigue much more than comparing all individuals. It is especially effective when IEC tasks handle individuals displayed time-sequentially, i.e. sounds or movies. Since this advantage is obvious, we evaluated the other key point, convergence speed, and showed the superiority of IDE over IGA and the two tournament IGA's. From these two advantages, we can say that our proposed IDE is better than the conventional IDE, IGA, and tournament IGA's.
The next step of this research is to evaluate whether these advantages of our IDE, found through IEC simulations, are really effective for human IEC. We are planning to evaluate the proposed IDE using real human users. | 2009-09-10T23:30:05.000Z | 2009-09-10T00:00:00.000 | {
"year": 2009,
"sha1": "dfa9a412f90de49544350ecff878101254b9e73d",
"oa_license": null,
"oa_url": "https://catalog.lib.kyushu-u.ac.jp/opac_download_md/1808446/NaBIC2009.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0a2933165c48be06fddee4bf885214d8c668d599",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
233652018 | pes2o/s2orc | v3-fos-license | A green approach for the reduction of representative aryl functional groups using palladium ecocatalysts
ABSTRACT The reduction of functional groups has been thoroughly explored in organic synthesis. Nevertheless, most of the existing methodologies are based on conditions that are far from environmentally friendly, which limits their usefulness. Here we report a green approach for the reduction of representative aryl functional groups using Eco-Pd catalysts. The Eco-Pd catalysts were prepared from palladium-enriched vegetal filters that biosorbed palladium from an aquatic environment. The Eco-Pd catalysts were characterized by MP-AES, XRPD and XPS, and used in the reductions of aryl halides, aryl aldehydes and nitroarenes under green conditions. A mixture of glycerol and n-butanol was used as the solvent system and as the reducing reagent, in the presence of non-hazardous inorganic bases and without any ligands. Although Eco-Pd catalysis within this solvent system was homogeneous, we were able to recover and reuse the ecocatalyst to efficiently catalyze new reduction reactions. GRAPHICAL ABSTRACT
Introduction
The reduction of functional groups is a ubiquitous transformation in organic synthesis, and a great variety of reducing systems has been described. The choice of a reducing agent depends on the desired chemo-, regio- and stereoselectivity, and many systems have been investigated to find the required reactivity. One of the most used families of reducing agents is based on the addition or substitution of a nucleophilic hydride ion or a radical. The reducing reagent can derive from alkali, alkaline-earth or other metal hydrides (silicon, tin, transition metals) or from Lewis acids such as alanes or boranes. However, metal hydrides require aprotic and anhydrous solvents, which can be hazardous, such as THF, Et2O, Me2SO, DMF, HMPA or PhCH3 (1). The most common alanes and aluminohydrides are flammable, and boranes, borohydrides and tin hydrides generate problematic wastes, which are regulated by REACH (2). Moreover, the metal hydrides and sources of hydride equivalents are used in stoichiometric amounts and cannot be recycled, thus leading to higher E-factors (3). Reductions by dissolved metals and related methods raise similar issues.
Another approach involves the use of dihydrogen in hydrogenation or hydrogenolysis reactions. Various catalysts (Ni, Fe, Pd, Pt, Ru, Rh, Os, Ir) are employed at micro and macro scale under heterogeneous or homogeneous conditions. Several factors such as the catalyst, the support, the solvent, the temperature and the pressure of dihydrogen affect the efficiency of the reduction. For example, it is well known that catalytic hydrogenation is very sensitive to steric hindrance and can require high temperature and pressure, conditions that constitute an industrial risk. In addition, the price of PGMs (platinum group metals) recently reached record highs due to their geo-economic context, a phenomenon reinforced by an increasing demand for PGMs such as Pd on world markets.
For the aforementioned reasons, the chemical industry faces an imperative need to innovate and develop available, eco-friendly, sustainable and recyclable reducing catalytic systems. In this context, we studied the synthetic potential of a green and versatile reducing tool called Eco-Pd. Eco-Pd was developed during pioneering work by our research group on the ecological recycling of palladium using specific plants.
While investigating the remarkable sorption mechanism of aquatic plant roots, it appeared that this phenomenon was unexpectedly chemical rather than physiological, due to the presence of carboxylate and phenolic groups that can chelate metallic elements without an active internalization process. Our group could thus develop new vegetal filters using dead aquatic plant roots instead of living plants. The biosorption of transition metals was studied using these vegetal filters and showed high efficiency. Pd-rich biomasses were thermally and chemically treated to prepare Eco-Pd catalysts (4)(5)(6)(7)(8)(9). The use of Eco-Pd led to high yields in cross-coupling reactions such as the Sonogashira coupling.
In this work, we have extended the catalytic activity of Eco-Pd to representative reduction reactions: microwave-assisted reduction of aryl bromides, iodides and aldehydes, and thermal reduction of nitroarenes.
Reagents and instruments
All reagents and solvents used in this work were purchased from commercial sources. Analyses by MP-AES: the samples were digested in 10 mL of reversed aqua regia (1:2 hydrochloric acid (37%):nitric acid (65%)) under microwave-assisted digestion (Multiwave-Go, Anton Paar) with the following program: 20-165°C in 20 min and then 10 min isothermal at 165°C. Samples were filtered and then diluted to 0.4 mg.L−1 in 1% aqueous nitric acid. Mineral compositions were determined using a microwave plasma-atomic emission spectrometer (MP-AES) 4200 (Agilent Technologies) equipped with a concentric nebulizer and a double-pass cyclonic spray chamber. The pump speed during analysis was kept at 10 rpm and the sample introduction tube diameter was 0.89 mm. The analytical cycle consisted of 30 s rinsing with aq. 1% nitric acid followed by 25 s of sample uptake (pump speed 40 rpm) and then 20 s of equilibration before the reading at preselected integration times (pump speed 10 rpm). The integration time was set to 3 s for all elements. Unless otherwise stated, the automatic background correction mode available in the software was used. An Agilent SPS3 autosampler was used throughout the study.
X-ray diffraction (XRD) measurements on the samples dried at 110°C for 2 h were performed using a BRUKER diffractometer (D8 Advance, Cu Kα radiation, λ = 1.54086 Å) equipped with a Lynxeye detector.
The XPS analyses were carried out with an ESCALAB 250 instrument from Thermo Electron. The excitation source is the monochromatic Al Kα line (1486.6 eV). The analyzed surface has a diameter of 500 µm. Photoelectron spectra are calibrated in binding energy relative to the energy of the C-C component of the C1s carbon at 284.8 eV. The charge is compensated by an electron beam (−2 eV).
Gas chromatography analyses were performed using a Thermo Scientific Trace 1300 instrument equipped with an EI ionization source and an ISQ-QD detector. The reactions were monitored using para-cymene as internal standard and FID methods for the calibration. 1H NMR and 13C NMR spectra were recorded on a Bruker 400 AVANCE fitted with a dual 1H/13C-gradient Z probe at 400 and 100 MHz, respectively, using the solvent as internal standard (7.26 ppm for 1H and 77.16 ppm for 13C in CDCl3). Some reactions were carried out using a microwave-assisted digestion apparatus (Multiwave-Go, Anton Paar) with the following program: 20-120°C in 1 min, then 1 min isothermal at 120°C, followed by 120-150°C in 1 min, and finally 10 (aryl iodides), 15 (aryl bromides), or 15-30 (aldehydes) min isothermal at 150°C.
Biomaterial collection and preparation for biosorption
• Eichhornia crassipes were purchased from a specialized grower (Nymphea company, France). The plants were placed in an outdoor tank under a greenhouse, filled with water to which guano was added as fertilizer.
• Mentha aquatica was purchased from a specialized grower (Nymphea company, France) that used hydroponics in order to maximize the roots of the plants.
• Pistia stratiotes was either purchased from a specialized grower (Nymphea company, France) or directly collected by CNR (Compagnie Nationale du Rhône) from the Rhône river and the Gardon river near the city of Comps (Gard, France).
• Ludwigia peploides was collected by CNR (Compagnie Nationale du Rhône) from the Sèze river and the Rhône river near the city of Bagnols-sur-Cèze (Gard, France).
• Reynoutria japonica was collected either by ETPB Gardons from the Gardon d'Alès river near the city of La Grand-Combe (Gard, France), or by the Mixt Union Ganges le Vigan from the Hérault river near the city of Le Vigan (Gard, France).
• Pine bark and pinecone were collected in the region of Montpellier (Hérault, France) in fall 2018 and are from the black pine species (Pinus nigra).
• Coffee grounds were obtained directly from our laboratory.
Eichhornia crassipes, Mentha aquatica, Pistia stratiotes, Ludwigia peploides, Reynoutria japonica, pine bark and pinecone were dried in an oven at 80°C until constant weight. They were then ground, sifted through a 1.25 mesh sieve, and washed with water (3 × 100 mL.g−1). They were then dried for 18 h at 80°C before biosorption. Coffee grounds were washed several times with hot water until the filtrate became colorless and then dried for 18 h at 80°C before biosorption.
Preparation and characterization of the ecocatalysts
The dried palladium-loaded biomaterials were heated at 550°C under air for 4 h. The resulting powder was heated in vegetable hydrochloric acid (10) (1 mL per 100 mg of thermal residue) under reflux for 2 h. The resulting solution was cooled to room temperature, then filtered through a pad of dicalite powder that had beforehand been washed with 37% HCl. The solution was then concentrated under reduced pressure, affording Eco-Pd as a brown solid. The ecocatalysts were characterized by MP-AES analyses. Eco-CaCO3 was obtained by thermal treatment of Ostreidae from Oléron; its Ca loading was 59 wt%. Eco-CaO was prepared by thermal treatment of Crepidula (11). Both Ostreidae and Crepidula were collected from a local producer in Bouzigues (Hérault, France).
Procedure for the reduction reaction
Microwave procedures: under argon, a Teflon microwave reactor was loaded with the substrate (1.0 mmol, 1 eq.), Eco-Pd (0.0035 mmol, 0.35 mol%) and K2CO3 (2 mmol, 2 eq.). A mixture of degassed n-butanol and glycerol (6 mL, v:v = 1:1) was added and the reactor was placed in the microwave apparatus following the heating method specific to the substrate.
Work-up: the reaction mixture was diluted by addition of water (5 mL) and ethyl acetate (10 mL). After separation of the phases, the aqueous phase was extracted three times with 10 mL of ethyl acetate. The combined organic layers were dried over anhydrous magnesium sulfate and filtered into a 100 mL volumetric flask. For GC yields, 0.2 eq. of para-cymene was added to the reaction mixture as an internal standard. Because of the volatility of the products, all reported yields for the reduction of aryl halides into the corresponding arenes were determined by GC-MS analysis using an FID detector.
Procedure for the recycling of the Eco-Pd catalyst
Reduction of 2-bromonaphthalene 1a was performed according to the general procedure. After completion of the reaction, ground and dehydrated roots of Eichhornia crassipes (1 g per 15 mg of palladium) were added to the reaction mixture, diluted in cyclohexane and water. After 2 h of stirring at room temperature, the reaction medium was centrifuged at 4000 rpm for 25 min. The remaining solution was diluted with EtOAc to extract the product of the coupling reaction. The precipitate of Pd-loaded roots of E. crassipes and Pd black was subjected to the thermal treatment/activation sequence, and the new Eco-Pd was reused for a new reduction reaction.
Characterization of the Eco-Pd catalysts
Eight plant biomasses were dried, ground, washed and dried again to prepare vegetal filters that were used to biosorb an aqueous solution of tetraamminepalladium(II) nitrate. The resulting Pd-enriched vegetal filters were then transformed into eight Eco-Pd1-8 catalysts.
The mineral composition of the Eco-Pd1-8 catalysts was analyzed by MP-AES (Table 1). The main mineral elements are Ca and Pd, and they exhibit an inverse correlation depending on the environmental habitat of the plants. Eco-Pd1-4, derived from aquatic plants, present a high percentage of Ca at the expense of Pd. On the contrary, Eco-Pd5-8, derived from non-aquatic plants, show a high percentage of Pd at the expense of Ca. Eco-Pd4, derived from Pistia stratiotes, was grown in an effluent rich in Mn, explaining its naturally high percentage of Mn.
Among the Eco-Pd1-8 catalysts, the palladium amount varies greatly, from 3.7 wt% with Pistia stratiotes to 46.30 wt% with pinecone. The dispersion of palladium is an important aspect of Pd catalysis, which can be studied using these eight Eco-Pd catalysts in the reduction of aryl functional groups.
Eco-Pd1 was further analyzed by XRPD to identify palladium species in crystalline form. After thermal treatment, two Pd crystalline structures were identified, PdO and K2PdO2 (Figure 1).
After HCl treatment of the thermal residue, the diffractogram revealed the presence of K2PdCl6, which corresponds to the high and rare oxidation state Pd(IV) (Figure 2). X-ray photoelectron spectroscopy (XPS) of Eco-Pd1 was performed in order to confirm the oxidation state of palladium (Figure 3).
The binding-energy position of the Pd 3d5/2 photoelectron peak at 337.9 eV confirms the presence of Pd(IV) in Eco-Pd1.
Reduction of aryl halides
An alternative green approach was developed for the reduction of aryl halide derivatives. Our strategy was based on using biosourced palladium catalysts, Eco-Pd, with non-hazardous inorganic bases, in a green solvent and without any ligands. Glycerol was chosen as the green solvent since it is a largely available, inexpensive, non-toxic and renewable compound (12). Here glycerol also plays the role of hydrogen-donating reagent within the Pd-catalyzed reductions, avoiding the use of dihydrogen.
Because of its high viscosity, glycerol was mixed with a green co-solvent, n-BuOH.
The aryl halide reduction conditions were first optimized using 2-bromonaphthalene 1a as a model substrate (Table 2). Facing poor conversions under classical thermal heating in batch (entry 1), microwave-assisted activation was used to increase conversions while decreasing the reaction time (entries 2-7). The microwave-assisted activation was also motivated by the use of glycerol as co-solvent, which has a low vapor pressure, a high microwave absorption and a rapid heating (13). Inorganic bases, such as sodium, calcium or potassium carbonate, were tested (entries 2-5). Potassium carbonate gave the best conversion and yield by preventing the Ullmann homocoupling.
The influence of the elemental composition of the ecocatalysts was then tested using the eight Eco-Pd1-8 catalysts (entries 7 and 8). Almost no difference was observed among the starting biomasses, although their Pd composition varies from 3.7 to 40.08 wt%, suggesting that Pd dispersion has little influence on the reactivity of the catalyst. Moreover, no difference was noted for Eco-Pd4, which has a high Mn wt%, suggesting that Mn does not play a role in the reduction reaction mechanism.
With the optimal conditions in hand, the substrate scope for the reduction of aryl halides was explored (Figure 3).
In general, the reductions gave, as expected, better conversions for aryl iodides than for bromides. The presence of electron-withdrawing groups (acetyl, nitro, cyano and chloro) did not affect the reduction of the carbon-iodide bond, forming the corresponding arenes 2b-2e in good yields (60%-79%). Two limitations were found: the trifluoromethyl group 2f, and the methyl ester 2j, for which transesterification with the solvent system occurred instead of the reduction.
It is noteworthy that Eco-Pd selectively catalyzed the reduction of the carbon-iodide bond in the presence of a chlorine atom and gave chlorobenzene 2e in 78% yield. Reduction of the carbon-bromide bond gave similarly high conversions but lower yields than reduction of the carbon-iodide bond, due to a competitive reduction of the electron-withdrawing groups in 2b and 2c.
Interestingly, the presence of electron-donating groups (methoxy and amino) was well tolerated; 2g and 2h could be prepared in good yields.
However, no reduction was observed with the heteroaryl halides 1k-m; thiophene derivatives mainly gave the homocoupling products.
We compared our methodology to the literature for the reduction of the model substrate, 2-bromonaphthalene (Table 3). Although the reported methods gave yields similar to our methodology, they gave much lower TON and TOF and are far from being respectful of the environment. Indeed, all reported conditions require a high catalytic loading (1-16 mol%) of metal, cobalt (entry 1) or palladium (entries 2-5). The source of hydrogen can be dihydrogen under very high pressure (entry 1), and even when the source of hydrogen is not hazardous, the methods require the use of ligands to promote the reaction (entries 2-5).
Regarding the reaction conditions, the solvent system, the reaction time and the yield, our methodology represents a great improvement over the existing methods in terms of green chemistry.
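For readers comparing methods, the turnover number and frequency mentioned above follow directly from the catalyst loading. The worked numbers below are only an illustration based on the 0.35 mol% Pd loading and 15 min microwave run given in the experimental section, assuming full conversion; they are not the values reported in Table 3.

```latex
% Turnover number and frequency for a Pd-catalysed reduction:
\mathrm{TON} = \frac{n_{\text{product}}}{n_{\text{Pd}}}, \qquad
\mathrm{TOF} = \frac{\mathrm{TON}}{t}
% Illustration (assumed full conversion): 0.35 mol% Pd gives
% TON = 1/0.0035 \approx 286; over t = 15 min = 0.25 h,
% TOF \approx 1.1 \times 10^{3}\ \mathrm{h^{-1}}.
```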
Reduction of aryl aldehydes
The versatility of the reduction methodology was investigated by studying the reduction of aryl aldehydes (Figure 4).
Based on the reduction of aryl halides, similar green conditions were used with an extended reaction time to optimize the conversion.
a. General conditions: reactions were performed on 1 mmol scale in an Anton Paar microwave device, in digestion mode. The reactor was loaded with the aryl bromide, the base and the Eco-Pd under an argon atmosphere. Then, the degassed solvent was added, and the reactor was set up in the MW device following the heating program. After completion, the reaction mixture was diluted with ethyl acetate and water. The aqueous phase was extracted with ethyl acetate three times. The organic layer was washed with water and finally dried over magnesium sulfate.
b. p-Cymene was used as internal standard for GC/MS analyses for conversion and yield determination.
c. Heating stage at 100°C instead of 120°C.
In general, aryl aldehydes bearing an electron-withdrawing group, such as cyano 5a and chloro 5b, gave the expected alcohols in good yields. The conditions were compatible with the nitro substituent 5c but partially gave the further reduced p-aminobenzyl alcohol. Interestingly, aryl aldehydes bearing an electron-donating group, such as methyl 5d, methoxy 5e or hydroxyl 5f, gave excellent conversions but lower yields, due to loss of the hydrophilic products during extraction. Similarly, non-substituted benzaldehyde and furfural led to benzyl alcohol 5g and compound 5h in quantitative conversions and moderate to good yields. The major limitation of the method was the reduction of the dimethylamine-substituted benzaldehyde 5i.
Several conditions can be found in the literature for the reduction of 4-cyanobenzaldehyde 4a (Table 4). Once again, although the reported methods gave yields similar to our methodology, they gave much lower TON and TOF and are far from being respectful of the environment. Indeed, numerous methods based on the use of a silane and a transition metal catalyst, such as iron (19) or zinc (20), or on the use of sodium borohydride, are efficient but generate considerable amounts of waste (entry 1) or hazardous borylated wastes under REACH regulation (entry 3). Nickel is an excellent promoter of such reductions (21,22). Among numerous methods, our group reported a Meerwein-Ponndorf-Verley type reduction based on a nickel ecocatalyst in green conditions, which however requires a long reaction time (entry 4) (23). Compared to other methods, our new Eco-Pd procedure gives similar yields but represents a shorter and more sustainable approach to this reduction.
Reductions of nitroarenes
As mentioned above, Eco-Pd1 could catalyze the reduction of p-nitrobenzaldehyde 4c to p-nitrobenzyl alcohol 5c but also led to the formation of the byproduct p-aminobenzyl alcohol. Therefore, the reduction of nitroarenes was investigated using Eco-Pd in a mixture of glycerol and n-butanol (Figure 5).
Based on the reductions of aryl halides and aryl aldehydes, similar green conditions were tested, but the microwave activation was disappointing. Instead, the reductions of nitroarenes were performed in batch at 120°C for 18 h using 2 mol% of Eco-Pd1, and the base was changed to Eco-CaO (11). In general, nitroarenes were transformed into the desired aminoarenes, but azoarenes and azoxyarenes were systematically observed as well. Reduction of nitrobenzene gave aniline 7a in a good yield of 65%. The presence of an electron-withdrawing group did not clearly affect the reduction, while the presence of an electron-donating group decreased the yield, owing to the hydrophilicity of the products 7d and 7e. We compared our methodology to the literature regarding the reduction of nitrobenzene 6a (Table 5). The reported methods gave better yields than our methodology but required a large excess of reagents.
Figure 6. Reduction of nitrobenzene.
Regarding the reaction conditions, the catalytic concentrations and the solvent system, our methodology represents a great improvement for such reductions in terms of green chemistry.
Recycling of the catalyst
The high polarity of the glycerol/n-BuOH solvent system facilitated the solubility of the Pd salts, making this reaction a homogeneous catalysis. A major drawback of homogeneous catalysis is the difficulty of recovering and reusing the catalyst, yet catalyst recovery and reuse is a key principle of green chemistry.
After completion of the reduction, a fraction of the Eco-Pd was reduced to Pd(0) black by glycerol, while the rest remained soluble in the reaction solvent system. The soluble part of the Eco-Pd was biosorbed by root powder of E. crassipes directly from the organic medium. After centrifugation of the reaction mixture, the biosorbed Eco-Pd and the Pd(0) black were subjected to the thermal treatment/activation, generating a new Eco-Pd that could be reused for a new reduction.
Recycling of Eco-Pd was studied through the reduction of 2-bromonaphthalene 1a. Up to four successive runs were tested after recycling of the same Eco-Pd1 (Table 6). After the first run, conversions remained quantitative while yields slightly decreased and then remained similar for the second, third and fourth runs (Figure 6).
The mineral composition of Eco-Pd1 was analyzed by MP-AES after each run and did not change much, since between 75% and 90% of the Pd was recovered by recycling. The mass percentage of Pd slightly decreased while that of K increased, which might be due to the biosorption of K derived from K2CO3. However, the efficiency of Eco-Pd1 was not altered by recycling.
Conclusion
In conclusion, we demonstrated the recycling of palladium from an aquatic environment and its reuse in green metal-catalyzed syntheses. Palladium was efficiently recycled from water through biosorption using vegetal filters based on aquatic plants or coffee grounds. The palladium-enriched vegetal filters were transformed into Eco-Pd catalysts that were used in the reduction of aryl functional groups. Green conditions, based on using the Eco-Pd catalysts in a mixture of glycerol and n-butanol, were compatible with the reductions of aryl halides, aryl aldehydes and nitroarenes. Moreover, the Eco-Pd catalysts were easily recovered from the reaction mixtures and reused to efficiently catalyze new reduction reactions.
Compared to common metal hydrogenation or hydride-based methodologies, our technology uses catalytic concentrations of reducing metal, a green solvent system and does not generate hazardous waste, representing a great alternative for key reductions in the field of green chemistry. | 2021-05-05T00:07:59.377Z | 2021-03-27T00:00:00.000 | {
"year": 2021,
"sha1": "f829826bf68ccb09e22c5bed66f2f004d06e5fa9",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1080/17518253.2021.1898682",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "c3c9a6a01f8aede145026b90a7948a9d3a4b8a9a",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |