Multi-Omics Analysis of Glioblastoma and Glioblastoma Cell Line: Molecular Insights Into the Functional Role of GPR56 and TG2 in Mesenchymal Transition

G protein-coupled receptor 56 (GPR56/ADGRG1) is an adhesion GPCR with an essential role in brain development and cancer. Elevated expression of GPR56 was observed in clinical specimens of glioblastoma (GBM), a highly invasive primary brain tumor. However, we found the expression to be variable across specimens, presumably due to the intratumor heterogeneity of GBM. Therefore, we re-examined GPR56 expression in public domain spatial gene expression data and single-cell expression data for GBM, which revealed that GPR56 expression was high in cellular tumor, infiltrating tumor cells, and proliferating cells, and low in microvascular proliferation and peri-necrotic areas of the tumor, especially in hypoxic mesenchymal-like cells. To gain a better understanding of the consequences of GPR56 downregulation in tumor cells and the other molecular changes associated with it, we generated an shRNA-mediated GPR56 knockdown in the GBM cell line U373 and performed transcriptomics, proteomics, and phospho-proteomics analysis. Our analysis revealed enrichment of gene signatures, pathways, and phosphorylation of proteins potentially associated with mesenchymal (MES) transition in the tumor and a concurrent increase in the invasion and migration behavior of the GPR56 knockdown GBM cells. Interestingly, our analysis also showed elevated expression of Transglutaminase 2 (TG2), a known interactor of GPR56, in the knockdown cells. The inverse expression of GPR56 and TG2 was also observed in intratumoral, spatial gene expression data for GBM and in GBM cell lines cultured in vitro under hypoxic conditions. Integrating all these observations, we propose a putative functional link between the inverse expression of the two proteins, the hypoxic niche, and the mesenchymal status in the tumor. Hypoxia-induced downregulation of GPR56 and activation of TG2 may result in a network of molecular events that contribute to the mesenchymal transition of GBM cells, and we propose a putative model to explain this functional and regulatory relationship of the two proteins.
INTRODUCTION

Glioblastoma (GBM) is one of the most aggressive forms of primary brain tumors, with a poor prognosis of 12-15 months and just 3-5% survival over five years (1). The dismal prognosis is attributed mainly to the complex inter- and intratumor heterogeneity, contributing to drug resistance (2,3). Large-scale transcriptomic studies to characterize the heterogeneity of GBM led to the identification of molecular subtypes of GBM, namely proneural (PN), classical (CL), and mesenchymal (MES) (3,4). The PN and MES subtypes have been more consistently identified, with PN relating to a more desirable outcome and MES to poor survival (5,6). Recent work on GBM heterogeneity reports that molecular features of histologically defined anatomical areas of the tumor (leading edge (LE), infiltrating tumor (IT), microvascular proliferation (MVP), cellular tumor (CT), and pseudo-palisading cells around necrosis (PAN)) are highly distinct and conserved regardless of whether they are derived from the same or different patients (7). Also, a recent analysis based on single-cell RNA sequencing revealed that GBM cells exhibit four cellular states (neural progenitor-like (NPC-like), oligodendrocyte progenitor-like (OPC-like), astrocyte-like (AC-like), and mesenchymal-like (MES-like)), and that a single malignant cell can generate all four states (8,9). Thus, such inter-/intratumoral heterogeneity and cellular plasticity may have significant implications for treatment and likely outcomes. Adhesion G protein-coupled receptors (aGPCRs) are the second largest family of G protein-coupled receptors (GPCRs), involved in many cellular functions, and serve as drug targets for various clinical conditions (10,11). aGPCRs are characterized by an unusually long extracellular domain that interacts with other cells as well as mediates interactions between cells and the extracellular matrix (ECM) (12), seven transmembrane domains, and an intracellular C-terminus. In recent years, GPR56 has emerged as an important aGPCR with a multitude of functions in health and disease. GPR56 was originally discovered for its role in the genetic disorder bilateral frontoparietal polymicrogyria (BFPP) (13,14), which results in a severe cortical malformation, thus implying a role in brain development. Further, GPR56 is also implicated in neuronal myelination and myelin repair (15,16), oligodendrocyte precursor cell (OPC) proliferation (17), radial axonal sorting by Schwann cells (16), and others. Altered expression of GPR56 has been reported in multiple cancers, including glioblastoma, melanoma, breast cancer, and colon cancer (18)(19)(20)(21)(22). Its expression has been found to influence cell adhesion, migration, and proliferation in a variety of cancer cell types as well as epithelial-mesenchymal transition (EMT) and radioresistance (19,(23)(24)(25)(26)(27).
Whereas these reports clearly imply GPR56 role in cancer, it seems to function as an oncogenic factor in some (19,28) and as a tumor suppressor in others (26,29). Based on the earlier studies in brain development and melanoma, collagen III and TG2, respectively, have been identified as key interactors of GPR56 (13,30). While GPR56 interacts with collagen III during brain development, its interaction with TG2 plays a role in both developing brain (OPC proliferation) and cancer. TG2 was discovered to be the ECM-ligand of GPR56 in melanoma tissues where it was shown that the binding of TG2 to the receptor initiated internalization and lysosomal degradation of TG2, resulting in decreased fibronectin deposition in the ECM (13,15,30). The interaction of TG2-GPR56 suggests an inhibitory role of GPR56 in the progression of melanoma (27). TG2 is a multifunctional protein, present intracellularly, on the cell surface as well as in the ECM. TG2 crosslinks fibronectin and other adhesion proteins and, in some instances modulates their interaction with integrins on the cell surface with a role in cell migration, growth, and differentiation (31). TG2 can drive cancer stem cell survival and tumor formation either via ECM modifications that activate stromal cells or by modulation of integrin signaling inhibiting the hippo pathway in tumor cells (32). In GBM, high levels of TG2 expression are associated with aggressive MES phenotype, and a recent study suggested that TG2 inhibition prevents differentiation into mesenchymal subtype (33). GPR56 is upregulated in GBM and other astrocytomas, as we and others have reported (20,34,35). However, we observed that GPR56 expression is heterogeneous across GBM specimens and within the tumor, which was also supported by the observation of Moreno et al., who showed that while GPR56 is highly expressed in the PN subtype of GBM, it is downregulated in the MES subtype (26). Given its role in the developmental processes and implications in cancer, it forms an attractive target to study. Thus it is important to delineate the mechanistic basis of these varied influences of the receptor and the pathways involved in normal development and cancer. In this study, we tried to understand the heterogeneous expression and functional significance of GPR56 in GBM using public domain data and experimental multi-omics data, including transcriptomics, proteomics, and phosphoproteomics, of GPR56 knockdown U373 GBM cells. Our study suggests a role of GPR56 and TG2 interaction in PN to MES transition in GBM and provides insights on the putative molecular events involved, rendering this interaction a potential therapeutic target. Sample Collection and Processing Glioblastoma samples were obtained from patients at Mazumdar Shaw Medical Center, Bengaluru, India, during surgery, with informed consent. All procedures were carried out in compliance with the recommended protocols and with the approval of the Institutional Ethics Committee, NHH/MEC-CL-2015/384(A). The samples were snap-frozen and formalin-fixed, and only tumors histopathologically classified as GBM as per WHO guidelines (2016) were used. Brain tissue specimens obtained from temporal lobe epilepsy surgeries were used as experimental controls (Supplementary Table 1), were obtained from the brain bank at the National Institute of Mental Health and Neurosciences (NIMHANS), Bangalore, India. qRT-PCR Expression of individual genes at the RNA level in cells or tissue specimens was assessed using qRT-PCR. 
For this purpose, total RNA was isolated from the cells or tissues using the Macherey-Nagel Total RNA and Protein Isolation Kit (Cat#: 740933.50, Macherey-Nagel, Germany). The extracted RNA was then converted into cDNA using a High-Capacity cDNA Reverse Transcription Kit (Cat#: 4368814, Applied Biosystems, Lithuania) as per the manufacturer's instructions. qRT-PCR was carried out using the KAPA SYBR Fast qPCR universal master mix (2X) (Cat#: KK4602, KAPA Biosystems, MA, USA) and the Roche LightCycler 480 Real-time PCR system (Roche Diagnostics, Germany). The second derivative maximum (2^-ΔΔCq) method was used to normalize the qPCR results (36), and the relative change in expression was evaluated using the geometric mean of the selected reference genes (18S and RAB7A). Primers used for the qRT-PCR reactions are given in Table 1.

Western Blotting
Western blotting was performed on whole-cell protein extracts transferred to a 0.45 µm pore size PVDF membrane. The membrane was blocked for 1 h in a blocking buffer containing 3% BSA. The primary antibodies were diluted in the blocking buffer as indicated below and incubated with the membrane overnight. The blot was then washed three times with TBST (Tris-buffered saline with Tween 20) and incubated with the respective secondary antibodies in the blocking buffer. This was followed by three washes with TBST and two washes with TBS, and the blot was developed using an ECL kit. Beta-actin and GAPDH were used as loading controls for TG2 and GPR56, respectively. The details of the antibodies used in the analysis are given in Table 2 below.

Public Domain Data Retrieval and Analysis
Single-cell data: Neftel et al. (8) generated and analyzed single-cell RNA-seq data from 28 pediatric and adult glioblastoma tumors to identify four major neoplastic cell types defined by six gene modules: 1. mesenchymal: hypoxia-independent (MES1-like) and hypoxia-dependent (MES2-like) mesenchymal-related gene sets; 2. astrocytic: astrocytic (AC-like) marker gene set; 3. oligodendroglial: progenitor-like (OPC-like) lineage marker gene set; and 4. neural: stem and progenitor cell gene sets (NPC1-like and NPC2-like); as well as two cell cycling modules, namely G1S and G2M (CC). The processed data, which is available in Transcripts per Million (TPM) format at the Broad Institute Single-Cell Portal (SCP393; https://www.broadinstitute.org), was downloaded and used to study the expression profiles of GPR56 across GBM meta-modules.

Ivy Glioblastoma Atlas Project (Ivy GAP): In the Ivy GAP analysis (7) (https://glioblastoma.alleninstitute.org), laser capture microdissection of GBM tumor areas followed by RNA-seq analysis was used to generate molecular signatures of cells present in five major anatomic features of GBM visible by H&E staining, namely the LE, IT, MVP, CT, and PAN. The processed Ivy GAP quantitation data was downloaded from the website and log2 transformed. Where required, the log-transformed data were z-scored using the following formula: z = (X - m)/s, where X is the expression measure of a gene in a sample, m is the mean expression measure of the gene across all the samples, and s is the standard deviation of the expression measure of the gene across all the samples.

Generating GPR56 shRNA-Mediated Knockdown Cells
SureSilencing shRNA plasmids targeting GPR56 and a non-targeting control were purchased from SABiosciences (Qiagen) and contained the following targeting sequences: shGPR56.2 (21 bp): GAACCGACATGCTGGGAGATT and shNON-CODING (21 bp): GGAATCTCATTCGATGCATAC.
Plasmids were linearized with NaeI (NEB) for 3 h at 37°C, and the enzyme was inactivated by incubation at 65°C for 20 min. The linearized DNA was purified using phenol-chloroform extraction, followed by DNA precipitation with isopropanol. Reverse transfection of 8x10^5 U373 cells/well was performed using 0.4 µg linearized shRNA plasmid DNA with Fugene6 (Roche) in Opti-MEM in a 24-well plate. Cells were incubated for 24 h and then reseeded in a six-well plate at 5x10^4 cells/well in DMEM with 10% FBS containing 200 µg/ml hygromycin (selection medium). The selection medium was initially removed daily until only a few viable cells remained in each well. Once colonies were established, cloning cylinders (Sigma-Aldrich, USA) were placed around single colonies on the drained plate and sealed with 1% (w/v) low-melting agarose (Gibco, USA). Cells were then treated with Accutase, removed, and seeded into a fresh 24-well plate supplemented with the selection medium.

Confocal Microscopy
GPR56 control or knockdown cells were plated onto poly-L-lysine-coated glass coverslips in 24-well plates at 1x10^4 cells/well in DMEM with 10% FBS for 72 h. Cells were washed with PBS and fixed with 4% paraformaldehyde in PBS for 5 min to conserve epitopes. Cells were washed three times with PBS and blocked with 1% BSA in PBS for 30 min prior to incubation with 1 µg/ml anti-GPR56 antibody (R&D Systems) for 2 h. Cells were washed three times with PBS, incubated for 1 h with 4 µg/ml donkey anti-sheep Alexa Fluor 568 (Invitrogen, USA) in 1% BSA/PBS, and washed three times with PBS for 5 min. Nuclei were counterstained with DAPI-containing Vectashield mounting medium (Vector Laboratories, USA). Cells were analyzed using a Leica SP5 confocal microscope (Leica Microsystems, Germany) with a 63x oil immersion objective.

Invasion Assay
For the invasion assay, we used modified two-chamber plates with an 8-µm pore size (Cat#: CLS3464, Sigma-Aldrich, USA) coated with 1 mg/ml matrigel (Cat#: E1270, Sigma-Aldrich, USA). GPR56 control and knockdown cells (10^4 cells) were added in serum-free media onto the top chamber. In the lower chamber, complete media was used as a chemoattractant. After incubating for 24 hours, the non-invading cells in the upper chamber were removed by gentle wiping with a cotton-tipped swab. Invading cells on the lower surface of the membrane were fixed with 4% formaldehyde in PBS for 10 min, permeabilized with methanol for 20 min, stained with 0.4% crystal violet (Cat#: V5265, Sigma-Aldrich, USA) for 15 min, counted, and photographed at a magnification of 10x. The fold increase in invasion was calculated by dividing the total number of invading cells from the GPR56 knockdown group by that from the GPR56 control group.

Wound Healing Assay
GPR56 control and GPR56 knockdown cells were cultured in a 24-well plate until they attained 80% confluence, serum-starved for 24 hours, and wounded using a disposable 200 µl pipette tip. Cell migration into the cell-free area was observed and imaged every 24 h for 2 days at 10x magnification using a phase-contrast microscope. The migration distance was measured using the integrated Carl Zeiss software (Zeiss, Germany) as follows: the wound area (in pixels) was measured by taking the average of the areas (n=3) delineated by the wound boundaries. The percentage of wound closure was calculated using the formula ((A0 - An)/A0) x 100, where A0 is the average wound area at 0 hours and An is the average area at the nth hour. Results were derived from three independent experiments, each performed in triplicate.
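As a worked illustration of the two readouts just defined, the following is a minimal R sketch; all object names and numbers are hypothetical placeholders, not data from this study.

```r
# Percent wound closure: ((A0 - An) / A0) * 100, computed from mean wound areas
# (in pixels); invasion is reported as fold change of invading cell counts.
wound_px <- data.frame(
  hour      = c(0, 24, 48),
  control   = c(52000, 28000, 15300),   # hypothetical mean wound areas (n = 3)
  knockdown = c(51500, 15800, 3060)
)
closure <- function(area, a0) (a0 - area) / a0 * 100
wound_px$closure_control   <- closure(wound_px$control,   wound_px$control[1])
wound_px$closure_knockdown <- closure(wound_px$knockdown, wound_px$knockdown[1])

# Fold increase in invasion: invading knockdown cells / invading control cells
invading <- c(control = 85, knockdown = 210)   # hypothetical counts
fold_invasion <- unname(invading["knockdown"] / invading["control"])

print(wound_px)
print(fold_invasion)
```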
RNA-Seq Analysis Total RNA isolated as described above was quantified and assessed for quality using Agilent BioAnalyzer to ensure that all samples had an RNA integrity number (RIN) of 7 or more. The poly-A enriched RNA library's construction was performed according to the manufacturer's protocol using the NEB Ultra RNA-seq Library Prep kit protocol (NEB, USA). All libraries were quantitated using Qubit High Sensitivity Assay (Invitrogen, USA), and RNA-seq was carried out using Illumina HiSeqX by a 151 bp paired-end sequencing as per the manufacturer's instructions to generate 30M, 100bp paired-end reads. Using the STAR (37) algorithm with default parameters, the sequenced transcriptome was aligned to the hg19 reference genome. Gene expression was quantified using the ENSEMBL reference with bedtools (38). For count data normalization, the Count per Million (CPM) method was applied using the following formula: CPM = (count/sum (count)*1000000). The CPM data was log2 transformed after adding a pseudo count of 1 for further analysis. A two-fold change in expression was considered to identify differentially expressed genes. Since we used single sample-pair for analysis, we used a linear regression model and Prediction Interval approach to identify differentially expressed genes and to assess the significance. The z-score for each gene was calculated using the following formula: Pathway analysis: To know the significance of differentially expressed genes we categorized them by GO-molecular function, and biological pathways using the Protein ANalysis THrough Evolutionary Relationships (PANTHER v13.1, pantherdb.org) (39,40). Single-sample gene set enrichment analysis (ssGSEA): ssGSEA was run for PN and MES sub-type gene signatures reported by Verhaak et al. (3) to develop an enrichment score. The low/high GPR56-associated signature was defined by differentially expressed genes between GPR56 knockdown and control cells. Sample Preparation and iTRAQ Labeling GPR56 control and knockdown cells were grown to 70% confluence, starved in serum-free medium for 12 h, and then lysed in cell lysis buffer (2% SDS in 50 mM triethyl ammonium bicarbonate (TEABC) with sonication. Protein concentration was estimated using the BCA method (Pierce; Waltham, MA, USA). 200 µg of protein from the GPR56 control or knockdown cells were reduced using 5mM DTT at 60°C for 20 minutes. Subsequently, alkylation was carried out using 15mM iodoacetamide for 15 minutes in the dark. After reduction and alkylation, the proteins were precipitated with ice-cold acetone and incubation at −20°C overnight. For enzyme digestion of the proteins, sequencing grade trypsin reconstituted in 50 mM Triethyl ammonium bicarbonate was added to the dried protein in the ratio of 1:20 (trypsin: protein in µg), and the digestion was carried out at 37°C for overnight. Trypsin digested peptides were then subjected to iTRAQ labeling using an iTRAQ 8-plex kit (AB SCIEX Pte Ltd., USA) as per the manufacturer's instructions. Labeling tag details are as follows: GPR56 control sample batch1 with 114, GPR56 knockdown batch1 sample with 115, GPR56 control sample batch 2 with 118, and GPR56 knockdown batch 2 with 119 tags. Reactions were quenched with 10 mM glycine. All the four labeled samples were combined, desalted using C18 StageTip and vacuum dried. Phosphopeptide enrichment For total proteome analysis, 10 percent (40 µg) of the pool was used and subjected to MS analysis in duplicates, as described below. 
Remaining pool -90 percent (360 µg) was used to enrich phosphopeptides using a metal affinity-based Phosphopeptide Enrichment Kit (Pierce, Thermo Fisher Scientific, USA). Briefly, the dried peptides were dissolved in 150 ml of binding buffer. TiO 2 beads were washed twice with washing buffer, and a total of 300 µg of tryptic peptide solution was incubated with an appropriate amount (tryptic peptide: TiO 2 = 1:1, w/w) of TiO 2 beads by end-over-end rotation at room temperature for 30 min. The phosphopeptide-bound beads were collected by brief centrifugation, washed twice with 500 ml washing buffer, and transferred to a C18 StageTip (Thermo Fisher Scientific) placed on the top of a 1.5-ml centrifuge tube and was centrifuged to remove the wash buffer, and phosphopeptides were collected from the resin with elution buffer. The eluents were dried and stored at −80°C until further LC-MS/MS analysis. Mass Spectrometry (MS) The tryptic peptides or the phosphopeptide fraction were subjected to LC-MS/MS analysis on Orbitrap Fusion Tribrid mass spectrometer interfaced with Easy nano-LC II (Thermo Scientific, Bremen, Germany) and were analyzed in duplicates. The peptides were first loaded on a preanalytical column (2cmx75µm, Magic C18 Aq) (Michrom Bioresources, Inc.) using solvent A (0.1% formic acid). Peptides were then resolved on an analytical column (50cm x75µm, Magic C18 Aq) using a gradient of 5-38% of solvent B (95% acetonitrile, 0.1% formic acid) at a flow rate of 280 nL/min for 120 min. The data-dependent acquisition of MS spectra in the range of 400-1600 m/z was carried out using Orbitrap mass analyzer with a mass resolution of 120,000 at MS level and 60,000 at MS/MS level; higher energy collision dissociation was selected for fragmentation with 37% normalized collision energy. The automatic gain control was set to 2x10 6 ions for full MS and 1x10 6 ions for MS/MS. Internal calibration was executed using lock mass from ambient air (m/z 445.1200025). Data Analysis Protein identifications and quantifications of differentially expressed proteins were carried out as follows. The MS data was analyzed using Proteome Discoverer (PD; Thermo Fisher Scientific, Version 2.2). MS/MS search was carried out using the SEQUEST search engine against the NCBI RefSeq database version 89 (containing 425211 entries). Search parameters included trypsin as an enzyme with 1 missed cleavage allowed; precursor and fragment mass tolerance were set to 20 ppm and 0.1 Da, respectively; methionine oxidation was set as a dynamic modification while methylthio modification at cysteine and iTRAQ modification at N-terminus of the peptide was set as static modifications. Signal to noise ratio applied was 1.5 or more. Peptide identifications were obtained by setting a target FDR threshold of 1% at the peptide level, using a decoy database. Protein abundance values obtained from the PD output, are based on the ratios of relative intensities of the iTRAQ reporter ions from the control and knockdown cells, released during MS/MS fragmentation of each peptide. The intensities were checked and found to be conformed to less than 40% coefficient of variation. The abundance values of proteins or the phosphopeptides for phosphoproteome experiments of the control and knockdown were normalized by dividing the abundance values of the proteins or the phosphopeptides (in phosphoproteome analysis) in each column by the column mean. The normalized values were then log2 transformed, and fold change in abundance for proteins were calculated. 
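A minimal R sketch of the normalization and fold-change computation just described is given below; the matrix and its values are hypothetical placeholders (the actual analysis used Proteome Discoverer abundance output), with the channel names taken from the iTRAQ labeling scheme above.

```r
# Hypothetical iTRAQ reporter abundances: rows = proteins (or phosphopeptides),
# columns = control/knockdown channels for the two batches (114/115 and 118/119).
abund <- matrix(c(1.8e6, 2.1e6, 9.5e5, 1.1e6,
                  7.2e5, 3.4e5, 6.9e5, 3.1e5),
                nrow = 2, byrow = TRUE,
                dimnames = list(c("prot1", "prot2"),
                                c("ctrl_114", "kd_115", "ctrl_118", "kd_119")))

# Normalize each column by its mean, then log2-transform
lognorm <- log2(sweep(abund, 2, colMeans(abund), "/"))

# log2 fold change (knockdown vs control), averaged over the two batches
log2fc <- rowMeans(lognorm[, c("kd_115", "kd_119")] - lognorm[, c("ctrl_114", "ctrl_118")])
print(log2fc)   # |log2FC| > 0.58 corresponds to the 1.5-fold cutoff applied next
```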
A 1.5-fold change in expression was considered to identify differentially expressed proteins and phosphopeptides. We used a single sample pair in duplicates for our study. Thus, as in the RNA analysis, to identify and add statistical strength to the differentially abundant proteins and phosphopeptides, the Prediction Interval (PI) model was applied to the log2 normalized abundance values of proteins or of phosphopeptides (for the phosphoproteome data). We analyzed the two replicates separately. The prediction interval (PI) was calculated from a linear regression model using the log2 values from the control sample, and the log2 abundance values of the proteins or phosphopeptides in the test sample were predicted based on the values in the control sample at a 95% prediction interval (upper and lower PI). For determining upregulated or downregulated proteins, we used the predict() function in R as in the transcriptomics data: protein up = protein_test > upper PI and (protein_test - protein_control) > 0.58; protein down = protein_test < lower PI and (protein_test - protein_control) < -0.58 (0.58 being the log2 equivalent of the 1.5-fold cutoff). The z-score for each protein was calculated as described above for the transcriptome data. For further analysis and interpretation, proteins with ≥2 unique peptides or at least 2 peptide-spectrum matches (PSMs) were considered. Single-peptide identifications with 1 PSM per peptide were included only if supported by concordant transcript-level evidence. All phosphopeptides considered in the subsequent analysis had ≥2 PSMs and less than 40% coefficient of variation.

Cytokine Array
Cytokine profiling was performed using the Proteome Profiler Human XL Cytokine Array kit (ARY022B, R&D Systems). Arrays were incubated overnight at 4°C with 500 µl of the conditioned media (CM) from GPR56 control or knockdown U373 cells, and the arrays were processed as per the manufacturer's instructions. The array images were analyzed using ImageJ to determine signal intensities. The mean pixel density of the cytokine/chemokine spots of the GPR56 knockdown CM was expressed relative to the GPR56 control CM values to determine differential expression.

Macrophage Infiltration Assay
The conditioned medium of GPR56 control or knockdown U373 cells cultured in parallel at 70% confluency in DMEM/10% FBS was collected. U937 cells were cultured in RPMI1640 medium and induced with 5 nM PMA (phorbol 12-myristate 13-acetate) (P1585, Sigma-Aldrich) for 48 hours as per the protocol described earlier (41,42) to differentiate them into macrophages. To check the level of macrophage infiltration, a standard transwell Boyden chamber invasion assay was performed (43), where U937-derived macrophages (4x10^5) seeded in the top chamber (with 10% FBS) migrated towards secreted chemoattractants in the conditioned media (containing 10% FBS) from GPR56 KD cells in the bottom chamber. Cell migration across the membrane was evaluated as described for 'Invasion assay'.

Network Analysis
Differentially altered gene and protein entities from the top five deregulated pathways, altered cytokines in the CM of the GPR56 knockdown cells, and the proteins with differentially altered phosphopeptides detected in our analysis were merged and used to examine protein-protein interactions. STRING database version 11 (http://string-db.org) was applied to construct the network. All the entities were uploaded into the STRING tool, and the default interactions were extracted with a confidence score >0.4.
Then, the network was visualized using Cytoscape software (3.8.0). GPR56 Expression in GBM Is Heterogeneous With Lowered Expression in Hypoxic Regions Early studies have shown that GPR56, an aGPCR, is overexpressed in GBM (20,34). However, when we reassessed the mRNA expression of GPR56 by qRT-PCR in multiple GBM biopsy specimens (n=28) and at the protein level using immunohistochemistry (IHC) on a commercial tissue microarray (n=27), we found expression of GPR56 to be heterogeneous across specimens (Figures 1A-C and Supplementary Tables 1, 2). Moreno et al. recently reported similar findings and identified that GPR56 expression varies between subtypes of GBM (26). To understand the heterogeneous expression of GPR56 further, we examined the expression pattern across different tumor compartments, using the spatial gene expression data from Ivy GAP resource (7). Figure 1D shows the abundance of GPR56 transcripts across various tumor areas. As seen, GPR56 was found to be distinctly upregulated in the cellular and infiltrating tumor area (CT and IT), while it was downregulated in microvascular proliferating (MVP) and pseudopalisiding cells around necrosis (PAN) areas. The IHC analysis also supported that GPR56 expression was low in the PAN (Red arrows) MVP regions and high in CT areas (green arrows). In addition, we examined the public domain GBM single-cell gene expression dataset (8) to understand its expression in different neoplastic cell types better. The single-cell neoplastic data identified four major groups of neoplastic cells, identified by six gene-based modules, i.e., hypoxia-independent (MES1-like) and hypoxia-dependent mesenchymal (MES2-like), astrocytic (AClike), oligodendroglial lineage (OPC-like), and neural-stem and progenitor type (NPC1-like and NPC2-like) as well as two cell cycling (CC) modules (G1S and G2M -CC) ( Figure 1E). We observed that in neoplastic cells, GPR56 expression was high in cycling (G1S) and in AC-like, OPC-like, and NPC-like cells but was lost in mesenchymal-like cells (MES1, MES2), particularly in hypoxia dependent MES2-like cells. Multi-Omics Analysis of GPR56 Knockdown U373 GBM Cells Reveal Cellular and Molecular Changes Consistent With the Mesenchymal Transition We generated GPR56 knockdown U373 GBM cells to do transcriptomic, proteomic, and phosphoproteomic analysis in order to gain insight into the molecular changes associated with GPR56 function. GPR56 expression levels were assessed following isolation of various single cell clones from shRNA GPR56 transfected cells using Western blotting. Figure 2A demonstrates that there was a significant reduction in GPR56 levels in clones 3 and 4, as assessed by loss of signal intensity for full-length GPR56 at 72 kDa and the processed N-GPR56 at 60 kDa. We have used shRNA GPR56 knockdown, clone 3 (GPR56-KD) for the rest of the analysis. Confocal analysis of control (GPR56-NC) and GPR56-KD cells revealed punctate staining for GPR56 in U373 control cells ( Figure 2B, top panel) and loss of staining in GPR56-KD (bottom panel). This was further verified at the mRNA level using qRT-PCR ( Figure 2C). We next tested if this alteration in GPR56 expression affected chemotactic motility in invasion potential of GPR56-KD cells using a matrigel invasion chamber and compared invasion to the respective control cells. Figure 2D shows a significant increase in invasive behavior of the GPR56-KD cells. Similarly, wound closure was accelerated in GPR56 knockdown cells compared to control cells using a scratch wound assay. 
At 24 and 48h, about 69.3% ± 3.2%, and 94.06% ± 10.9% of the wound got closed respectively in the GPR56 knockdown cells compared to 46.1% ± 11.7% and 70.6% ± 4.8%, respectively during the same period in the GPR56 control cells ( Figure 2E). Additionally, functional assays were also performed with GPR56 shRNA clone 4, and the results further support the observations made with clone 3 (Supplementary Figure 1). These data indicate that GPR56 silencing plays a role in the invasive and migratory properties of the GBM cells. To gain insights into molecular changes associated with GPR56 knockdown response, we carried out transcriptome, proteome, and phosphoproteome analysis of GPR56 control and knockdown cells. Figure 3 shows differential expression of RNA and proteins and proteins with differential phosphorylation in these cells. The transcriptome analysis of GPR56 knockdown U373 cells showed 1010 differentially expressed transcripts with 697 upregulated and 313 downregulated ( Figure 3A and Supplementary Table S3A). With the protein samples obtained from the same GPR56 knockdown cells, we used a liquid chromatography-based tandem mass spectrometry (LC-MS/MS) approach to identify differentially altered proteins and phosphoproteins. We could access 2221 proteins in the proteomic analysis (< 1% FDR at peptide level); 46 were differentially expressed, out of which 28 were upregulated, and 18 were downregulated. ( Figure 3B and Supplementary Table S3B). The total coverage of proteins observed was relatively low (presumably due to the LC-MS/MS analysis carried out without pre fractionation of the tryptic peptides before mass spectrometry, as discussed under Methods). However, for all differentially expressed proteins identified, we observed that there was a positive correlation between protein and transcript abundances (average r=0.45; Figure 3D). The phosphoproteomic analysis ( Figure 3C) identified 4471 phosphopeptides mapping to 1997 proteins; about half of these were new identifications, not detected in the global proteomic analysis. Out of 1997, 70 proteins had differentially altered phosphopeptides, with 36 showing elevated levels and 34 lower phosphorylation levels as compared to the control (Supplementary Table S3C). The phosphorylation was distributed across all types of phosphorylation sites (Serine, threonine, and tyrosine). Phosphopeptide differentials could originate either due to a change in the protein abundance per se or due to phosphorylation events as such. For this purpose, we examined the correlation between corresponding protein abundances (using global protein analysis) and the abundances of the phosphopeptides mapping to the respective proteins (average r=0.4; Figure 3E). We observed that the abundance of some phosphopeptides matched with corresponding proteins with an overall change in abundance. In contrast, 30 phosphopeptides were differentially altered, while the overall abundance of the corresponding protein itself remained unchanged, implying differential phosphorylation of these sites, which potentially leads to functional alterations of the proteins (Supplementary Table S4). For functional annotations of differential expressions observed, we used RNA-seq data (differentially expressed genes; DEGs) and the PANTHER database (Version 15.0) on account of higher coverage and a larger number of differential entities observed. 
In the molecular function group, the DEGs were enriched mainly in DNA binding, catalytic activity, molecular regulators, and transcriptional regulator activity ( Figure 4A). Most DEGs are catalogued within the biological processes mapped to cellular processes, biological regulation, metabolic processes, and signaling. The top signaling pathways enriched are inflammation mediated by chemokine and cytokine signaling pathway, CCKR signaling pathway, angiogenesis, WNT, integrin pathway ( Figure 4B). The gene entities mapping to these pathways are given in the Table (Figure 4C). These enriched pathways are known to be associated with the mesenchymal transition or status in general (44). We have indeed detected several genes associated with EMT in general (45), such as TWIST1, MMP2, ITGB1, FN1, TG2, etc., from the transcriptome data, with details provided in the Supplementary Table S5. Based on the transcriptome data on GBM generated by the TCGA group, Verhaak et al. (3) defined four subtypes of GBM with distinct gene signatures, namely PN, N, CL, and MES subtypes, with the PN and the MES subtypes being more commonly observed. Given the enrichment of pathways associated with mesenchymal transition, we performed singlesample gene set enrichment analysis (SSGSEA) using these signatures of the molecular subgroups of GBM. We observed that differential gene expression data from GPR56 knockdown cells were enriched positively for genes associated with the MES subtype and negatively to the PN subtype ( Figure 4D, Supplementary Table S6). This observation is consistent with an earlier report with GBM cells (26). Further, we looked at the differentially phosphorylated protein group (Supplementary Table S4; n=30 proteins) in GPR56 silenced cells, and we found that it included proteins that have been reported to be involved in epithelial-mesenchymal transition (EMT), in general ( Table 3). In GBM, proneural-mesenchymal transition (PMT) is considered the equivalent of epithelial-mesenchymal transition associated with other aggressive cancers. Taken together, these results are consistent with an association of loss of GPR56 with the mesenchymal transition. Tumor cells associated with mesenchymal transition are believed to promote tumor development by releasing growth factors and cytokines (62). Considering the mesenchymal-like shift observed upon loss of GPR56, we sought to examine the secretome of GPR56 knockdown cells. For this purpose, the conditioned medium (CM) of GPR56 control and knockdown cells were analyzed using cytokine arrays. Our analysis showed increased secretion of tumor invasion markers, proinflammatory cytokines, pro-angiogenic factors in the CM of GPR56 knockdown cells ( Figure 5A). Further, one of the effects of cytokines released by mesenchymal cells is their ability to recruit immune cells (63). To determine the role of GPR56 expression by GBM cells in recruiting immune cells, we performed an invasion assay using PMA-induced U937macrophage cells in the Boyden top compartment with the CM of GPR56 knockdown cells as the chemotactic medium in the bottom chamber and compared invasion infiltration of macrophage cells to the CM of control cells. We observed the conditioned medium from the GPR56 knockdown cells enhanced infiltration of macrophages ( Figure 5B). Thus, both cytokine array data and macrophage infiltration assay are consistent with the enrichment of soluble pro-inflammatory mediators in the CM of GPR56 knockdown cells. 
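Returning to the subtype scoring step described above (ssGSEA against the Verhaak PN and MES signatures), a minimal sketch using the Bioconductor GSVA package might look as follows. The expression matrix and gene vectors are hypothetical placeholders, and depending on the installed GSVA version the newer parameter-object interface (ssgseaParam) may be required instead of the method argument.

```r
library(GSVA)

# Placeholder log2-CPM matrix: genes x samples (control and GPR56 knockdown)
expr <- matrix(rnorm(6000), nrow = 1000,
               dimnames = list(paste0("gene", 1:1000),
                               c("NC_1", "NC_2", "NC_3", "KD_1", "KD_2", "KD_3")))

# Placeholders standing in for the Verhaak et al. proneural and mesenchymal signatures
signatures <- list(
  PN  = c("gene10", "gene25", "gene77"),
  MES = c("gene5",  "gene42", "gene90")
)

# One enrichment score per signature per sample; compare KD vs NC columns
scores <- gsva(expr, signatures, method = "ssgsea")
print(scores)
```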
GPR56 and TG2 Interaction May Have a Determinant Role in the Mesenchymal Transition of GBM Interestingly, our multi-omics findings revealed that upon loss of GPR56, TG2 a known ligand of GPR56 in cancer was upregulated ( Figures 6A, B), suggesting coordinated expression of the two. The inverse expression level of GPR56 and TG2 was also observed in the Ivy Gap GBM RNA-seq data set ( Figure 6C). We validated this inverse expression using IHC in GBM specimens ( Figures 1E and 6D), supporting the earlier finding of an 'antagonistic' relationship between the two proteins in melanoma (29). Based on the finding that GPR56 expression was significantly lower in MVP and PAN areas (more distinct in MVP; Figure 6C), which are usually hypoxic (7), we queried plausible association between hypoxic state and expression of GPR56 and TG2. In solid tumors generally, hypoxia triggers molecular events leading to epithelial to mesenchymal transition (EMT), resulting in metastasis (64). In GBM, EMT is not prevalent as observed in other tumors, and hypoxia can cause similar molecular changes leading to the transition of the less aggressive proneural GBM to highly aggressive, chemo and radiation-resistant mesenchymal GBM, referred to as proneural to mesenchymal transition (PMT) (44). Literature survey indeed revealed that the GPR56 gene was found to be a target of hypoxia-inducible factors, and hypoxia downregulates GPR56 in some cancer cells (65,66). Thus, we induced hypoxia (1% O 2 for 72h) in three GBM cell lines in vitro and analyzed the expression of GPR56 and TG2 ( Figure 6E). In line with earlier reports and our observation in Ivy Gap, we observed lowered levels of GPR56 and elevation of TG2 expression in all GBM cell lines cultured under hypoxic conditions compared to normoxia (21% O 2 ). Similarly, loss of GPR56 is associated with the mesenchymal transition (26), while inhibition of TG2 has been reported to reverse mesenchymal differentiation of glioma stem cells (67). However, the present analysis does not provide any evidence for the direct role of hypoxia-inducing factors (HIF) in the co-regulation of GPR56 and TG2 expression and needs to be verified with direct investigations using HIF knockout cells. Nevertheless, together these observations permit us to propose a link between tumor hypoxia, reciprocal expression of GPR56 and TG2, and mesenchymal transition. To gain further insights and assess protein interaction partners, we used differentially expressed gene and protein entities from the top five deregulated pathways (n = 55), all proteins with differentially altered phosphopeptides (n = 70), and altered cytokines (n = 16), from knockdown cells and constructed protein-protein interaction network using STRING (Supplementary Table S7). With a total of 138 altered entities, the STRING output captured 155 interactions (p-value: 1.0e-16), which were then visualized in Cytoscape. Our network indicated that GPR56-TG2 interaction is a core component from among all the entities belonging to pathways and processes that are aligned with the mesenchymal transition, as discussed above. Together, all our observations discussed above support a regulatory link of the GPR56 and TG2 interaction in the mesenchymal transition in GBM (Figure 7). DISCUSSION In this study, beginning with the heterogeneous expression of GPR56 in GBM specimens, we examined its expression in histologically distinct areas of GBM tumor tissue (7) as well as in single-cell expression data from GBM (8). 
Further, upon silencing of GPR56 in GBM cells, we performed a detailed functional and multi-omics molecular analysis of the GPR56 knockdown GBM cells to understand the consequential molecular changes and thereby the role of GPR56 in GBM. The main findings of our analysis are: 1. GPR56 expression in GBM is heterogeneous (Figures 1A, B), supported by Ivy GAP spatial gene expression data showing higher expression in the cellular tumor and infiltrating tumor regions and lower expression in the MVP and PAN regions (Figures 1D, E); the single-cell expression data of GBM (8) further indicate low expression of GPR56 in the hypoxia-dependent mesenchymal cell types (MES1, MES2) (Figure 1E). 2. GPR56 silencing in the GBM cell line U373 increased invasive and migratory behavior (Figures 2B, C) and resulted in changes in gene, protein, and phosphoprotein expression mapping to pathways and processes consistent with the mesenchymal state (Figures 3 and 4; Table 3). 3. The GPR56 knockdown cells also revealed an altered pattern of chemotactic and pro-inflammatory cytokine release, which may mediate enhanced recruitment of immune cells (Figures 5A, B). 4. Interestingly, we observed the expression of TG2 to be reciprocal to that of GPR56, in the knockdown cells as well as across spatial tumor areas (Figure 6); similarly, hypoxic conditions reduced GPR56 expression and increased TG2 expression in GBM cells in vitro (Figure 6E), consistent with the 'antagonistic' relationship between the two proteins earlier reported in melanoma (29). Thus, these observations point towards the appearance of the mesenchymal state on suppression of GPR56 expression, and the interdependent reciprocal expression of GPR56 and TG2 may be a key feature in this process. We have tried to integrate all these data along with literature information in order to tease out molecular insights into the possible interplay of GPR56 and TG2 in the transitioning of cells towards the mesenchymal state. The mesenchymal transition involves complex and highly coordinated molecular changes leading to altered cell adhesion and migration behavior of the cells, which are governed by a number of master regulators (MRs), including intracellular transcription factors such as STAT, Gli, and ZAB, induced by TNFα-mediated nuclear factor kappa B (NF-kB) activation (44). In GBM, GPR56 is shown to inhibit the NF-kB pathway (26), while TG2 has been reported to activate it (68)(69)(70). On the other hand, TG2 expression itself is activated by NF-kB, thus creating an auto-feedback loop (71). Given the regulatory loop between TG2 and NF-kB and the apparent role of GPR56 in limiting TG2 levels, GPR56 may be an important regulator of mesenchymal transition. Hypoxic conditions not only suppress GPR56 expression but may be linked to other molecular changes associated with the transition of the cell towards the mesenchymal phenotype. Thus, integrating our observations and the published information, we propose a putative model explaining the interplay of GPR56, TG2, and other interactors that may form an essential molecular network underlying MES transition in GBM (Figure 8).

TABLE 3 | List of proteins detected with differential phosphorylation in GPR56-KD cells and known to be associated with epithelial-to-mesenchymal transition in general. (Columns: Gene Symbol; Expression in GPR56-KD cells; Function in EMT; PMID/Publication.)
Phosphoproteomics data were analyzed as described under Methods to identify proteins with altered phosphopeptides but not altered overall abundance, indicating a change in their phosphorylation status in the GPR56-KD cells. These were mapped to the epithelial-mesenchymal transition process as per published literature, and the proteins are listed in the Table.

In support of the scheme shown in Figure 8, in GBM proneural cells, GPR56 is present at higher levels and may negatively regulate the NF-kB pathway-driven mesenchymal transition, as reported by Moreno et al. (26). Furthermore, it has been reported that PN glioma stem cells, on overexpression of TG2, upregulate mesenchymal MRs and mesenchymal markers (33). The low level of TG2 in the PN state is thus consistent with the "suppression" of the mesenchymal state as an additive mechanism. Thus, the inverse expression of GPR56 and TG2, through internalization of TG2 by GPR56 and its degradation as reported in melanoma (29), may be an important aspect even in GBM. TG2 protein exists extracellularly, on the cell surface, as well as intracellularly. Further, TG2 can assume two distinct, mutually exclusive conformations, open or closed, regulated by binding of the allosteric modulators Ca2+ and GTP. Conformation-sensitive FRET experiments in live cells have shown that TG2 assumes a 'closed' conformation when localized in the perinuclear area and an 'open' conformation when localized near the plasma membrane. This may suggest that a localized conformational transition (to the open form) occurs, presumably linked to Ca2+ channel activation, provoking a shift in intracellular Ca2+/GTP concentration (72). The 'closed', signaling-active, GTP-bound conformation of TG2 can drive intracellular signaling, whereas the 'open', transamidase-active conformation of TG2 can enhance cancer cell survival after externalization and through altering ECM assemblies (32). Indeed, stromal TG2 (ECM) can promote tumor growth, and this may be suppressed by GPR56 expression on tumor cells, as has been reported in melanoma cells, whereby GPR56-TG2 binding results in internalization of the latter, followed by degradation through the endosomal pathway as discussed above (29). Thus, both transcriptional suppression and protein degradation may contribute to the negative regulation of TG2 in PN cells. Hypoxia may trigger several molecular changes through the induction of hypoxia-inducible factors, possibly as an adaptive response. These include: 1. Deregulation of Ca2+ homeostasis, which may be facilitated by changes in Ca2+ signaling (influx) elements that have been reported to accompany GBM progression (73). 2. Release of inflammatory cytokines and their activation, leading to molecular changes consistent with the mesenchymal state of the cells. For example, Moreno et al. (26) showed that the cytokine TNFα promoted a decrease in GPR56 expression, which may relieve the inhibition of the NF-kB pathway-mediated mesenchymal transition in GBM. Cytokines secreted in the microenvironment and known to be involved in EMT in general were also found to be upregulated in the transcriptome data and may induce TG2 expression (74,75). The elevated TG2 may lead to intracellular activation of the NF-kB pathway as an additive mechanism and induction of other MES regulators. 3.
Ca2+-mediated conformational change of TG2, accompanied by its extracellular translocation, may enhance the extracellular TG2 pool, which may also be contributed to by endothelial cells and immune cells, a major source of secreted TG2. Together, this may result in remodeling of the ECM through increased crosslinking of ECM proteins by TG2, interaction with cell surface integrins, and activation of integrin signaling. 4. Intracellular TG2 in its GTP-bound 'closed' form (31) may also directly interact with integrins and modulate integrin signaling to bring about changes in migration-related downstream events. All these events may contribute to and have a role in mesenchymal transition in GBM. In summary, our study reveals some new insights on the regulatory role of GPR56 and TG2 in the mesenchymal transition of GBM. TG2 has been documented in the literature as a natural ligand of GPR56 (30). Our findings suggest that GPR56 may play a role in regulating the dynamics of TG2 levels and activity in tumor cells during the mesenchymal transition. Their reciprocal expression may be the dominant determinant of the ECM architecture on the one hand and, intracellularly, of NF-kB mediated activation of pro-mesenchymal signaling cascades on the other. Elevation of TG2 levels concurrent with the loss of GPR56 is most likely due to transcriptional activation of the TG2 gene as well as its prolonged persistence in the ECM due to the absence of GPR56-mediated endosomal internalization and degradation. Since TG2 is already regarded as a potential therapeutic target in cancer (33,76), as its expression promotes chemo/radioresistance and invasive functions by inducing mesenchymal transition, a more in-depth understanding of the interplay of these two molecules from a therapeutic perspective would be important to investigate. However, there is no clarity on the regulatory factors or mechanisms that underlie suppression of GPR56 expression in the mesenchymal state. Although TNFα seems to promote downregulation of GPR56 in mesenchymal GBM cells (26), the exact mechanism is not clearly understood. Association with the hypoxic condition and involvement of hypoxia-inducible factors (HIF1 and HIF2) are strong indicators emanating from our analysis, but at this point, they are more of a probability to be experimentally confirmed. There could also be other regulatory factors involved. For example, miRNA-10a has been reported to be associated with the mesenchymal state and temozolomide resistance in GBM cells (77), and interestingly, a miRNA-10a/GPR56 target interaction is revealed by the miRWalk target prediction tools, raising the possibility of its role in the regulation of GPR56 expression. Thus, GPR56 and its regulation present an attractive subject for future investigations.

FIGURE 7 | Protein-protein interaction network legend: differentially altered entities (Supplementary Table S3C) and the cytokines released (Figure 5A) were used to construct the network using the STRING tool. Green indicates EMT-related molecules; pink, phosphoproteins; blue, both (phospho- and EMT-related); grey, others. Direct key node interactors of GPR56 and TG2 are highlighted in red.

CONCLUSION

Expression of GPR56 in Glioblastoma (GBM) was found to be heterogeneous, with the heterogeneity arising from expression of the receptor in spatially different tumor tissue and cell types (high in proliferating cells and low in hypoxic mesenchymal cells). On the basis of cellular assays and multi-omics analysis of GPR56 silenced U373 GBM cells, we infer that GPR56 plays a vital role in GBM cell invasion, migration, and mesenchymal transition.
Furthermore, analysis of GPR56 silenced cells and spatial gene expression data of GBM tumors also revealed the expression of Transglutaminase 2 (TG2, a known interactor of GPR56) to be inversely correlated with that of GPR56. GBM cell lines cultured under hypoxic conditions further supported the reciprocal regulation of the expression of the two genes. Integrating these observations, we have proposed a putative mechanistic link between the inverse expression of GPR56 and TG2, the hypoxic niche, and the regulation of mesenchymal transition in GBM.

DATA AVAILABILITY STATEMENT

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: The RNA-seq data used in this study are available at the Gene Expression Omnibus (NCBI-GEO) database (http://www.ncbi.nlm.nih.gov/gds) under the accession ID GSE192874. The mass spectrometry proteomics data used in this study have been deposited to the ProteomeXchange Consortium (http://proteomecentral.proteomexchange.org) via the PRIDE (http://www.ebi.ac.uk/pride) partner repository with the dataset identifier PXD031569.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Institutional (MSCTR) Ethics Committee, NHH/MEC-CL-2015/384(A).

AUTHOR CONTRIBUTIONS

RG performed and coordinated various experiments, integrated and analyzed data, and wrote the manuscript. PS carried out transcriptomic and bioinformatics analysis. DN assisted in IHC and WB. AE and GS helped with proteomic and mass spectrometry experiments. KC performed surgeries on GBM patients and provided patient samples and patient data. AL evaluated histology and immunohistochemistry sections. KV participated in data analysis and interpretations. LB generated stable knockdown cells and performed their initial characterization. DA and VK supervised the development of knockdown cells by LB, helped in data interpretation, and critically reviewed and edited the manuscript. NS supervised public domain and experimental transcriptomics and bioinformatics analysis, and jointly worked with RS on data interpretation and overall development of the study. RS conceptualized, designed, and supervised the overall study, total data analysis, and integration of the results, and wrote the manuscript. All authors have read and approved the final version of the manuscript.
An Improved Upper Bound for the Ground State Energy of Fermion Lattice Models

We present an improved upper bound for the ground state energy of lattice fermion models with a sign problem. The bound can be computed by numerical simulation of a recently proposed family of deformed Hamiltonians with no sign problem. For one dimensional models, we expect the bound to be particularly effective, and practical extrapolation procedures are discussed. In particular, in a model of spinless interacting fermions and in the Hubbard model at various filling and Coulomb repulsion we show how such techniques can estimate ground state energies and correlation functions with great accuracy.

In the study of strongly interacting quantum systems, the determination of ground state properties is a basic step in the analysis of many physically relevant problems. To accomplish this task, several powerful numerical methods are available, and considerable effort has been devoted to their development in recent years [1]. However, as is well known, in fermion models, numerical simulations face serious obstructions that dramatically reduce their performance [2]. In this Brief Report, we propose an improved upper bound for the ground state energy and an effective strategy to circumvent this difficulty in a certain class of one dimensional models. For moderate lattice size, Exact Diagonalization methods are possible [3] with no additional problems in the fermionic case. Typically, these are calculations based on the Lanczos algorithm, and the ground state of systems with up to several thousand states can be determined. For larger state spaces, other methods must be considered. In particular, the most flexible tool is Monte Carlo simulation, which evaluates quantum expectation values by a careful stochastic sampling of the configuration space. Often, when the model under study is fermionic, the measure to be sampled is non-positive. This fact can be related to the intrinsic anticommuting nature of the Fermi creation and annihilation operators, but it is more general and appears also in bosonic contexts, like quantum spin models with nontrivial exchange terms [4]. This situation, usually described as the "sign problem", is quite serious and standard algorithms simply fail. Several proposals are available to overcome or at least reduce the sign problem in specific problems, in particular realistic models in more than one dimension. One of the most powerful techniques is the Constrained Path Monte Carlo Method [5], where the sign problem is eliminated by introducing a physically motivated guiding wave function that approximates the nodal structure of the exact ground state. A numerical analysis is then possible and expectation values can be computed. The approximation is partially controlled and, for instance, the estimates of the ground state energy are known to provide rigorous upper bounds. Other alternatives are the Fixed-Node approximation, the Projection Quantum Monte Carlo, path integral representations, or the Auxiliary Field Monte Carlo [6]. Of course, trial wave functions are useful also in cases with no sign problem, but there their purpose is to accelerate Monte Carlo simulations, and the final results are in principle independent of the choice of the guiding function. Our first goal will be that of providing an improved general upper bound for the energy of fermion lattice models with a sign problem; the bound is computable by numerical simulations.
As a second aim, we ask whether one dimensional problems allow for any special simplification and practical treatment of the sign problem in the spirit of [7]. In particular, we look for methods that do not rely on any biased approximation. A hint of a positive answer comes from the remark that in one dimension the sign problem appears to be somewhat artificial. For instance, we can consider a model where the standard fermion hopping term is the unique source of alternating signs in the off-diagonal matrix elements of the Hamiltonian. In this case, the sign problem disappears as soon as open boundary conditions are used [8]. Its manifestations are therefore expected to be completely negligible in observable quantities that admit a thermodynamic limit independent of the boundary conditions. Possible examples are the ground state energy per site and finite correlation lengths. For this reason, we are led to believe that in such cases the finite size effects of the sign problem should be under control without much effort. To begin our analysis, let us consider a $d$-dimensional hypercubic lattice with $V = L_1 L_2 \cdots L_d$ sites and periodic boundary conditions. Let us assign to each site $i$ a pair of spinless fermion creation and annihilation operators $c_i^\dagger$ and $c_i$ with canonical algebra $\{c_i, c_j^\dagger\} = \delta_{ij}$, $\{c_i, c_j\} = 0$. Let us denote a configuration by $n$, the set of occupation numbers at all the lattice sites. We work in the occupation number basis with states $\{|n\rangle\}$ such that $c_i^\dagger c_i |n\rangle = n_i |n\rangle$ and assume a given ordering of the sites to fix signs. We want to study Hamiltonians of the form $H = -t \sum_{\langle i,j\rangle} \left(c_i^\dagger c_j + c_j^\dagger c_i\right) + V(n)$, where $\langle i,j\rangle$ denotes a pair of connected sites, for instance nearest neighbours. The interaction term is taken to be diagonal in the $\{|n\rangle\}$ basis, with $V$ being a real function. The fermion number $N = \sum_i n_i$ is conserved and the analysis of the spectrum can be carried out in each sector with definite $N$. In the one dimensional case, $V = L_1 \equiv L$, we can assume the canonical site ordering $|n\rangle = (c_1^\dagger)^{n_1} (c_2^\dagger)^{n_2} \cdots (c_L^\dagger)^{n_L} |0\rangle$, where $|0\rangle$ is the empty state. The paired sites are $i, i+1$ modulo periodic boundary conditions. The off-diagonal matrix elements of $H$ are then negative and Monte Carlo simulations do not have sign problems, except when the fermion number $N$ is even; in this case, there exist matrix elements with the wrong sign, precisely those connecting pairs of states that differ by a hopping through the boundary. As shown in [9], it is possible to introduce a new Hamiltonian, $H_{\rm eff}$, with no sign problem. Its ground state energy $E_0^{\rm eff}$ can be computed by numerical methods and provides a rigorous upper bound for the ground state energy $E_0$ of $H$. In [10], this result has been extended by introducing a family of Hamiltonians $\{H(\gamma)\}$ depending on a real free parameter $\gamma$. The construction starts from the hopping and interaction terms of Eq. (2) together with a diagonal matrix $D$ with entries $D_{ii}$; $D$ is a trial wave function useful to accelerate convergence to the ground state, and the new Hamiltonian $H(\gamma)$ is built from these ingredients. If we call $E_0(\gamma)$ the ground state energy of $H(\gamma)$, then it is possible to show that $E_0 \le E_0(\gamma)$ for all $\gamma \ge 0$. In the Monte Carlo approach adopted in [10], it is also emphasized that a strictly positive $\gamma$ must be used and that an extrapolation to $\gamma = 0$ is required for best accuracy. We now show that the function $E_0(\gamma)$ is concave in $\gamma$: for $0 \le \alpha \le 1$ and any two real $\gamma_1, \gamma_2$, we have $E_0(\alpha\gamma_1 + (1-\alpha)\gamma_2) \ge \alpha E_0(\gamma_1) + (1-\alpha) E_0(\gamma_2)$. A simple general proof can be given by means of the variational characterization of $E_0$. The dependence of $H(\gamma)$ on the parameter $\gamma$ is linear, so we can write $H(\gamma) = H_1 + \gamma H_2$ with Hermitian, $\gamma$-independent $H_1$ and $H_2$.
Then, if $S$ is the set of normalized states, we have $E_0(\gamma) = \min_{\psi \in S} \langle\psi| H_1 + \gamma H_2 |\psi\rangle$; being the minimum of a family of affine functions of $\gamma$, $E_0(\gamma)$ is concave. Concavity means that the incremental ratio is a decreasing function. Computing the incremental ratio on the intervals $[-1, 0]$ and $[\gamma_1, \gamma_2]$ with arbitrary $0 \le \gamma_1 < \gamma_2$, we find $E_0(0) - E_0(-1) \ge \left[E_0(\gamma_2) - E_0(\gamma_1)\right]/(\gamma_2 - \gamma_1)$. If we furthermore assume $E_0(\gamma)$ to be differentiable at $\gamma = 0$ and take the limit $\gamma_2, \gamma_1 \to 0$, we can replace the upper bound $E_0 \le E_0(0)$ by the improved one $E_0 \le E_0(0) - E_0'(0)$, which expresses the geometrical fact that a concave curve lies below its tangent at any point. We remark that $E_0'(\gamma)$ can have finite jumps. If they can be excluded a priori, then the above proof also follows by using second order perturbation theory. The concavity constraint suggests that the function $E_0(\gamma)$ could be particularly well behaved. For this reason we also try to obtain an extrapolation of its value at $\gamma = -1$ by assuming it smooth enough. To be fair, this possibility must be considered just a practical recipe. Nonetheless, we shall give numerical arguments to support its robustness. We begin with a simple model of interacting spinless fermions, with nearest-neighbour hopping and a nearest-neighbour interaction of strength $V$, diagonal in the occupation number basis. For lattice sizes $L = 8, 12, 16$ and $20$, at half filling and for several interaction couplings $V$, we determine by Lanczos diagonalization the exact ground state energy $E_0$, the bound $E_0(0)$, the derivative $\eta = E_0'(0)$ and the improved bound $E_0^{\rm (imp)} = E_0(0) - \eta$. Moreover, we also attempt an extrapolation to $\gamma = -1$ starting from the values of $E_0(\gamma)$ in the range $\gamma \in (0, 1)$. In principle, any such extrapolation can be very dangerous. On the other hand, the extrapolated $E_0^{(m)}(-1)$ obtained by a polynomial fit of degree $m$ displays a flat behaviour with small oscillations, unless $m$ is too large, when the extrapolation process breaks down. The typical degree of the best polynomial is 8. To accelerate the convergence and to determine the best estimate, we also used Aitken's algorithm [11], which improves a converging sequence $\{z_n\}$ by replacing it with $\{z'_n\}$ defined by $z'_n = z_n - (\Delta z_n)^2 / \Delta^2 z_n$, where $\Delta z_n = z_{n+1} - z_n$ and $\Delta^2 z_n = \Delta(\Delta z_n)$. We call $E_0^{\rm (ext)}$ the resulting prediction. In Fig. 1, we show the results for $E_0$, $E_0(0)$, $E_0^{\rm (imp)}$ and $E_0^{\rm (ext)}$ in terms of their relative percent accuracy, defined as $100 \cdot |\delta E / E_0|$, where $\delta E$ is the error in the ground state energy estimate. The improved bound is significantly better than $E_0(0)$. Both $E_0(0)$ and $E_0^{\rm (imp)}$ converge to the exact value as the system size $L \to \infty$, as expected from our initial discussion of the asymptotic irrelevance of the boundary effects. The extrapolated bound $E_0^{\rm (ext)}$ is quite precise, with errors around the permille level. A similar analysis can be done for the measurement of observables. Unfortunately, for these we cannot derive a simple bound like the one for the energy and, in fact, random Hermitian operators easily produce wild behaviour for $\gamma \in (-1, 0)$. On the other hand, for operators associated with physically meaningful quantities independent of the boundary conditions we can expect a mild dependence on $\gamma$. As a typical non-local observable, we consider the integrated staggered correlation function $S(\gamma)$. In Fig. 2, we show how the exact values are very accurately reproduced by the extrapolated values. The relative accuracy is always well below the percent level. In Fig. 3, we show the behaviour of the functions $E_0(\gamma)$ and $S(\gamma)$ at $V = 1$ for the system with $L = 16$, together with an 8-th degree polynomial fit to emphasize their smoothness. For large systems, the upper bound must be computed by numerically extrapolating $E_0(0)$ and $E_0'(0)$.
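To make the acceleration step concrete, the following minimal sketch (in Python, applied to an artificial geometric sequence rather than the actual Lanczos data) implements the Aitken Δ² transformation described above:

```python
import numpy as np

def aitken_delta2(z):
    """Aitken's Delta^2 acceleration: z'_n = z_n - (dz_n)^2 / d2z_n."""
    z = np.asarray(z, dtype=float)
    dz = z[1:] - z[:-1]       # forward differences Delta z_n
    d2z = dz[1:] - dz[:-1]    # second differences Delta^2 z_n
    return z[:-2] - dz[:-1] ** 2 / d2z

# Illustrative example: a geometrically converging sequence z_n -> 1
n = np.arange(10)
z = 1.0 + 0.5 ** n
print(aitken_delta2(z))       # accelerated sequence, much closer to the limit 1
```

Applied to the sequence of polynomial extrapolations $E_0^{(m)}(-1)$, the same transformation supplies the accelerated estimate denoted $E_0^{\rm (ext)}$ above.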
In practice, $E_0(0)$ and $E_0'(0)$ can be obtained from a straightforward Monte Carlo simulation at variable $\gamma$. To explore the feasibility of this proposal, we perform such a study for free fermions ($V = 0$) on a lattice with $L = 40$ sites at half filling. Here the exact ground state energy is known, $E_0/2t = -\cot(\pi/L) \simeq -12.706$, and can be used as a check. We use Green Function Monte Carlo with Stochastic Reconfiguration [12,13]. We run simulations with a variable number of walkers in order to extrapolate to the infinite population size limit. In Fig. 4 we show the estimated value of $E_0(\gamma)$ at several positive $\gamma$ values together with a simple parabolic fit. The estimated value of the bound is $E_0(0) - E_0'(0) = -12.61(1)$, about 1% off the exact value. Similar results are obtained by studying the one dimensional Hubbard model. Denoting by $\uparrow, \downarrow$ the two spin degrees of freedom, the Hamiltonian reads $H = -t \sum_{i,\sigma} \left(c_{i\sigma}^\dagger c_{i+1\,\sigma} + c_{i+1\,\sigma}^\dagger c_{i\sigma}\right) + U \sum_i n_{i\uparrow} n_{i\downarrow}$. Again, we determine for several lattice sizes, filling fractions and couplings $U$ the four quantities $E_0$, $E_0(0)$, $E_0^{\rm (imp)}$ and $E_0^{\rm (ext)}$. The results are shown in Fig. 5, where we plot the relative percent accuracy of $E_0^{\rm (ext)}$. The quality of the results is not as good as in the spinless model, but the errors are again small, of a few percent. In fact, a scaling analysis shows convergence to the exact values in the large volume limit. To summarize, the family of Hamiltonians with no sign problem proposed in [10] makes possible the derivation of a size-consistent bound for the ground state energy that improves the one at $\gamma = 0$. Moreover, much information can be reconstructed for the original Hamiltonian. The accuracy of our analysis on small systems is certainly beyond a practical implementation, but it suggests that also for more complicated systems, not allowing a direct analysis, the extrapolated values can provide useful numerical hints. Preliminary results on the two dimensional Hubbard model are encouraging and will be presented elsewhere [14].
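As a sketch of how the improved bound and the γ = −1 extrapolation can be read off from a handful of simulated points, the snippet below fits invented $E_0(\gamma)$ values (placeholders for illustration, not the data of Figs. 4 and 5) with a low-order polynomial and evaluates $E_0(0) - E_0'(0)$ together with the extrapolation to γ = −1:

```python
import numpy as np

# Hypothetical ground state energies E0(gamma) measured at gamma > 0
# (placeholder values; in practice these come from sign-problem-free simulations).
gamma = np.array([0.1, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0])
E0    = np.array([-12.55, -12.47, -12.40, -12.32, -12.18, -12.03, -11.88])

# Low-order polynomial fit in gamma
p = np.poly1d(np.polyfit(gamma, E0, deg=2))
dp = p.deriv()

E0_at_0    = p(0.0)
slope_at_0 = dp(0.0)

improved_bound = E0_at_0 - slope_at_0   # E0 <= E0(0) - E0'(0)  (tangent at gamma = 0)
extrapolated   = p(-1.0)                # practical estimate of E0 at gamma = -1

print(f"E0(0)                           ~ {E0_at_0:.3f}")
print(f"improved bound E0(0) - E0'(0)   ~ {improved_bound:.3f}")
print(f"extrapolation to gamma = -1     ~ {extrapolated:.3f}")
```

Higher polynomial degrees (up to about 8, as in the text) and Aitken acceleration of the resulting sequence can then be used to stabilize the γ = −1 estimate.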
Associations of diet quality and food consumption with serum biomarkers for lipid and amino acid metabolism in Finnish children: the PANIC study Purpose To investigate the associations of overall diet quality and dietary factors with serum biomarkers for lipid and amino acid metabolism in a general population of children. Methods We studied 194 girls and 209 boys aged 6–8 years participating in the Physical Activity and Nutrition in Children study. Food consumption was assessed by 4-day food records and diet quality was quantified by the Finnish Children Healthy Eating Index (FCHEI). Fasting serum fatty acids, amino acids, apolipoproteins, as well as lipoprotein particle sizes were analyzed with high-throughput nuclear magnetic resonance spectroscopy. Data were analyzed using linear regression adjusted for age, sex, and body fat percentage. Results FCHEI was directly associated with the ratio of polyunsaturated (PUFA) to saturated fatty acids (SFA) (PUFA/SFA), the ratio of PUFA to monounsaturated fatty acids (MUFA) (PUFA/MUFA), the ratio of PUFA to total fatty acids (FA) (PUFA%), the ratio of omega-3-fatty acids to total FA (omega-3 FA%), and inversely associated with the ratio of MUFA to total FA (MUFA%), alanine, glycine, histidine and very-low density lipoprotein (VLDL) particle size. Consumption of vegetable oils and vegetable-oil-based margarine (≥ 60% fat) was directly associated with PUFA/SFA, PUFA/MUFA, PUFA%, the ratio of omega-6 FA to total FA (omega-6 FA%), and inversely associated with SFA, MUFA, SFA to total FA (SFA%), MUFA%, alanine and VLDL particle size. Consumption of high-fiber grain products directly associated with PUFA/SFA, PUFA/MUFA, omega-3 FA%, omega-6 FA%, PUFA% and inversely associated with SFA and SFA%. Fish consumption directly related to omega-3 FA and omega-3 FA%. Consumption of sugary products was directly associated with histidine and VLDL particle size. Vegetable, fruit, and berry consumption had direct associations with VLDL particle size and the ratio of apolipoprotein B to apolipoprotein A1. Consumption of low fat (< 1%) milk was directly associated with phenylalanine. A higher consumption of high-fat (≥ 1%) milk was associated with lower serum MUFA/SFA and higher SFA%. Sausage consumption was directly related to SFA% and histidine. Red meat consumption was inversely associated with glycine. Conclusions Better diet quality, higher in intake of dietary sources of unsaturated fat and fiber, and lower in sugary product intake were associated with more favorable levels of serum biomarkers for lipid and amino acid metabolism independent of adiposity. Trial Registration ClinicalTrials.gov: NCT01803776, registered March 3, 2013. Introduction Poor diet quality is associated with metabolic disturbances, including increased body fat content, dyslipidemia, higher blood pressure and metabolic syndrome in adults [1] and children [2,3].Moreover, a diet abundant with unhealthy choices, such as a high consumption of sugar-sweetened beverages and a low consumption of fruits and vegetables, has been linked to an increased risk of cardiometabolic diseases in adults [4].Metabolomics, the study of small molecules in biofluids, provides an opportunity to explore the possible mediating factors in the complex pathophysiological pathways between diet and health [5]. 
In cross-sectional and cohort studies among adults, increased serum, or plasma concentrations of saturated fatty acids (SFA), aromatic amino acids (AAA), branched-chain amino acids (BCAA) and apolipoprotein (apo) B, a small low-density lipoprotein (LDL) particle size, and a large very-low-density (VLDL) particle size have been linked to metabolic disturbances and cardiometabolic diseases [6][7][8][9][10].Furthermore, increased serum or plasma concentrations of SFA, AAA, and BCAA as well as a small LDL particle size have been associated with insulin resistance, an atherogenic lipid profile, and cardiometabolic risk score in cross-sectional studies among children [11][12][13]. Better diet quality has been linked to a more cardioprotective metabolic profile characterized as higher concentrations of serum or plasma polyunsaturated fatty acids (PUFA), lower serum concentrations of BCAA, AAA [10] and apoB [14], a smaller VLDL particle size, and a larger LDL particle size in adults [15].However, the evidence in children on the associations between diet and metabolites, which might be the mediating factors between diet and health, is scarce.Since diet quality is relatively low among children globally [16,17], and dietary habits track from childhood to adulthood [18], research focusing on children's nutrition and health is extremely important in the prevention of multiple diseases and in improving overall public health.Therefore, we wanted to perform a study examining multiple dietary factors and metabolites in a large population sample of children. We examined the associations of overall diet quality, assessed by the Finnish Children Healthy Eating Index (FCHEI), and single dietary factors with serum concentrations of fatty acids, amino acids, apoB and apoA1, as well as the sizes of VLDL, LDL, and high-density lipoprotein (HDL) particles, measured by a high-throughput nuclear magnetic resonance (NMR) spectroscopy metabolomics analysis, in a population sample of 6-8-year-old Finnish children. Study population These cross-sectional analyses are based on the baseline data of the Physical Activity and Nutrition in Children (PANIC) study, which is an 8-year physical activity and diet intervention study with an ongoing follow-up in a population sample of children from the city of Kuopio, Finland.The study has been described in more detail elsewhere [19].In short, 736 children were invited to participate in the baseline examinations of the study between 2007 and 2009.Of all invited, 512 (70%) attended.The PANIC study protocol was approved by the Research Ethics Committee of the Hospital District of Northern Savo in 2006 (Statement 69/2006).The parents or caregivers gave their written informed consent, and the children provided their assent to participation.The present study sample consisted of 403 children (194 girls, 209 boys) with complete data on variables used in the analyses, except linoleic acid for which data from 382 children were available. 
Assessment of diet Food consumption and energy intake were assessed by food records filled out by the parents or caregivers on four predefined consecutive days including either two weekdays and two weekend days (99.5% of participants) or three weekdays and one weekend day (0.5% of participants), as described earlier [19]. At the first study visit, a clinical nutritionist instructed the parents to record all food and drinks consumed by their child at home and outside home. At the second study visit, the clinical nutritionist reviewed and completed, if necessary, the food records together with the parents. Food records were analyzed using the Micro Nutrica® dietary analysis software, version 2.5 (The Social Insurance Institution of Finland), which uses Finnish and international data on the nutrient compositions of foods [20] and which was regularly updated by a clinical nutritionist. We computed FCHEI [21] to assess overall diet quality. FCHEI consists of five categories: (1) vegetables, fruit, and berries, (2) vegetable oils and vegetable-oil-based margarine (≥ 60% fat), (3) low-fat (< 1%) milk, (4) fish, and (5) foods with high sugar content. The scoring of the index has been described elsewhere [22]. Briefly, the consumption of these foods was divided by energy intake and categorized into deciles. Deciles were scored, a higher decile getting a higher score, apart from sugary products, which were inversely scored. The sum of scores from the five categories was calculated, with a minimum of 0 indicating lower diet quality and a maximum of 50 indicating better diet quality. Biochemical analyses Venous blood samples were collected from the children after a 12-h fast. Blood samples were centrifuged and stored at a temperature of −75 °C until analyses. Serum concentrations of total fatty acids (FA), SFA, monounsaturated fatty acids (MUFA), PUFA, omega-3-fatty acids, omega-6-fatty acids, docosahexaenoic acid (DHA), linoleic acid (LA), alanine, glutamine, glycine, histidine, isoleucine, leucine, valine, total BCAA (including isoleucine, leucine and valine), phenylalanine, tyrosine, apoB and apoA1, as well as the sizes of VLDL, LDL, and HDL particles, were measured using high-throughput NMR spectroscopy metabolomics analysis (Nightingale Health Ltd, Kuopio, Finland) as described in detail earlier [23]. Finally, the degree of serum fatty acid unsaturation (i.e., the presence and amount of double bonds in the carbon chains of the fatty acids), the ratios of MUFA to SFA (MUFA/SFA), PUFA to SFA (PUFA/SFA), PUFA to MUFA (PUFA/MUFA), SFA to total FA (expressed as a percentage, SFA%), MUFA to total FA (expressed as a percentage, MUFA%), PUFA to total FA (expressed as a percentage, PUFA%), omega-3-FA to total FA (expressed as a percentage, omega-3-FA%), omega-6-FA to total FA (expressed as a percentage, omega-6-FA%), DHA to total FA (expressed as a percentage, DHA%), LA to total FA (expressed as a percentage, LA%), and the ratio of apoB to apoA1 (apoB/apoA1) were assessed. Other assessments Body height and weight were measured, and body mass index (BMI) and body mass index standard deviation score (BMI-SDS) were calculated [19]. Body fat percentage was assessed by a dual energy X-ray absorptiometry (DXA) method with a Lunar® DXA device (Lunar Prodigy Advance; GE Medical Systems, Madison, WI, USA).
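As an illustration of the index construction described under Assessment of diet above, the following sketch builds an FCHEI-like score from hypothetical intake data (the column names, values and the 1-10 decile scoring are ours for illustration only; the published index uses its own scoring rules and ranges from 0 to 50):

```python
import pandas as pd

# Hypothetical per-child intakes (g/day) and energy intake (kcal/day)
df = pd.DataFrame({
    "vegetables_fruit_berries": [150, 300, 80, 220, 400, 50, 180, 260, 320, 120],
    "vegetable_oils_margarine": [5, 12, 2, 8, 15, 1, 6, 10, 14, 4],
    "low_fat_milk":             [200, 400, 0, 300, 500, 100, 250, 350, 450, 150],
    "fish":                     [0, 30, 10, 20, 40, 5, 15, 25, 35, 8],
    "sugary_products":          [120, 40, 200, 80, 20, 250, 100, 60, 30, 150],
    "energy_kcal":              [1600, 1800, 1500, 1700, 1900, 1550, 1650, 1750, 1850, 1600],
})

food_groups = ["vegetables_fruit_berries", "vegetable_oils_margarine",
               "low_fat_milk", "fish", "sugary_products"]

score = pd.Series(0.0, index=df.index)
for group in food_groups:
    # Energy-adjusted consumption, split into deciles scored 1..10 here
    adjusted = df[group] / df["energy_kcal"]
    decile = pd.qcut(adjusted.rank(method="first"), 10, labels=False) + 1
    if group == "sugary_products":          # sugary products are reverse-scored
        decile = 11 - decile
    score += decile

df["FCHEI_like_score"] = score              # higher score = better diet quality
print(df[["FCHEI_like_score"]])
```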
Statistical analyses The sample size of this study is based on the power calculations for our study on the intervention effects on fasting insulin and homeostatic model assessment of insulin resistance in children [24]. Briefly, we determined the number of children required to detect at least a 0.30 standard deviation difference between the intervention group (60% of children) and the control group (40% of children) with a power of 80% and a two-sided p value for the difference between the groups of 0.05, allowing for a 20% loss to follow-up or missing data. These power calculations provided a required sample size of at least 275 children in the intervention group and 183 children in the control group. Data were analyzed using the SPSS Statistics software, version 27.0 (IBM Corporation, IBM SPSS Statistics for Windows, Armonk, NY, USA). Differences and associations with p values of < 0.05 were considered statistically significant. All continuous variables were checked for normality by observing histograms and using the Kolmogorov-Smirnov test. If not normally distributed, variables were transformed with logarithmic or square root transformation. Not all variables were normally distributed after these transformations, and non-parametric tests were used. To compare basic characteristics, dietary factors and metabolites between boys and girls, we used the Student's t test for normally distributed variables or the Mann-Whitney U test for variables with skewed distributions. Linear regression analyses adjusted for age and sex were used to examine the associations of FCHEI and food consumption with serum biomarkers of lipid and amino acid metabolism. For statistically significant associations, we further adjusted the data for body fat percentage, which may confound or mediate the observed associations [25,26]. We used the non-transformed dietary variables, as the residuals of the linear regression analyses were normally distributed. Of the outcome variables, only serum MUFA was used as logarithmically transformed. The Benjamini-Hochberg false discovery rate (FDR) procedure with an FDR value of 0.2 was used to adjust results for multiple comparisons [27]. Basic characteristics and serum metabolites in children Boys were taller, had a lower body fat percentage and higher energy intake, consumed more red meat and sausages, and had higher serum PUFA/SFA, PUFA/MUFA, PUFA%, omega-6 FA% and LA% than girls (Table 1). Girls had higher serum total FA, SFA, MUFA, MUFA/SFA, MUFA%, glutamine, apoB and apoB/apoA1 and lower serum apoA1 than boys (Table 2).
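The multiple-comparison step mentioned in the statistical analyses above can be sketched as a generic Benjamini-Hochberg procedure at an FDR level of 0.2 (a minimal Python illustration with made-up p values; the study itself used SPSS):

```python
import numpy as np

def benjamini_hochberg(p_values, fdr=0.2):
    """Return a boolean mask of p values declared significant at the given FDR."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # Largest k with p_(k) <= (k/m) * FDR; all hypotheses up to rank k are rejected.
    thresholds = (np.arange(1, m + 1) / m) * fdr
    below = ranked <= thresholds
    significant = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.where(below)[0])
        significant[order[:k + 1]] = True
    return significant

# Illustrative p values from a set of regression models
p_vals = [0.001, 0.008, 0.020, 0.041, 0.090, 0.160, 0.350, 0.620]
print(benjamini_hochberg(p_vals, fdr=0.2))
```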
Associations of FCHEI and single dietary factors with serum fatty acids A higher FCHEI score was associated with higher serum PUFA/SFA, PUFA/MUFA, PUFA% and omega-3 FA% as well as lower MUFA% after adjustment for age and sex (Table 4).A higher consumption of low-fat (< 1%) milk was associated with a higher serum DHA (Table 3).A higher consumption of vegetable oils and vegetable-oil-based margarine (≥ 60% fat) was associated with higher PUFA/SFA, PUFA/MUFA, PUFA%, omega-6 FA%, and lower serum SFA, MUFA, and lower ratios of SFA and MUFA to total FA.In addition, a higher consumption of high-fiber grain products was associated with higher serum PUFA/SFA, PUFA/MUFA, PUFA%, omega-3 FA%, omega-6 FA%, and lower serum SFA, MUFA, and SFA%.A higher consumption of low-fiber grain products was associated with lower serum omega-3 fatty acids and a lower degree of serum fatty acid unsaturation.A higher consumption of high-fat (≥ 1%) milk was associated with lower serum MUFA/SFA and higher SFA%.A higher fish consumption was associated with higher serum omega-3 fatty acids and the ratio of omega-3 FA%.A higher consumption of sugary products was associated with higher SFA%.Last, higher consumption of sausages was associated with lower PUFA/ SFA and higher SFA%.The associations of FCHEI with higher serum, PUFA/SFA, PUFA/MUFA, PUFA%, omega-3 FA%, and lower MUFA%, the associations of vegetable oils and vegetable-oil-based margarine (≥ 60% fat) and highfiber grain products with higher serum PUFA/SFA, PUFA/ MUFA, PUFA%, omega-6 FA%, and with lower serum SFA and SFA% remained statistically significant after further adjustment for body fat percentage (Tables 3 and 4).In addition, the association of vegetable oils and vegetableoil-based margarine (≥ 60% fat) with lower MUFA%, the association of high-fiber grain products with higher serum omega-3 FA%, the association of high-fat (≥ 1%) milk with MUFA/SFA and SFA%, the association of fish consumption with serum omega-3 fatty acids and omega-3 FA%, and the association of sausage consumption with SFA% remained statistically significant after further adjustment for body fat percentage.The associations of FCHEI with PUFA/SFA, PUFA/MUFA, MUFA%, PUFA%, omega-3 FA%, the associations of vegetable oils and vegetable-oil-based margarine (≥ 60% fat) with SFA, MUFA, PUFA/SFA, PUFA/MUFA, SFA%, MUFA%, PUFA%, omega-6 FA%, and the associations of high-fiber grain products with SFA, MUFA, PUFA/ SFA, PUFA/MUFA, SFA%, PUFA%, omega-3 FA% and with omega 6-FA% remained statistically significant after FDR correction. 
Associations of FCHEI and single dietary factors with serum amino acids FCHEI was inversely associated with serum alanine, glycine, and histidine after adjustment for age and sex (Table 5).The consumption of vegetable oils and vegetable-oil-based margarine (≥ 60% fat) was associated with lower serum alanine.A higher consumption of low-fat (< 1%) milk was associated with higher serum phenylalanine and lower glutamine.A higher consumption of sugary products and sausages was associated with higher serum histidine.A higher consumption of red meat was associated with lower serum glycine.All these associations, except that between low-fat (< 1%) milk and serum glutamine, remained statistically significant after additional adjustment for body fat percentage.The associations of FCHEI with alanine, glycine, and histidine, the association of vegetable oils and vegetable-oilbased margarine (≥ 60% fat) with lower serum alanine, the association of low-fat (< 1%) milk with phenylalanine, and the association of sugary products with histidine remained statistically significant even after FDR correction. Associations of FCHEI and single dietary factors with lipoprotein particle size A higher FCHEI was associated with a smaller VLDL particle size after adjustment for age and sex (Table 6).A higher consumption of sugary products and a higher consumption of vegetables, fruit, and berries were associated with a larger VLDL particle size whereas a higher consumption of vegetable oils and vegetable-oil-based margarine (≥ 60% fat) was associated with a smaller VLDL particle size.Additional adjustment for body fat percentage did not affect these associations.All the associations except for the association of a higher consumption of vegetables, fruit, and berries with a larger VLDL particle size remained after FDR correction. Associations of FCHEI and single dietary factors with apolipoproteins FCHEI was not associated with serum apolipoproteins after adjustment for age and sex (Table 7).However, a higher consumption of vegetables, fruit, and berries was associated with higher serum apoB/apoA1.This association remained statistically significant after further adjustment for body fat percentage but not after FDR correction. Discussion We found several associations of overall diet quality and single dietary factors with serum biomarkers of lipid and amino acid metabolism, particularly fatty acids, in a population sample of primary school-aged children.For instance, better diet quality was associated with higher ratios of PUFA and omega-3 FA, lower serum alanine, glycine, and histidine and a smaller VLDL particle size.Higher consumption of vegetable oils and vegetable-oil-based margarine (≥ 60% fat) and high-fiber grain products were associated with higher serum PUFAs, omega-6 FA, and lower serum SFA and MUFA. 
In addition, consumption of high-fiber grain products was associated with higher serum omega-3 FA%.A higher fish consumption was associated with higher serum omega-3 fatty acids, whereas a higher consumption of sugary products, high-fat dairy and meat products was associated mainly with fatty acids, serum amino acids, such as higher SFA%, lower PUFA/SFA, higher serum phenylalanine and histidine, and lower glycine.Also, a higher consumption of vegetable oils and vegetable-oil-based margarine (≥ 60% fat) was associated with a smaller VLDL particle size and a higher consumption of sugary products was associated with a larger VLDL particle size.Six of the observed associations, mainly the association of dietary factors with fatty acids such as a higher consumption of low-fiber grain products with lower serum omega-3 fatty acids and a lower degree of serum fatty acid unsaturation, were partly explained by body fat percentage, suggesting that adiposity mediates these associations.Several associations remained after the correction for multiple testing.However, concerning serum fatty acids, mainly the associations of FCHEI, vegetable oils, and vegetable-oil-based margarine (≥ 60% fat) and high-fiber grain products with serum fatty acid ratios remained statistically significant after the FDR correction. Fatty acids Better diet quality was mostly associated with ratios of fatty acids, such as higher PUFA/SFA, PUFA/MUFA, MUFA%, PUFA%, omega-3 FA% but not with concentrations of any separate serum fatty acids.A previous study in Brazilian children did not find associations of diet quality and serum fatty acids [28].However, many studies have found associations particularly between better diet quality and higher circulating n-3-fatty acids in children [29,30]. Our results indicate that better diet quality may enhance the ratios of fatty acids in the serum to a more favorable direction, that meaning higher PUFA concentrations in relation to MUFA, SFA or total FA.Our findings concerning the associations of a higher consumption of vegetable oils and vegetable-oil-based margarine (≥ 60% fat) with lower serum levels of SFA, and a higher fish consumption with higher serum omega-3 fatty acids are consistent with the findings of previous studies in children [31,32].Furthermore, the results of this study suggest that a higher consumption of vegetable oils and vegetable-oil-based margarine (≥ 60% fat) may influence the ratios of serum fatty acids to a more favorable direction, as in the case of overall diet quality.We also observed direct associations of the consumption of high-fiber grain products with serum PUFA/SFA, PUFA/ MUFA, PUFA%, omega-3 FA%, omega-6 FA%, and inverse associations with serum SFA, MUFA, and SFA%.We have previously reported that high-fiber grain product consumption is directly associated with the dietary intake of PUFAs but not MUFAs or SFAs in children [33].Thus, children who consume high-fiber grain products may have a higher PUFA intake and a lower MUFA and SFA intake which is expected to improve serum fatty acid profile.Higher blood PUFAs have been viewed as cardioprotective in adults [6] and children [34].Our results suggest that vegetable oils and vegetable-oil-based margarine (≥ 60% fat) and fish, foods that are rich in PUFAs might be cardioprotective by improving fatty acid metabolism since childhood.It has been reported that dietary counselling from childhood to adulthood, focusing on the quality of dietary fat, improves cardiovascular health [35].Some of the observed associations 
of dietary factors with serum fatty acids were explained by body fat percentage, suggesting that adiposity may mediate these associations. Amino acids Studies on the relationships between dietary factors and serum or plasma amino acids in children are scarce.We observed that better overall diet quality was associated with lower serum alanine, glycine, and histidine.An intervention study showed that plasma alanine was lower in participants following a healthy Nordic diet compared with participants following a typical Danish diet in adults [36], consistently with our findings.One explanation for these findings could be that better diet quality enhances insulin sensitivity [37] and thereby increases muscle protein synthesis [38] and decreases serum alanine.Importantly, higher circulating alanine has been directly associated with cardiovascular events in adults [7].The results of our study suggest that better diet quality could improve cardiometabolic health by influencing amino acid metabolism since childhood.However, alanine is a non-essential amino acid that can be endogenously synthetized [39].Hence, there might be factors affecting alanine concentrations that we were not able to account for and that would explain the observed associations. A Korean intervention study found that serum glycine decreased after following a healthy diet with less red and processed meats compared to controls following a Western diet [40].We found inverse associations of overall diet quality and red meat consumption with serum glycine, the latter being inconsistent with the results of the Korean intervention study.Glycine is a conditionally essential amino acid and can be synthetized, like alanine, endogenously from other sources [41] and thus it is possible that serum glycine does not directly reflect dietary intake. The observed association of better diet quality and lower serum histidine may be partly explained by the association between a higher consumption of sugary products and higher serum histidine since higher consumption of sugary products can lower the FCHEI score.Histidine is an essential amino acid derived from animal-based products, such as meat and dairy [42].Typical sources of sugar in our study population include dairy products, such as sweetened sour milk products and ice cream [33].Thus, it is possible that a high consumption of these products increases serum histidine in children.However, not all milk products, such as sour milk products, were associated with serum histidine.Histidine was also directly associated with sausage consumption, which is logical considering that meat is a predominant source of histidine [42].However, the consumption of red meat was not related to serum histidine. 
Phenylalanine is an aromatic amino acid whose serum concentrations have been directly associated with cardiovascular outcomes in adults [7] and with insulin resistance in children [12].We observed a direct association between the consumption of low-fat (< 1%) milk and serum phenylalanine.Consistent results have been reported in a study examining the effects of dairy-and meat-based complementary diets among infants [43].The consumption of milk with higher fat content was not associated with phenylalanine in our data.Finnish children have, however, been reported to consume notably more low-fat milk than milk with higher fat content [33].Thus, it is possible that an excessive consumption of milk leads to higher serum phenylalanine although the association is logical considering that milk and dairy products are a source of phenylalanine in diets [42].Also, a slightly higher protein content of low-fat milk may partly explain this association.Surprisingly, our results do not indicate that there is an association between diet and BCAA concentration, which is contrary to the results of cross-sectional studies among adults [10,44] and an intervention study among children [45].However, in one of the cross-sectional studies [10], the association between diet and BCAA levels was not detected when analyses were repeated in a younger population (ages between 3 and 18 years).In addition, our findings are in line with the results of a dietary counselling intervention aiming to maintain a healthier diet, which had no effect on BCAA levels in children and adolescents [46].It is possible that adults consume bigger portions and thus larger amounts of the dietary sources of BCAA's compared to youth or children, which is then reflected in the BCAA levels in the bloodstream.This is supported by the fact that in the intervention study among children where diet was reflected in the BCAA levels [45], the children consumed notably high portions of dairy or meat.In the present study, the consumption of dietary sources of BCAA's might have been subtle, thus not having an impact on the BCAA levels.It has also been observed that amino acid concentrations in the plasma and skeletal muscle change with age [47].Therefore, differences in skeletal muscle and amino acid metabolism between different age groups might also explain the contrary results. 
Lipoprotein particle size Better overall diet quality was associated with a smaller VLDL particle size, which is in line with findings among adults [15].We have previously found that a higher FCHEI score is associated with lower triglyceride levels in Finnish children [48].The concentration of triglycerides, especially in the liver, might influence the formation of different VLDL subfractions; for example, the larger and less dense VLDL particles contain more triglycerides [49].It is thus possible that triglyceride levels might explain the association between better overall quality and a smaller VLDL particle size in the present study.We also found an association between a higher consumption of sugary products and a larger VLDL particle size that could be explained by a higher fructose intake related to a high sugar product consumption.We have previously observed that sugary products are the main source of sucrose in Finnish children [33].Since fructose is a structural unit of sucrose, it is possible that the intake of fructose is higher in children consuming higher amounts of sugary products.Fructose intake has been directly associated with liver fat content in adults [50].VLDL particles are mostly produced in and secreted from the liver [51] and elevated fatty acid content of the liver has been observed to induce the formation of larger VLDL particles [52].Therefore, a higher intake of fructose might lead to higher secretion of larger VLDL particles from the liver through increased liver adiposity.We also found an association between a higher consumption of vegetables, fruit, and berries and a larger VLDL size.It is possible that fructose intake from this food group increases VLDL particle size.However, there might be other lifestyle factors that explain the observed association that we could not account for.The inverse association between the consumption of vegetable oils and vegetableoil-based margarine (≥ 60% fat), rich in PUFAs and low in SFAs, and VLDL particle size in the present study might also be explained by the accumulation of liver fat since the intake of SFAs has been directly linked to liver fat content in adults and children [53,54].We did not observe associations of dietary factors with LDL particle size, inconsistent with previous intervention studies in children [55,56].In accordance with the results of some previous studies [56], we observed no associations of dietary factors with HDL particle size. 
Apolipoproteins Lifestyle interventions have been shown to decrease circulating apoB and increase circulating apoA1 in children with obesity and hypercholesterolemia [57,58].We found no associations of overall diet quality or single dietary factors with serum apolipoproteins, except the direct association of vegetables, fruit, and berries consumption with apoB/apoA1, in a general population of children.Apolipoproteins are structural components of lipoprotein particles [51], mainly LDL cholesterol particles [59].We did not find an association between overall diet quality and serum LDL cholesterol in our previous study in children [48].Together with the results of previous studies [57,58], our observations suggest that overall diet quality does not influence serum apolipoproteins in metabolically healthy children, but it may be seen in children with hypercholesterolemia and with elevated circulating LDL cholesterol levels.The direct association between the consumption of vegetables, fruit, and berries and apoB/apoA1 in our study might be related to fructose intake from this food group, since it has been shown that a short-term fructose intake restriction reduces serum apoB in children [60].However, other lifestyle factors that we could not account for mediate this association. Study strengths and limitations We examined a general population sample of prepubertal children who have not yet been exposed to possible confounding factors for the associations of dietary factors with serum biomarkers for lipid and amino acid metabolism, such as alcohol consumption, smoking, diseases, and medications.Moreover, food consumption was assessed comprehensively by 4-day dietary records, reviewed by clinical nutritionists.Overall diet quality was assessed using FCHEI, which has been validated in Finnish children [21] and represents accurately the dietary challenges in our study population.Also, we used high-throughput NMR spectroscopy analysis to assess the serum biomarkers of lipid and amino acid metabolism.This method is well-suited to identify blood metabolites in large study populations [5].This study also has limitations.First, the sample size of this study was not originally based on the current research question but is based on the power calculations for our previous study on the intervention effects on fasting insulin and homeostatic model assessment of insulin resistance in children [24].However, we estimated that 395 participants were required to detect an association with a small effect size (f 2 ≥ 0.02) in a multivariate linear regression analysis at the power of 80. Second, the assessment of diet using any measure administrated by the parents or caregivers of children, including dietary records, is prone to inaccurate reporting.Also, due to the complexity of eating behavior and human metabolism, we could not consider all possible confounding factors in the statistical analyses.Thus, the results must be interpreted carefully, simultaneously reviewing other studies on the topic.We could not conclude all possible mechanisms explaining the observed associations due to the lack of research on this topic in children.Last, we cannot draw conclusions about the causality between diet and serum metabolites due to the cross-sectional nature of this study. 
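As a rough cross-check of the sample-size statement in the limitations above (395 participants for a small effect size, f² ≥ 0.02, at 80% power), a noncentral-F power calculation can be sketched as follows; the single-predictor setup and the number of covariates are our assumptions for illustration:

```python
from scipy.stats import f as f_dist, ncf

def power_f_test(n, f2, df_num=1, n_covariates=2, alpha=0.05):
    """Approximate power of the F test for one predictor in a linear regression."""
    df_denom = n - df_num - n_covariates - 1
    nc = f2 * n                                   # noncentrality parameter ~ f^2 * n
    f_crit = f_dist.ppf(1.0 - alpha, df_num, df_denom)
    return 1.0 - ncf.cdf(f_crit, df_num, df_denom, nc)

# Smallest n (in steps of 5) reaching 80% power for f^2 = 0.02
for n in range(100, 1000, 5):
    if power_f_test(n, f2=0.02) >= 0.80:
        print("approximately", n, "participants needed")   # lands near 395
        break
```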
Conclusion We observed many associations of overall diet quality and single dietary factors with serum biomarkers of lipid and amino acid metabolism in children, and these associations were mainly independent of adiposity. Better overall diet quality, a higher consumption of dietary sources of unsaturated fatty acids and high-fiber grain products, and a lower consumption of sugary products were associated with higher serum PUFAs, lower serum MUFAs and SFAs, and smaller serum VLDL particles. The results indicate that good diet quality is important in improving lipid and amino acid metabolism from an early age. Further evidence from long-term diet intervention studies is warranted to confirm whether diet modification has beneficial effects on lipid and amino acid metabolism, and whether these have meaningful effects on cardiometabolic health since childhood.
Table 1. Characteristics of children, the Physical Activity and Nutrition in Children (PANIC) study.
Table 2. Concentrations of serum metabolites in children at baseline. p values are for the difference between girls and boys; p values of < 0.05 were considered statistically significant and are bolded.
Table 3. Associations of overall diet quality and food consumption with serum fatty acids in children.
Table 4. Associations of overall diet quality and food consumption with serum fatty acid ratios in children.
Table 5. Associations of overall diet quality and food consumption with serum amino acids in children.
Table 6. Associations of diet quality and food consumption with lipoprotein particle size (VLDL, LDL, and HDL diameters, nm) in children.
Table 7. Associations of diet quality and food consumption with serum apolipoproteins in children.
For Tables 3-7, data are standardized regression coefficients from linear regression analyses adjusted for age and sex; p values are reported in parentheses, and significant values are bolded. Superscript a: associations remained statistically significant after additional adjustment for body fat percentage. Superscript b: associations remained statistically significant after Benjamini-Hochberg correction for multiple testing. Abbreviations: BCAA, branched-chain amino acids; DHA, docosahexaenoic acid; FA, fatty acids; HDL, high-density lipoprotein; LA, linoleic acid; LDL, low-density lipoprotein; MUFA, monounsaturated fatty acids; PUFA, polyunsaturated fatty acids; SFA, saturated fatty acids; VLDL, very-low-density lipoprotein.
Colour Deconfinement and Quarkonium Dissociation We survey how quarkonia can be used to probe colour deconfinement in relativistic nuclear collisions. Introduction The crucial tool in the search for the quark-gluon plasma (QGP) is a probe to test if the strongly interacting medium produced in nuclear collisions consists of confined or deconfined quarks and gluons. In this survey, we will show how quarkonia can be used as such a tool. First of all, this requires an understanding of the dynamics of quarkonium production in the absence of a thermal medium, i.e., in hadron-hadron collisions. The theory of quarkonium production can today be tested on a variety of different states (J/ψ, χ c , ψ ′ , Υ, Υ ′ , Υ ′′ ); it is confirmed by data over a vast range of collision energies, up to 1.8 TeV. Given quarkonium production dynamics, we then have to address three distinct problems to obtain a viable probe. What are the effects of confined and of deconfined matter on the production of the different quarkonium states? How can pre-equilibrium phenomena affect the production? We begin with a short summary of the answers which we shall obtain to these questions. Preview To be specific, we consider charmonium states; but the arguments are in general applicable to bottonium states as well. The theory of charmonium-hadron interactions predicts that the tightly bound charmonium ground state J/ψ cannot be dissociated by hadronic matter at temperatures below 0.5 GeV. The resulting transparency of confined media to J/ψ's can be tested experimentally in nuclear matter, by studying J/ψ-production with a P b-beam incident on a hydrogen or deuterium target. In contrast, a quark-gluon plasma contains deconfined gluons, and these are hard enough to break up a physical J/ψ. Hadron-nucleus collisions also provide the information needed to eliminate possible initial state effects. By comparing different hard processes with and without possible final state interactions, the role of the nuclear medium on the initial state can be clarified and the final state effects can be singled out. Combining hadron-nucleus and nucleus-nucleus studies to determine the effects of confined and deconfined systems on quarkonium production, we can thus test colour deconfinement. In a nutshell: once initial state effects are removed, confined systems have no effect on the J/ψ, but suppress the ψ ′ ; once deconfinement sets in, both J/ψ and ψ ′ are suppressed, but the ψ ′ more strongly. Probing Colour Deconfinement After this short preview of things to come, we return to the general problem of establishing quark-gluon plasma formation. In high energy nuclear collisions, two beams of partons collide; the partons are initially confined to the colliding nucleons. This confinement can be checked, e.g., by studying primary high mass Drell-Yan dilepton production and observing, except for possible nuclear shadowing effects, the same parton distribution functions as in deep inelastic lepton-nucleon collisions. After the primary collision, we expect abundant multiple interactions, leading to a rapid increase of entropy, quick thermalisation and hence the production of strongly interacting matter. The fundamental question is whether confinement survives this thermalisation. If it does, we have hadronic matter -if not, a quark-gluon plasma. We expect confinement to be lost if the parton density sufficiently surpasses that present in a hadron-hadron interaction, so that partons can no longer be assigned to specific hadrons. 
How can we check if this has happened? The QGP is a dense system of deconfined quarks and gluons. Its density is in fact the reason for deconfinement: in a sufficiently dense medium, the longrange confining forces become screened, so that only short-range (≪ Λ −1 QCD ) interactions between quarks and gluons remain. To study such a medium and determine its nature, we therefore need probes which are hard enough to resolve the short sub-hadronic scales and which can distinguish between confined and deconfined quarks and gluons. In addition, the probe must survive the subsequent evolution of the medium; therefore it certainly cannot be in equilibrium with the later stages of matter. Two hard, strongly interacting signals produced before equilibration and distinct from the medium have been proposed as probes for confinement/deconfinement: heavy quark-antiquark resonances (charmonium, bottonium) [1,2], and hard quarks or gluons (jets) [3,4]. In the following, we will comment only briefly on jets, in order to point out the relation between the two probes, and then concentrate on quarkonium production. Quarkonium and jet production are rather well understood in hadron-hadron collisions, where they are accounted for in terms of perturbative QCD and hadronic parton distribution functions [5,6]. In both cases, the initially formed state (QQ, q or g) is in general coloured, and it has an intrinsic mass or momentum scale much larger than Λ QCD . For jets, this is also the state to be used as probe, since the behaviour of a fast colour charge passing through confined matter differs from that in a deconfined medium [7]. In confined matter, the colour charge loses energy as it passes from one hadron to the next through the "interhadronic" vacuum, and the energy loss is determined by the string tension σ acting on the colour charge as it leaves the field of a hadron [8]. In a hot deconfined medium, the crucial quantities are the colour screening radius and the mean free path; these determine with how many other charges the passing colour charge interacts and how much energy it loses per unit length [9,10]. The fate of a colour charge in the transition region between these two limits is still quite uncertain. For fast quarkonia, the situation is similar; they will pass through the medium while still in a coloured state [8], and hence they can be used as probe in the same way as jets. In addition, however, we can consider slow quarkonia, which have be-come full physical resonances within a hadronic volume around the QQ formation point and thus traverse the medium as colour singlets. Since the intrinsic spatial scales of J/ψ and Υ, determined by the heavy quark masses and the binding energies, nevertheless remain much smaller than the hadronic size Λ −1 QCD , they interact only with the partons within a big, light hadron and not with the hadron as a whole. They are thus able to probe the partonic state of any medium. In particular, they are essentially unaffected by the soft gluons which make up confined matter, while the hard gluons present in a QGP will dissociate them [2]. For both quarkonia and jets, thermal production in the expected temperature range (T < ∼ 0.5 GeV) is excluded by the mass or momentum scales involved; we can therefore be quite sure that such signals were produced prior to QGP formation. They will also not reach an equilibrium with later stages of the medium. 
Hard jets and fast quarkonia require too much of an energy loss for this, while slow quarkonia, as noted, are either dissociated or not affected by the medium. For both proposed probes, initial state nuclear effects can occur before QGP formation. Primary quarks and gluons may undergo multiple scattering or experience shadowing in the nucleus before they interact to form a QQ pair or a hard transverse parton. These effects have to be understood and taken into account before any QGP analysis [11,12]. It is therefore necessary to study them in processes which are not effected by the subsequent medium, such as the production of hard direct photons [13] or of high mass Drell-Yan dileptons [14]. In these cases, we have only annihilation or bremsstrahlung of the incident partons; the resulting electromagnetic signal leaves the system unaffected by any subsequent medium and its evolution. If such reactions show nuclear effects, they are presumably due to initial state phenomena. After these more general remarks, we will now consider in detail the use of quarkonium production as a probe for deconfinement in dense strongly interacting matter. We concentrate on quarkonium for several reasons. J/ψ suppression was predicted [1] to be the consequence of QGP formation, and such a suppression was subsequently indeed observed in high energy nuclear collisions at the CERN-SPS [15]. This triggered an intensive study of possible alternative origins of such a suppression. Hence the analysis necessary to establish an unambiguous probe for deconfinement has been carried much further here than for jets and can provide a good illustration of what needs to be done before drawing any conclusions. In particular, as noted at the beginning, we must understand theoretically and experimentally the dynamics of the process to be used as probe and -how it is influenced by initial state nuclear effects, -how it reacts to confined matter, -how it reacts to deconfined matter, and -how it reacts to non-equilibrium systems. In section 2, we will therefore outline the theory of quarkonium production and compare it to present data [6]. Section 3 will deal with quarkonium production in hadron-nucleus collisions. In particular, we will see how the production of fast quarkonia provides us with information on the energy loss of a colour charge in confined matter and on gluon shadowing. Next, in section 4, we will discuss the conceptual basis of quarkonia as deconfinement probe and present the results of the heavy quark theory for quarkonium-hadron interactions. We then show how to test experimentally the resulting transparency of confined matter to slow J/ψ's. Section 5 will bring a comprehensive comparison of the effect of confined and deconfined media on quarkonium production; here we will recover the melting of the J/ψ in a QGP [1,16] on a microscopic level. In Section 6, we remove initial state nuclear modifications and study the effect of pre-equilibrium deconfinement. Finally, in section 7, we give a a brief summary and an assessment of what we have learned from present data. Quarkonium Production in Hadron-Hadron Collisions In this section, we shall sketch the basic dynamics of quarkonium production in hadron-hadron collisions; for a summary, see e.g. [6]. We shall speak about charmonium states; but everything said also holds for bottonium. Colour Evaporation The first stage of charmonium formation is the production of a cc pair; because of the large quark mass, this process can be described by perturbative QCD (Fig. 2.1). 
A parton from the projectile interacts with one from the target; the parton distributions within the hadrons are determined e.g. by deep inelastic lepton-hadron scattering. Initially, the cc pair will in general be in a colour octet state. It subsequently neutralises its colour and binds to a physical resonance, such as J/ψ, χ c or ψ ′ . Colour neutralisation occurs by interaction with the surrounding colour field; this and the subsequent resonance binding are presumably of non-perturbative nature ("colour evaporation" [17]). In the evaporation process, the cc pair can either combine with light quarks to form open charm mesons (D andD) or bind with each other to form a charmonium state. The basic quantity in this description is the total sub-threshold charm cross section, obtained by integrating the perturbative cc production over the mass interval from 2m c to 2m D . At high energy, the dominant part ofσ cc comes from gluon fusion ( Fig. 2.1a), so that we havẽ with g p (x) and g t (x) denoting the gluon densities in projectile and target, respectively, and σ the gg → cc cross section. In pion-nucleon collisions, there are also significant quark-antiquark contributions ( Fig. 2.1b), which become dominant at low energies. The essential prediction of the colour evaporation model is that the production cross section of any charmonium state i is given by where f i is a constant which for the time being has to be determined empirically. In other words, the energy dependence of any charmonium production cross section is predicted to be that of the perturbatively calculated sub-threshold charm cross section. As a consequence, the production ratios of different charmonium states are predicted to be energy-independent. -We note that in the generalised colour evaporation model [6], only a part of the total subthreshold cross sectionσ cc goes into charmonium formation. In accord with perturbative open charm calcualtions, the remainder (more than 50 %) leads to DD production, with the missing energy obtained by interaction with the colour field. Quarkonium Production: Theory and Data The predictions of the colour evaporation model have recently been compared in a comprehensive survey [6] to the available data, using parton distribution functions [18,19] which take into account the new HERA results. In Figs. 2.2 and 2.3, we see that the energy-dependence is well described for both J/ψ and Υ production; for J/ψ production, the normalisation coeffficient is f J/ψ =0.025. The Υ results are obtained for the sum of Υ, Υ' and Υ" decaying into dimuons, with Bf Υ = 1.6×10 −3 for the normalisation; here the branching ratios cannot be directly removed. In the fixed target/ISR energy range, the results from the two different sets of parton distributions coincide; for the J/ψ at LHC energies, there is some spread due to scale uncertainties in the parton distributions, which hopefully can be removed by more precise DIS data. For the Υ production, we have already now data up to 1.8 TeV, and in Fig. 2.3 they are seen to agree very well with the prediction obtained using the "low energy" value Bf Υ = 1.6 × 10 −3 . Previous phenomenological fits [20] had led to much smaller rates; they are included (CR) in Figs. 2.2 and 2.3. In Figs. 2.4 and 2.5, the predicted energy-independence of production ratios is found to hold as well, again up to Tevatron energy. 
Here it should be noted that the CDF data for the ratio ψ ′ /(J/ψ) are taken at large transverse momenta (5 ≤ p T ≤ 15 GeV), while the lower energy data are integrated over p T , with the low p T region dominant. Hence colour evaporation appears to proceed in the same way at both small and large p T . The colour evaporation model does not determine the relative production rates of the different states. In order to obtain them, the colour evaporation process has to be specified in more detail. As an example, we consider the ratios of the different l = 0 states shown in Figs. 2.4 and 2.5. Assume that the initial colour octet state first neutralises its colour by interaction with the surrounding colour field, producing a colour singlet cc state. The relative weights for J/ψ and ψ ′ production can then be expressed [21] in terms of the corresponding masses and the squared charmonium wave functions at the origin, Here ψ denotes the directly produced 1S cc state, in contrast to the experimentally observed J/ψ, 40% of which originates from radiative χ c decays [22,23]. The wave functions at the origin can in turn be related to the dilepton decay widths Γ ee ∼ ( Inserting the measured values for masses and decay widths, we find To compare this to the measured value of (σ(ψ ′ )/σ(ψ)), we have to remove the χ c contributions from the experimental ratio, With the experimental values σ(ψ ′ )/σ(J/ψ) ≃ 0.14 (see Figs. 2.2 and 2.3) and (σ χ /σ j ) ≃ 0.4 [22,23], this yields σ(ψ ′ )/σ(ψ) ≃ 0.23, in good agreement with the theoretical result (2.6). We thus find that the projection of the colour singlet cc state onto J/ψ and ψ ′ correctly describes their production ratios at all energies and transverse momenta. The predictions for direct bottonium production ratios corresponding to Eq. Since the contributions from indirect production through radiative χ b decay are not yet known and there is also feeddown from higher S-states, a quantitative comparison is not possible here. Nevertheless, the predicted values differ by less than 50 % from the data and hence appear reasonable. Quarkonium Evolution and Hadron-Nucleus Collisions Quarkonium production in a (finite) medium depends crucially on the evolution stage in which the QQ pair is during its passage. Here we determine the different stages attainable in hadron-nucleus collisions and discuss the effects which the nuclear environment has on them. Parton Fusion and Colour Neutralisation in Nuclear Matter The first stage of charmonium production by hadronic interactions is, as we saw, parton fusion resulting in a generally coloured cc pair. Subsequently, the cc becomes colour neutral by emission or absorption of gluons in the colour field of the interaction, and eventually this small colour singlet state "expands" to form a physical charmonium resonance. If all or part of this evolution takes place inside a nucleus, a number of effects can modify the production: -the effective distribution of the partons which fuse to form the cc can be modified by the nuclear environment (EMC effect); -the colour octet cc state will interact with the nuclear medium strongly and without knowledge of its final charmonium state (nuclear shadowing, cc energy loss); -the physical charmonium states will interact with the nuclear medium according to their different cross sections on hadrons (absorption). Since the evolution is temporal, with finite time scales, fast and slow charmonia (in the nuclear rest frame) will have quite different fates. 
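Returning for a moment to the ψ′/(J/ψ) ratio discussed above: the χ_c subtraction is a one-line exercise. The sketch below uses only the two numbers quoted in the text and recovers the ≃ 0.23 value for the directly produced states.

```python
ratio_measured = 0.14   # sigma(psi') / sigma(J/psi), value quoted above
chi_fraction   = 0.40   # fraction of observed J/psi from radiative chi_c decays

# Only (1 - chi_fraction) of the observed J/psi yield is directly produced 1S
# charmonium, so the direct-production ratio is larger than the measured one:
ratio_direct = ratio_measured / (1.0 - chi_fraction)
print(f"sigma(psi') / sigma(psi) ~ {ratio_direct:.2f}")   # ~0.23
```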
In this section, we shall determine the kinematic regimes for the different evolution stages [8,25]; in the subsequent sections, we will then discuss the dominant effects in each stage. To estimate how long the colour octet cc state will live, we first note that the J/ψ wave function keeps the charmed quarks close to their mass shells; we thus have p 2 ≃ m 2 c (see Fig. 3.1 for notation). The intermediate quark with four-momentum k = p + q is off-shell by an amount where q is the momentum of the third (colour neutralising) gluon; in the spirit of the parton model, we keep all gluons on-shell. In the low p T region, in which we are here primarily interested, this third gluon can be arbitrarily soft; colour neutralisation could even involve several gluons. In any case, the colour neutralisation process cannot really be calculated in a purely perturbative framework. We shall here nevertheless keep the structure of Fig. 3.1, taking it to be a phenomenological extension into the non-perturbative soft-gluon regime. A justification of such a procedure might come from recently developed resummation methods [26]. The proper life-time of the virtual coloured state is now given by the uncertainty principle as In the rest frame of the J/ψ, the life-time of the colour octet, is determined by the energy q 0 of the third gluon. It is clear from Eqs. (3.1) and (3.2) that the colour octet state can propagate over long distances only if the third gluon is soft enough. In a confined medium, gluons cannot propagate over distances larger than about 1 fm; hence the low energy cut-off is q 0 ≃ Λ QCD ≃ 0.2 GeV. This leads to τ 8 ≃ 0.25 fm (3.4) for the colour neutralisation time. In the rest frame of the nuclear target A, the cc travels in this time a distance where P A is the (lab) momentum of the cc and M its mass. From Eq. (3.5) it is clear that in hadron-nucleus collisions sufficiently fast cc pairs will still be coloured when they leave the nuclear medium. The average path length for a cc produced in a h − A collision is (3/4)R A ≃ 0.86A 1/3 fm; for heavy nuclei, such as P b (A = 208), this becomes about 5 fm. Hence charmonium states of lab momenta P A > ∼ 60 GeV have passed the nucleus as colour octets. To relate this to the kinematic variables generally used in h − A experiments, we transform the lab momentum P A to the momentum P measured in the center of mass of a hadron-nucleon collision. The Feynman variable x F is then defined as x F ≡ P/P max , with P max (s) denoting the maximum cms momentum possible for a charmonium state produced in a hadron-nucleon collision of cms energy √ s. From this, we obtain [8,25] that for √ s > ∼ 20 GeV and x F > ∼ 0, the measured charmonium states have traversed most of the nucleus as colour octets. This kinematic range covers essentially all high statistics charmonium production data taken in h−A collisions [27,28]. These experiments therefore study the passage of a colour octet cc pair through nuclear matter; as such, they do not provide any information on the interaction of fully formed physical J/ψ or ψ ′ with nucleons. An immediate consequence of this is that any nuclear effects observed for charmonium production in h − A collisions in the quoted kinematic range must be the same for J/ψ and for ψ ′ production. This conclusion is indeed well satisfied experimentally ( Fig. 3.2). We have so far considered the colour regime, i.e., the range in which the cc pair traverses the entire nucleus as a colour octet. 
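The numbers defining the colour regime can be checked in a few lines. The sketch below assumes that the intermediate quark is off-shell by ≈ 2 m_c q_0, so that the octet lifetime follows from the uncertainty principle; this identification is an assumption of the sketch, but with q_0 ≈ Λ_QCD it reproduces the quoted τ_8 ≈ 0.25 fm and the ≈ 60 GeV lab momentum above which the pair leaves a heavy nucleus still coloured.

```python
import math

hbar_c = 0.1973   # GeV * fm
m_c    = 1.5      # GeV, charm quark mass
q0     = 0.2      # GeV, softest gluon available in confined matter (~ Lambda_QCD)
M      = 3.1      # GeV, charmonium mass

# Proper lifetime of the colour octet state, taking the intermediate quark to be
# off-shell by ~ 2 m_c q0 (assumption of this sketch):
tau_8 = hbar_c / math.sqrt(2.0 * m_c * q0)
print(f"tau_8 ~ {tau_8:.2f} fm")                       # ~0.25 fm

# Average path length in a heavy nucleus, (3/4) R_A ~ 0.86 A^(1/3) fm:
A = 208
path = 0.86 * A ** (1.0 / 3.0)
print(f"average path in Pb ~ {path:.1f} fm")           # ~5 fm

# The pair covers d_8 = (P_A / M) * tau_8 while still coloured, so it exits the
# nucleus as a colour octet once its lab momentum exceeds
P_min = path * M / tau_8
print(f"P_A >~ {P_min:.0f} GeV")                       # ~60 GeV
```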
When the distance d 0 is less than 1 -1.5 fm, the cc is colourless when it passes from the nucleon on which it was produced to the "next" nucleon inside the nucleus. At √ s ≃ 20 GeV, this occurs for x F < ∼ − 0.2. At this point, the cc is still a generic small colour singlet state and not yet a specific physical resonance. The other extreme to the colour regime is the range in which the cc has become a fully formed physical resonance (J/ψ, ψ ′ ) before it leaves the environment of the nucleon at which it was formed. Since the different states have different sizes and hence different formation times, the resonance regime will be state-specific, i.e., at a given collision energy, it will cover a larger x F -region for the J/ψ than for the ψ ′ . The distance d r (i) travelled by the cc before becoming a physical resonance state i is given by where τ r (i) denotes the resonance formation time. Potential theory [29] leads to the estimates t r (J/ψ) ≃ 0.35 fm and t r (ψ ′ ) ≃ 1 fm. For a collision energy √ s ≃ 20 GeV, this means that a J/ψ has become a fully formed resonance for x F < ∼ − 0.5. The ψ ′ is still nascent even for x F = −1; although it is a colour singlet on its passage through the nucleus if x F < ∼ − 0.2, it has not yet reached its full physical size when it leaves the nucleus even for x F = −1. In the transition regime between these two extremes, the cc experiences colour interactions on part of its passage, while it traverses the medium as a small colour singlet on the remaining part. In Table 1, we have summarised the different kinematic ranges for proton beam incident on a P b target at a cms nucleon-nucleon collision energy √ s = 20 GeV, for (1S) J/ψ and (2S) ψ ′ production. In the following sections, we survey the possible nuclear effects which can arise in the different kinematic regimes. Quantum-Mechanical Interference and Nuclear Shadowing We now address nuclear modifications of charmonium production due to what at first sight appears to be a change in the effective parton distribution function g t (x t ) in nuclei, compared to that in nucleons. The fraction of the target momentum carried by the corresponding parton in the elementary interactions shown in Fig. 2.1 is given by where M is the mass of the cc system; the ± signs hold for x F > 0 and x F < 0, respectively. For cc production at x F ≥ 0 and √ s ≥ 20, x t ≤ 0.15. In this x t region, deep inelastic scattering experiments on nuclei find a lower quark density than on nucleons (nuclear shadowing [30]), as illustrated in Fig. 3.3 (the solid line is phenomenological fit [31]. Such nuclear shadowing can, however, not be interpreted as an intrinsic change of the parton distribution in nuclei. As such, it would be applicable in factorisable form to all hard processes; but for Drell-Yan dilepton production by p − A collisions, which involves nuclear quark distribution in the same x t region, little or no modification is observed [28]. The appearent puzzle is resolved by noting that nuclear shadowing is due to quantum-mechanical interference, similar to the well-known Landau-Pomeranchuk effect [32]. This interference depends on the specific process and hence leads to different nuclear modifications for charmonium hadro-and photoproduction, for Drell-Yan dilepton production and for deep inelastic lepton-hadron scattering [12]. For a first discussion of quantum interference effects in deep inelastic scattering, see [33]. We had seen above in Eq. 
(3.5) that sufficiently fast cc pairs traverse the entire nucleus in a virtual coloured state. In this case, the interactions of the pair with different nucleons cannot be added incoherently and factorisation breaks down even for dynamically uncorrelated nucleons: quantum mechanical interference can now lead to nuclear modifications even though both the parton distribution and the elementary parton interaction amplitude are unchanged. Such interference effects set in when the coherence length d c = (1/mx t ), over which the cc is in an off-shell virtual state, becomes larger than the internucleonic distance d 0 = n −1/3 0 ≃ 1.8 fm, with n 0 = 0.17 fm −3 denoting standard nuclear density and m the nucleon mass. This distance has to be compared to the mean free path λ 8 of the virtual colour state in nuclear matter, where σ 8 is the cross section for the interaction of the colour octet cc with nucleons. To estimate the size of σ 8 , we assume that the overall colour neutrality required for the propagation of the coloured cc through the nucleus is provided by a comoving light qq pair. The interaction cross section of the QQqq system is then defined by the light qq pair, so that its size can be as large as that of light qq mesons (20 -30 mb). This is much larger than that of a physical J/ψ; it is now not the size of the cc, but its colour which determines the interaction. The mean free path corresponding to such a cross section becomes λ 8 ≃ 2 − 3 fm, which is about the size of the internucleonic distance d 0 = n −1/3 0 ≃ 1.8 fm, so that charmonium production in p − A collisions for x F ≥ 0 and √ s ≥ 20 GeV satisfies the "shadowing [12]. In this regime, the interactions of the virtual cc with successive nucleons interfere destructively with each other; it is the regime which for electromagnetic interactions results in the Landau-Pomeranchuk effect. The production cross section σ(pA → cc) on nuclei is here therefore less than the incoherent result Aσ(pp → cc). The basic quantity studied experimentally is the ratio it would be unity in the absence of any nuclear effects. It is observed, however, that R A/p decreases with increasing cc momentum in the nuclear rest frame (i.e., whenever d c increases), as well as with increasing A. Rather than in terms of d c , R A/p is generally studied in terms of the fractional target parton momentum x t = (md c ) −1 . Decreasing x t thus increases the path length for the colour octet inside the nucleus, and a longer path of the virtual state leads to more destructive interference. When d c reaches the nuclear diameter 2R A , we have the maximum possible interference, so that the suppression saturates for A more realistic form including geometric effects is given by which reduces to Eq. (3.10) for λ/R A ≪ 1. The available data fall into the region 0 ≤ R A /λ 8 = n 0 σ 8 R A < ∼ 0.7, so that the measured nuclear reduction lies between 0.5 and 1. In this region, the relation e −x ≃ 1 − e 1/x is quite well satisfied, so that also should also describe the data. The form (3.12) was in fact obtained as an empirical fit [34]; we see here, however, that it is the result of quantum-mechanical interference effects of the virtual cc and is not related to the interaction of a physical J/ψ with nucleons, as assumed there. For d c ≤ d 0 , the scattering on different nucleons becomes incoherent, giving , the two limits are x sat t = 0.015 and x inc t = 0.12. Qualitatively, this is the type of behaviour found in charmonium production for x F not too large [11]. 
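The two x_t limits quoted above follow from comparing the coherence length d_c = 1/(m x_t) with the internucleonic distance and with the nuclear diameter. The sketch below reproduces them, together with the 2-3 fm mean free path of the octet for σ_8 ≈ 20-30 mb; the choice A ≈ 200 for the saturation estimate is an assumption of the sketch, since the nucleus is not specified at that point.

```python
import math

hbar_c = 0.1973          # GeV * fm
m_N    = 0.94            # GeV, nucleon mass
n0     = 0.17            # fm^-3, standard nuclear density

d0  = n0 ** (-1.0 / 3.0)                 # internucleonic distance, ~1.8 fm
A   = 200
R_A = 1.12 * A ** (1.0 / 3.0)            # nuclear radius, ~6.5 fm

# Coherence length d_c = 1/(m x_t): interference sets in once d_c > d0 and
# saturates once d_c reaches the nuclear diameter 2 R_A.
x_incoherent = hbar_c / (m_N * d0)       # ~0.12
x_saturation = hbar_c / (m_N * 2 * R_A)  # ~0.015

print(f"d0 ~ {d0:.2f} fm,  R_A ~ {R_A:.1f} fm")
print(f"x_t (incoherence) ~ {x_incoherent:.2f}")
print(f"x_t (saturation)  ~ {x_saturation:.3f}")

# Mean free path of the colour octet for an assumed sigma_8 of 20-30 mb:
for sigma_mb in (20, 30):
    sigma_fm2 = sigma_mb * 0.1           # 1 mb = 0.1 fm^2
    print(f"lambda_8({sigma_mb} mb) ~ {1.0 / (n0 * sigma_fm2):.1f} fm")
```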
The specific functional form of the suppression between the two extremes can only be estimated phenomenologically. Experimental results have often been parametrised as R A/p ≃ A α−1 ; our interference considerations imply that the coefficient α depends on x t . An analysis of charmonium production in p − A collisions [11] suggests a simple linear fit in ln A and ln x t , The scale md 0 ≃ 0.12 is chosen to make R sh A/p = 1 in the incoherence regime; the constant c has to be chosen such as to give the correction saturation value. For a fit of p − A data to a similar form, see [8]. Momentum Loss of a Colour Charge in Nuclear Matter The suppression R sh A/p cannot be directly compared to data, however, since shadowing is not the only effect suffered by the coloured cc state on its path through the nucleus. The cc is formed in the collision of the projectile with one of the target nucleons. To leave the nuclear medium, it has to traverse the remaining part of the nucleus, in which it can interact with other nucleons. Getting from one nucleon to the next means passing regions of internuclear physical vacuum, which does not support colour charges. To estimate the effect of this passage, we imagine the cc to stretch a string from its formation region to the next nearest nucleon, then from this to the one after that, and so on. If the pair intially had a cms momentum P , then this will be shifted to where κ 8 ≃ (9/4)κ ≃ (9/4) GeV/fm is the string tension of the colour octet, κ ≃ 1 GeV/fm that of the fundamental triplet [8]; L A is the total path length of the colour state. As consequence of this momentum loss, a cc pair observed at a given x F must have been originally produced at a higher value x F /δ, with The resulting normalised gives the probability that the cc pair will not undergo any scattering on its way out of the target; thus W 0 = 1 implies no nuclear J/ψ suppression. The second term in Eq. (3.16) gives the effect of the n coherent scatterings in the medium, with the resulting shift in x F . The factor 1/δ assures that the distribution remains normalised. Note that the momentum loss of cc pair, for x F ≥ 0 as described here, does not imply any integrated J/ψ suppresion: the production is simply shifted from larger to smaller x F . Dividing Eq. (3.16) by the x F -distribution for h-p collisions, For a nucleus of A = 200 and the other parameters as above, we get the dependence on x F shown in Fig. 3.4 for three different beam energies. We note that on nuclei, compared to nucleons as target, J/ψ production is shifted to lower x F , essentially because of the momentum degradation of the colour octet in passing through the medium. This momentum loss saturates at ∆P max ≃ κ 8 (R A − z 0 ) ≃ 10 GeV and is thus bounded. As a consequence, the relative shift in x F decreases with increasing beam energy, and the effect of the medium disappears for √ s → ∞, when R ml A/p ≃ 1 for 0 < x F < 1. In addition to the non-perturbative nuclear modification of fast quarkonium production in nuclei, the cc pair can undergo hard interactions with the quarks and gluons within each of the nucleons it passes through. We neglect this here, since the density of sufficiently hard scatterings in nuclei is too low to compete with string stretching, as will become evident in Section 3.4. This is changed, however, in a quark-gluon plasma, and the difference in parton hardness for confined and deconfined media is in fact crucial for the use of quarkonia as confinement/deconfinement probe. 
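The size of the momentum loss described above can also be checked quickly. In the sketch below the average octet path length is used in place of (R_A − z_0), which is an approximation of the sketch; it recovers a bounded loss of order 10 GeV, and the relative x_F shift, here crudely taken as ΔP_max over the beam momentum, shrinks as the beam energy grows.

```python
kappa_8 = 9.0 / 4.0     # GeV/fm, string tension of the colour octet
A = 208
R_A = 1.12 * A ** (1.0 / 3.0)
path = 0.75 * R_A       # average octet path, stand-in here for (R_A - z0)

dP_max = kappa_8 * path
print(f"Delta P_max ~ {dP_max:.0f} GeV")                 # of order 10 GeV

# Schematic: the relative shift in x_F scales like dP_max / P_beam and hence
# disappears as sqrt(s) -> infinity.
for P_beam in (200.0, 450.0, 800.0):                     # illustrative lab momenta, GeV
    print(f"P_beam = {P_beam:>5.0f} GeV :  dP/P ~ {dP_max / P_beam:.3f}")
```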
The observed suppression of quarkonium production in h − A collisions will be due to both the effects just discussed, nuclear shadowing and momentum loss of the coloured cc pair. We thus have to compare experimental results to the product of the two mechanisms, as given by Eqs. (3.12) and (3.18). The energy loss is effective mainly at larger x F ; since the integrated cross section, on the other hand, is determined by the region around x F ≃ 0, its suppression is dominantly due to nuclear shadowing. Hence Eq. (3.11) should determine the main A-dependence of charmonium suppression in p − A collisions at x f ≥ 0, and as a comparison to the equivalent form (3.12) shows [34], it indeed does. This form, but with a slightly different parametrisation of shadowing [11], has been compared to the available data [27,28] in [8]. In spite of the simplistic fit to shadowing and some kinematic approximations at small x F , the result ( Fig. 3.5) is seen to reproduce the observed suppression quite well. Quarkonium production in hadron-nucleus collisions, as experimentally studied up to now, appears thus as theoretically quite well understood, based on the interaction of a fast colour octet cc pair with nuclear matter. The dominant mechanisms for this interaction are the destructive quantum-mechanical interference of the scattering amplitudes of the cc on different nucleons, and the momentum loss of the cc as it traverses the physical vacuum between successive nucleon interactions. Charmonium Interactions in Nuclear Matter If we want to know the effect of nuclear matter on a fully formed physical J/ψ, then -as shown in Table 1 -we have to study the production of J/ψ's slow in the nuclear rest frame. The study of such slow charmonia has up to now been essentially impossible. It requires the detection of slow decay dileptons, and for this the abundance of slow hadrons constitutes an overwhelming background. In the case of fast dileptons, a hadron absorber can eliminate these, and hence all p − A studies were so far restricted to dilepton pairs of more than 20 GeV in the rest frame of the nuclear target. The advent of the P b-beam at the CERN-SPS has removed this constraint. With the P b-beam incident on a hydrogen (or deuterium) target, the nuclear rest frame moves with a lab momentum of 160 GeV. Hence now those charmonia (and their decay dileptons) which are slow in the nuclear rest frame are very fast in the lab system and will thus pass the hadron absorber; such experiments can therefore provide the cross section for the break-up interaction of physical J/ψ's on nucleons. To avoid confusion, we shall continue to define x F also in such P b − h reactions as positive for cms momenta in the direction of the hadron; with this terminology, P b − h collisions provide us with information about the so far unknown region of negative x F . Estimates [8,25] show that for √ s ≥ 17 GeV and x F ≤ −0.2, a cc is colourless before it reaches the next nucleon, so that colour interactions no longer enter. We now parametrise the survival probability for state i (i = J/ψ, χ c , ψ ′ ) as where n 0 is as before normal nuclear density, L the average path length of the charmomium state i in the nucleus, and σ i its absorption cross section of the state i in nuclear matter. If the charmonium state is fully formed before it leaves the range of the nucleon at which it was produced, σ i is simply the charmonium-nucleon cross section. 
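A minimal numerical sketch of the exponential absorption parametrisation introduced above (Eq. (3.20)), S_i = exp(−n_0 σ_i L): the path length and the trial cross sections below are illustrative placeholders, chosen only to show how weakly a sub-millibarn threshold cross section suppresses the yield compared with a several-millibarn geometric one.

```python
import math

n0 = 0.17                      # fm^-3, normal nuclear density
L  = 5.0                       # fm, a typical average path length in a heavy nucleus

def survival(sigma_mb, n=n0, L=L):
    """S = exp(-n0 * sigma * L), with 1 mb = 0.1 fm^2."""
    return math.exp(-n * sigma_mb * 0.1 * L)

# Illustrative break-up cross sections (placeholders, not fitted values):
for sigma in (0.1, 1.0, 2.5, 6.0):
    print(f"sigma = {sigma:>4} mb  ->  S ~ {survival(sigma):.2f}")
```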
If it is not yet fully formed, the effective cross section will be smaller, vanishing in the colour transparency limit of a pointlike colour singlet [35]. For illustration, we can therefore use a simplistic parametrisation of the cross section as function of the distance d the state has travelled before leaving the nucleus [36,37], where σ i is the fully developed cross section andL = [(P A /M i )τ (3.22) this reduces to Eq. (3.20) for momenta high enough to makeL = 0. To determine the actual survival probabilities, we now need the values σ i for the fully formed resonances colliding with nucleons. That is a topic in its own right and will be taken up in the next chapter. Before turning to this question, we want to connect the survival probability (3.20) to the measured suppression ratio R A/p . EMC Effect Modifications The survival probabilities S i are related to the p − A and p − p induced charmonium production cross sections through where g A (x) and g A (x) are the parton distribution functions in nuclear and proton target, respectively. The fractional parton momentum x is now, with x F ≤ 0, expressed by in terms of the variables x F and s. At the energy possible for P b-beam experiments ( √ s = 17.4 GeV), and for x F ≤ 0, we have x ≥ 0.15. We are thus above the region in which quantum-mechanical coherence effects (nuclear shadowing) can play a role; the reaction now is indeed an incoherent sum of interactions with different nucleons. In deep inelastic scattering on nuclear targets there is for x 2 ≥ 0.15 a modification in comparison to the same process on nucleons (EMC effect [38]; it can now be interpreted as a genuine change of the parton distribution function and applied to other hard processes. The resulting pattern for quark distributions q(x), is illustrated in Fig. 3.6 for M = 3.1 GeV and √ s = 17.4 GeV, making use of Eq. (3.24) to convert the Bjorken variable x to x F . Although the EMC effect has so far been observed only for quarks, it is to be expected that gluons will exhibit a similar behaviour. This is one reason why the initial state factor R A/p (x) will introduce an further x F variation in addition to that coming from S i (x F ). A second reason was already indirectly mentioned above: with increasing |x F |, charmonium production is more and more due to quark-antiquark annihilation rather than to gluon fusion. The two contributions become approximately equal around |x F | = 0.5, and as |x F | → 1, the qq contribution is dominant. We shall here assume that the gluon distributions behave similarly and use the quark form of R A/p in the whole region −1.0 ≤ x F ≤ 0. Eqs. The Theory of Quarkonium-Hadron Interactions In this section, the cross sections for the inelastic interaction between quarkonia and light hadrons will be calculated in short distance QCD, based on the small radii and the large binding energies of the lowest QQ resonances. A break-up of such states requires hard gluons, but the gluons confined to hadrons slow in the quarkonium rest frame are in general very soft, so that the resulting cross sections remain very small until quite high collision energies. The Short-Distance Analysis of Quarkonium-Hadron Interactions The interaction of quarkonia with ordinary light hadrons plays an important role both in the dynamics and in the thermodynamics of strong interaction physics. 
For QCD dynamics, it is important since the small quarkonium size probes the short-distance aspects of the big light hadrons and thus makes a parton-based calculation of the overall cross section possible [39][40][41][42][43]. In QCD thermodynamics, quarkonia can be used as a probe for deconfinement [1,2], provided their interaction in dense confined matter can be distinguished from that in a quark-gluon plasma. We begin this chapter with a brief summary of the QCD analysis of quarkonium interactions with light hadrons. It concludes that the small size of quarkonia, combined with the rather large mass gap to open charm or beauty, strongly inhibits their break-up by low energy collisions with light hadrons [40]; the total quarkonium-hadron cross section attains a constant asymptotic value only at very high energies, compared to corresponding cross sections for the interaction between light hadrons. This slowly rising form of the cross section is derived from an operator product expansion with ensuing sum rules and becomes quite transparent in parton language. The QCD analysis of quarkonium interactions applies to heavy and strongly bound quark-antiquark states [40]; therefore we here restrict ourselves to the lowest cc and bb vector states J/ψ and Υ, which we denote generically by Φ, following the notation of [40]. For such states, both the masses m Q of the constituent quarks and the binding energies ǫ 0 (Φ) ≃ (2M (Qq) − M Φ ) are much larger than the typical scale Λ QCD for non-perturbative interactions; here (Qq) denotes the lowest open charm or beauty state. In Φ − h interactions, as well as in Φ-photoproduction, γh → Φh, we thus only probe a small spatial region of the light hadron h; these processes are much like deep-inelastic lepton-hadron scattering, with large m Q and ǫ 0 in place of the large virtual photon mass −q 2 . As a result, the calculation of Φphotoproduction and of absorptive Φ−h interactions can be carried out in the shortdistance formalism of QCD. Just like deep-inelastic leptoproduction, these reactions probe the parton structure of the light hadron, and so the corresponding cross sections can be calculated in terms of parton interactions and structure functions. In the following, we shall first sketch the theoretical basis which allows quarkonium interactions with light hadrons to be treated by the same techniques as used in deep-inelastic lepton-hadron scattering or in the photoproduction of charm. We show the derivation for the sum rules which relate the absorptive Φ − h cross section to hadronic gluon structure functions [39,40]. This relation given, we calculate explicitly the energy dependence of the cross section. Readers only interested in this behaviour can therefore go immediately to Eq. (4.24). Consider the amplitude for forward scattering of a virtual photon on a nucleon, In the now standard application of QCD to deep-inelastic scattering one exploits the fact that at large spacelike photon momenta q the amplitude is dominated by small distances of order 1/ −q 2 (Fig. 4.1a). The Wilson operator product expansion then allows the evaluation of the amplitude at the unphysical point pq → 0, where p is the four-momentum of the nucleon. Since the imaginary part of the amplitude (4.1) is proportional to the experimentally observed structure functions of deepinelastic scattering, the use of dispersion relations relates the value of the amplitude at pq → 0 point to the integrals over the structure functions, leading to a set of dispersion sum rules [44]. 
The parton model can be considered then as a particularly useful approach satisfying these sum rules. In the case J µ =Qγ µ Q, i.e., when vector electromagnetic current in Eq. (4.1) is that of a heavy quark-antiquark pair, large momenta q are not needed to justify the use of perturbative methods. Even if q ∼ 0, the small space-time scale of x is set by the mass of the charmed quark, and the characteristic distances which are important in the correlator (4.1) are of the order of 1/2m Q (Fig. 4.1b). In [41,42], this observation was used to derive sum rules for charm photoproduction in a manner quite similar to that used for deep-inelastic scattering. In the interaction of quarkonium with light hadrons, again the small space scale is set by the mass of the heavy quark, and the characteristic distances involved are of the order of quarkonium size, i.e., smaller than the non-perturbative hadronic scale Λ −1 QCD . Moreover, since heavy quarkonium and light hadrons do not have quarks in common, the only allowed exchanges are purely gluonic. However, the smallness of spatial size is not enough to justify the use of perturbative expansion [40]. Unlike in the case of Φ-photoproduction, heavy quark lines now appear in the initial and final states (see Fig. 4.1c), so that the QQ state can emit and absorb gluons at points along its world line widely separated in time. These gluons must be hard enough to interact with a compact colour singlet state (colour screening leads to a decoupling of soft gluons with the wavelengths larger than the size of the Φ); however, the interactions among the gluons can be soft and nonperturbative. We thus have to assure that the process is compact also in time. Since the absorption or emission of a gluon turns a colour singlet quarkonium state into a colour octet, the scale which regularizes the time correlation of such processes is by the quantum-mechanical uncertainty principle just the mass difference between the colour-octet and coloursinglet states of quarkonium: τ c ∼ 1/(ǫ 8 −ǫ 1 ). The perturbative Coulomb-like piece of the heavy quark-antiquark interaction is attractive in the colour singlet (k = 1) and repulsive in the colour-octet (k = 8) state; in SU(N) gauge theory To leading order in 1/N , the mass gap between the singlet and octet states is therefore just the binding energy of the heavy quarkonium ǫ 0 , and the characteristic correlation time for gluon absorption and emission is Although the charm quark is not heavy enough to ensure a pure Coulomb regime even for the lowest cc bound states (η c and J/ψ), the mass gap determined from the observed value of open charm threshold clearly shows that τ c < Λ QCD . For the Υ, the interaction is in fact essentially Coulomb-like and the mass gap to open beauty is even larger than for charm. One therefore expects to be able to treat quarkonium interactions with light hadrons by the same QCD methods that are used in deep-inelastic scattering and charm photoproduction. We thus use the operator product expansion to compute the amplitude of heavy quarkonium interaction with light hadrons, where the set {O n } includes all local gauge invariant operators expressible in terms of gluon fields; the matrix elements O n are taken between the initial and final light-hadron states. The coefficients c n are computable perturbatively [39] and process-independent. As noted above, in deep-inelastic scattering the expansion (4.5) is useful only in the vicinity of the point pq → 0. 
The same is true for the case of quarkonium interaction with light hadrons. As shown in [40], the expansion (4.5) can therefore be rewritten as an expansion in the variable where M h is the mass of the light hadron; the approximate equality becomes valid the heavy quark limit. For the lowest 1S quarkonium state one then obtains where r 0 and ǫ 0 are Bohr radius and binding energy of the quarkonium, and the sum runs over even values of n to ensure the crossing symmetry of the amplitude. The most important coefficients d n were computed in [39] to leading order in g 2 and 1/N . Since the total Φ − h cross section σ Φh is proportional to the imaginary part of the amplitude F Φh , the dispersion integral over λ leads to the sum rules 2 π Eq. (4.7) provides only the inelastic intermediate states in the unitarity relation, since direct elastic scattering leads to contributions of order r 6 0 . Hence the total cross section in Eq. (4.8) is due to absorptive interactions only [40], and the integration in Eq. (8) starts at a lower limit λ 0 > M h . Recalling now the expressions for radius and binding energy of 1S Coulomb bound states of a heavy quark-antiquark pair, and using the coefficients d n from [39], it is possible [40] to rewrite these sum rules in the form Eq. (4.14) relates the Φ − h cross section to the gluon structure function. To get a first idea of this relation, we neglect the n-dependence of I(n) compared to that of O n ; then we conclude that since all order Mellin transforms of these quantities are equal up to a constant. From Eq. (4.16) it is clear that the energy dependence of the Φ − h cross section is entirely determined by the x−dependence of the gluon structure function. The small x behaviour of the structure function governs the high energy form of the cross section, and the hard tail of the gluon structure function for x → 1 determines the energy dependence of σ Φh close to the threshold. To obtain relation (4.16), we have neglected the n-dependence of the function I(n). Let us now try to find a more accurate solution of the sum rules (4.13). We are primarily interested in the energy region not very far from the inelastic threshold, i.e., since we want to calculate in particular the absorption of Φ's in confined hadronic matter. In such an environment, the constituents will be hadrons with momenta of at most a GeV or two. A usual hadron (π, ρ, nucleon) of 5 GeV momentum, incident on a J/ψ at rest, leads to √ s ≃ 6 GeV, and this corresponds to λ ≃ 5 GeV. From what we learned above, the energy region corresponding to the range (4.17) will be determined by the gluon structure function at values of x not far from unity. There the x-dependence of g(x) can be well described by a power law where the function (18) is normalized so that the second moment (4.12) gives the fraction g 2 of the light hadron momentum carried by gluons, < O 2 >= g 2 ≃ 0.5. This suggests a solution of the type where a and α are constants to be determined. Substituting (4.18) and (4.19) into the sum rule (4.13) and performing the integrations, we find . (4.20) We are interested in the region of low to moderate energies; this corresponds to relatively large x, to which higher moments are particularly sensitive. Hence for the range of n for which Eq. (4.5) is valid [42], n < ∼ 8, the essential n-dependence is contained in the Γ-functions. For n > ∼ 4, Eq. (4.20) can be solved in closed form by using an appropriate approximation for the Γ-functions. We thus obtain a Γ(α + 1) Γ(k + 2) ≃ const. n α−k−5/2 . 
Hence to satisfy the sum rules (4.14), we need . (4.22) Therefore the solution of the sum rules (4.13) for moderate energies λ takes the form To be specific, we now consider the J/ψ-nucleon interaction. Setting k = 4 in accord with quark counting rules, using g 2 ≃ 0.5 and expressing the strong coupling g 2 in terms of the binding energy ǫ 0 (Eq. 4.10), we then get from Eq. (4.23) the energy dependence of the J/ψ N total cross section with λ given by Eq. (6) and λ 0 ≃ (M N + ǫ 0 ). This cross section rises very slowly from threshold, as shown in Fig. 4.2; for P N ≃ 5 GeV, it is around 0.1 mb, i.e., more than an order of magnitude below its asymptotic value. We should note that the high energy cross section of 2.5 mb in Eq. as high energy value. This is somewhat smaller than that obtained from geometric arguments [46] and potential theory [47]. For sufficiently heavy quarks, the dissociation of quarkonium states by interaction with light hadrons can thus be fully accounted for by short-distance QCD. Such perturbative calculations become valid when the space and time scales associated with the quarkonium state, r Q and t Q , are small in comparison to the nonperturbative scale Λ −1 Λ −1 QCD is also the characteristic size of the light hadrons. In the heavy quark limit, the quarkonium binding becomes Coulombic, and the spatial size r Q ∼ (α s m Q ) −1 thus is small. The time scale is by the uncertainty relation given as the inverse of the binding energy E Q ∼ m Q and hence also small. For the charmonium ground state J/ψ, we have (4.27) With Λ QCD ≃ 0.2 GeV, the inequalities (4.25) seem already reasonably well satisfied, and also the heavy quark relation E J/ψ = (1/m c r 2 J/ψ ) is very well fulfilled. We therefore expect that the dissociation of J/ψ's in hadronic matter will be governed by the J/ψ-hadron break-up cross section as calculated in short-distance QCD. Nevertheless, in view of the finite charm quark mass, it makes sense to ask if the formalism developed here correctly describes J/ψ interactions with light hadrons. We will take up this question first theoretically, in the next section, where we shall see if non-perturbative interactions can lead to significant contributions to J/ψ break-up. In Section 4.3, we then see how to verify experimentally the shortdistance QCD prediction for J/ψ break-up by light hadrons. Non-Perturbative Quarkonium Dissociation For an isolated J/ψ-hadron system, non-perturbative interactions can be pictured most simply as a quark rearrangment [48]. Consider putting a J/ψ "into" a stationary light hadron; the quarks could then just rearrange their binding pattern to give rise to transitions such as J/ψ + N → Λ c +D or J/ψ + ρ → D +D (Fig. 4.3). The probability for such a transition can be written as where the spatial distribution of the cc bound state is given by the squared wave function |φ ψ (r)| 2 . The function R(r) in eq. (4.28) describes the resolution capability of the colour field inside the light hadron. Its wave length is of order Λ −1 QCD , and so it cannot resolve the charge content of very much smaller bound states; in other words, it does not "see" the heavy quarks in a bound state of radius r Q << Λ −1 QCD and hence cannot rearrange bonds. The resolution R(r) will approach unity for rΛ QCD >> 1 and drop very rapidly with r for rΛ QCD < 1, in the functional form R(r) ≃ (rΛ QCD ) n , rΛ QCD < 1, (4.29) with n = 2 [49] or 3 [39]. As a result, the integrand of eq. (4.28) will peak at some distance r 0 , with r Q < r 0 < Λ QCD . 
Since the bound state radius of the quarkonium ground state decreases with increasing heavy quark mass, while R(r) is m Q -independent, r 0 → r Q → 0 as m Q → ∞. Hence P r vanishes in the limit m Q → ∞ because R(r 0 ) does, indicating that the light quarks can no longer resolve the small heavy quark bound state. In a potential picture, the situation just described means that the charm quarks inside the J/ψ have to tunnel from r = r ψ out to a distance at which the light quarks can resolve them, i.e., out to some r ≃ cΛ −1 QCD , where c is a constant of order unity (Fig. 4.4). Such tunneling processes are therefore truly non-perturbative: they cover a large space-time region, of linear size Λ −1 QCD , and do not involve any hard interactions. Following [48], we shall here estimate the contribution of nonperturbative tunnelling to the dissociation of quarkonium states. In general, the problem of quark tunneling cannot be solved in a rigorous way, since it involves genuine non-perturbative QCD dynamics. However, the large mass of the heavy quark allows a very important simplification, the use of the quasiclassical approximation. In this approximation, the rate of tunneling R tun can be written down in a particularly transparent way: it is simply the product of the frequency ω ψ ′ of the heavy quark motion in the potential well and the tunnelling probability P tun when the quark hits the wall of the well, The frequency ω ψ is determined by the gap to the first radial excitation, Consider now the potential seen by the cc (Fig. 4.4). For a particle of energy E, the probability of tunneling through the potential barrier V (r) is obtained from the squared wave function in the "forbidden" region. It can be expressed in terms of the action W calculated along the quasiclassical trajectory, Here the momentum |p| is given by and r 1 , r 2 are the turning points of the classical motion determined from the condition V (r i ) = E. In our case, the width of the barrier is approximately 0.6 Λ −1 QCD , while its height (V − E) is equal to the dissociation threshold E Q . The mass M in Eq.(4.34) is the reduced mass, M = m Q /2. We thus have For the J/ψ, we get from (4.35) the value W ≃ 3; this a posteriori justifies the use of quasiclassical approximation, which requires S > 1. Using eq. (4.35), we obtain as final form for the tunneling rate (4.30) With the above mentioned J/ψ parameters, this leads to the very small dissociation rate R tun ≃ 9.0 × 10 −3 fm −1 . (4.37) In terms of R 0 , the J/ψ survival probability is given by where t max denotes the maximum time the J/ψ spends adjacent to the light hadron. In the limit t max → ∞, S ψ ′ vanishes. However, the uncertainty relations prevent a localisation of the two systems in the same spatial area for long times. From ∆x ≤ Λ −1 QCD we get ∆p ≥ Λ QCD , so that the longest time which the J/ψ can spend in the interaction range of the light hadron is with m for the mass of the light hadron. For nucleons or vector mesons, this time is 4 -5 fm, and with this, the survival probability is very close to unity; hence non-perturbative tunneling interactions provide only negligible contributions to J/ψ dissociation. In addition to such tunnelling, there can be direct and sequential thermal excitation to the continuum. In particular the latter still requires further analysis [48]. The Experimental Study of Charmonium-Hadron Interactions We now return to the break-up rate for J/ψ interactions with slow light hadrons, as predicted by heavy quark QCD. 
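For orientation, the sketch below evaluates the short-distance QCD cross section in the closed form suggested by the approximate solution of the sum rules, σ_ψN(λ) ≈ σ_∞ (1 − λ_0/λ)^{k+5/2} with k = 4, an assumed asymptotic value σ_∞ ≈ 2.5 mb, and λ_0 ≈ M_N + ε_0. The identification of λ with the nucleon energy in the J/ψ rest frame and the exact normalisation are assumptions of this sketch, so the printed values are indicative of the slow threshold rise only.

```python
import math

M_N, eps0 = 0.94, 0.64           # GeV: nucleon mass, J/psi binding energy
sigma_inf = 2.5                  # mb, assumed asymptotic value
k         = 4                    # from quark counting rules
lam0      = M_N + eps0           # GeV, threshold of the dispersion integral

def sigma_psiN(lam):
    """Sketch of the slow threshold rise: sigma ~ sigma_inf * (1 - lam0/lam)^(k + 5/2)."""
    if lam <= lam0:
        return 0.0
    return sigma_inf * (1.0 - lam0 / lam) ** (k + 2.5)

# lambda taken here as the nucleon energy seen in the J/psi rest frame (assumption):
for P_N in (3.0, 5.0, 10.0, 30.0, 100.0):                # GeV, nucleon momentum
    lam = math.sqrt(P_N ** 2 + M_N ** 2)
    print(f"P_N = {P_N:>6.1f} GeV :  sigma ~ {sigma_psiN(lam):.2f} mb")
```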
Since this prediction is crucial for the use of quarkonia as confinement/deconfinement probe, it certainly must be checked experimentally. We shall now consider how that can be done. In Eq. (3.22), we had obtained the survival probability for charmonium states i in p−A interactions in terms of the break-up cross sections σ i . We now just have to insert the cross section (4.24) into this expression to obtain the J/ψ survival probability as function of x F . In Fig. 4.5 the result is shown for a cms collision energy of 17.4 GeV, the value provided in CERN SPS P b-beam experiments. Included in this figure is also the colour octet interaction region x F ≥ 0, together with data [27] and the fit obtained in section 3.2. We see that in the regime x F ≤ −0.2, in which physical J/ψ's interact with the nuclear medium, the survival probability is essentially unity, due to the smallness of the cross section (4.24) in the threshold region. This picture has to be contrasted to the geometric absorption approach, which provides the basis for all hadronic accounts of J/ψ-suppression in nucleus-nucleus collisions. Here the J/ψ cross section is assumed to attain its high energy value σ J/ψN (s = ∞) ≃ 2.5 mb immediately at threshold. The survival probability for this case is also shown in Fig 4.5, and is seen to differ both qualitatively and quantitatively from that based on the QCD result (4.24). Through a measurement of J/ψ-production in p − A collisions at negative x F , technically attainable through data from a P b-beam incident on a hydrogen or deuterium target at positive x F , one can thus check if the threshold behaviour predicted by short distance QCD for inelastic J/ψ-hadron interactions is indeed correct. The actual experimental results will differ from what is shown in Fig. 4.5 for two reasons. We have already noted in section 3.4 the modifications expected because of the EMC effect. A second reason is that the J/ψ peak observed in the measured dilepton spectrum is only to about 60 % due to directly produced 1S cc J/ψ resonances; the remaining 40 % are mainly due to χ c production with the subsequent radiative decay χ c → J/ψ + γ [22,23].* Concerning the EMC effect, we can either fold it into the predictions shown in Fig. 4.5, or it can be measured independently and then removed from the J/ψproduction data. Measuring Drell-Yan dilepton production in the same kinematic region provides R A/p (x) (see Eq. (3.23)) for quarks directly, without any final state modification. A measurement of open charm production leads to R A/p (x) in just the same superposition of quark-antiquark annihilation and gluon fusion as in charmonium production, but again without any final state effect. Separate measurements of Drell-Yan and/or open charm production would thus determine the EMC modification without addititional final state effects. Such measurements can therefore be used to unfold the EMC modification of J/ψ data. To avoid any χ c admixture, it would of course be ideal to measure both J/ψ and χ c -production directly in p − A collisions [23]. This may be too difficult due to the abundance of photons in the case of heavy nuclei as targets. We shall therefore try to estimate the effect of the χ c admixture in the two scenarios considered here by simply adding the corresponding predictions with the noted 60/40 weights. 
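The last step described above, adding the directly produced J/ψ and the χ_c contribution with 60/40 weights, is trivial to encode; the survival values passed in below are placeholders for illustration, not predictions.

```python
def observed_jpsi_survival(S_direct, S_chi, chi_fraction=0.40):
    """Survival of the measured J/psi peak, combining direct 1S production
    with the radiative chi_c feed-down at the quoted 60/40 weights."""
    return (1.0 - chi_fraction) * S_direct + chi_fraction * S_chi

# Placeholder inputs, for illustration only:
print(observed_jpsi_survival(0.98, 0.85))
print(observed_jpsi_survival(0.90, 0.70))
```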
For the geometric approach with its asymptotic cross sections, this requires a calculation completely analogous to that for the J/ψ, but now with σ χ ≃ 6 mb and the correspondingly longer resonance formation time in theL of Eq. (3.22). The application of short distance QCD to calculate the inelastic χ c -hadron cross section is as reliable as for the J/ψ, since the binding energy of the χ c , ǫ χ ≃ 0.24 GeV, is about equal to Λ QCD . Nevertheless, such a calculation can give us * We shall for simplicity ignore here a further small contribution ( < ∼ 5 %) fom ψ ′ decay). some idea of the expected behaviour. The result is [25] The asymptotic value is thus a factor two larger than the geometric estimate; this is a consequence of the fact that short distance QCD [40,41] leads to higher powers in the bound state radii than just r 2 . The behaviour of the χ c N cross section (4.40) is shown in Fig. 4.2. Inserting Eq. (4.40) into (3.22) and adding J/ψ and χ c contributions then gives us the short distance QCD survival probability for the measured J/ψ. It and the corresponding result from the geometric approach are shown in Fig. 4.6. As seen, for x F ≤ −0.4, the two approaches differ qualitatively in their functional form and quantitatively by more than 20%. An experimental test of the theory for the interaction of heavy quarkonia with light hadrons should therefore be possible. As last point in this section, we comment briefly on the interaction of the ψ ′ with light hadrons. Since its binding energy is only about 50 MeV, it lies almost at the open charm threshold and can definitely not be treated by short distance QCD. Here it might therefore not be unreasonable to assume that it attains its high energy value rather soon after threshold. This value can only be estimated by geometric arguments, and these suggest around 10 mb [46,47]. Quarkonium Dissociation in Confined and Deconfined Media Here we first show that at fixed temperature (or energy density), a deconfined medium contains much harder gluons than a confined medium. Tightly bound quarkonia probe gluon hardness: while they were found to remain essentially unaffected in a confined medium at T < ∼ 0.5 GeV, a QGP of such temperature is shown to be very effective in their disssociation. To relate this to the environment produced in nuclear collisions, the resulting charmonium survival is studied in the case of isentropic longitudinal expansion. The Parton Structure of Confined vs. Deconfined Matter The ultimate constituents of matter are evidently always quarks and gluons. What we want to know is if these quarks and gluons are confined to hadrons or not. Let us therefore assume that we are given a macroscopic volume of static strongly interacting matter and have to analyse its confinement status. As prototype for matter in a confined state, we consider an ideal gas of pions. Their momentum distribution is thermal, i.e., for temperatures not too low it is given by exp(−E π /T ) ≃ exp(−p π /T ). Hence the average momentum of a pion in this medium is p π = 3 T . The distribution of quarks and gluons within a pion is known from structure function studies; the gluon density is g(x) ≃ 0.5(1 − x) 3 for large x = p g /p π . 1) As a consequence, the average momentum of a gluon in confined matter is given by Hence in a medium of temperature T ≃ 0.2 GeV, the average gluon momentum is around 0.1 GeV. 
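The ≈ 0.1 GeV figure follows from two averages: the thermal pion momentum ⟨p_π⟩ = 3T and the mean momentum fraction of a gluon with density g(x) ∝ (1 − x)³, which is ⟨x⟩ = 0.2. A short check (with the ⟨x⟩ integral done numerically for transparency):

```python
g = lambda x: (1.0 - x) ** 3          # shape of the gluon density in the pion at large x

# Mean momentum fraction <x> of a gluon, by simple numerical integration:
N = 100000
xs = [(i + 0.5) / N for i in range(N)]
mean_x = sum(x * g(x) for x in xs) / sum(g(x) for x in xs)   # -> 0.2

T = 0.2                                # GeV
p_pion = 3.0 * T                       # average thermal pion momentum
p_gluon_confined = mean_x * p_pion

print(f"<x>              ~ {mean_x:.2f}")
print(f"<p_g> (confined) ~ {p_gluon_confined:.2f} GeV")      # ~0.1 GeV
```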
In contrast, the distribution of gluons in a deconfined medium is directly thermal, i.e., exp(−p g /T ), so that Hence the average momentum of a gluon in a deconfined medium is five times higher than in a confined medium 2) ; for T = 0.2 GeV, it becomes 0.6 GeV. An immediate consequence of deconfinement is thus a considerable hardening of the gluon momentum distribution. Although we have here presented the argument for massless pions as hadrons, it remains essentially unchanged for heavier mesons (ρ/ω) or nucleons, where one can use a non-relativistic thermal distribution for temperatures up to about 0.5 GeV. We thus have to find a way to detect such a hardening of the gluon distribution in deconfined matter. The lowest charmonium state J/ψ provides an ideal probe for this. It is very small, with a radius r ψ ≃ 0.2 fm ≪ Λ −1 QCD , so that J/ψ interactions with the conventional light quark hadrons probe the short distance features, the parton infra-structure, of the latter. It is very strongly bound, with a binding energy ǫ ψ ≃ 0.65 GeV ≫ Λ QCD ; hence it can be broken up only by hard partons. Since it shares no quarks or antiquarks with pions or nucleons, the dominant perturbative interaction for such a break-up is the exchange of a hard gluon, and this was the basis of the short distance QCD calculations presented in the previous chapter. We thus see qualitatively how a deconfinement test can be carried out. If we put a J/ψ into matter of temperature T = 0.2 GeV, then -if the matter is confined, p g conf ≃ 0.1 GeV, which is too soft to resolve the J/ψ as a cc bound state and much less than the binding energy ǫ ψ , so that the J/ψ survives; -if the matter is deconfined, p g decon ≃ 0.6 GeV, which (with some spread in the momentum distribution) is hard enough to resolve the J/ψ and to break the binding, so that the J/ψ will disappear. The latter part of our result is in accord with the mentioned prediction that the formation of a QGP should lead to J/ψ suppression [1,16]. There it was argued that in a QGP, colour screening would prevent any resonance binding between the perturbatively produced c andc, allowing the heavy quarks to separate. At the hadronisation point of the medium, they would then be too far apart to bind to a J/ψ and would therefore form a D and aD. Although the details of such a picture agreed well with the observed J/ψ suppression [50], it seemed possible to obtain a similar suppression by absorption in a purely hadronic medium [51], through collisions of the type Taking into account the partonic substructure of such hadronic break-up processes, we now see that this is in fact not possible for hadrons of reasonable thermal momentum. Our picture thus not only provides a dynamical basis for J/ψ suppression by colour screening, but it also indicates that in fact J/ψ suppression in dense matter will occur if and only if there is deconfinement. We note, however, that the dynamical approach to J/ψ suppression does not require a thermal equilibration of the interacting gluons, so that it will remain applicable even in deconfined pre-equilibrium stages. While we have studied the hadronic part of the argument in detail in the previous sections, we have so far not considered the dynamics of quarkonium dissociation in a deconfined medium. This will be taken up in the next section. Quarkonium Dissociation by Deconfined Gluons In section 4.2 we had obtained the cross section for the dissociation of a tightly bound quarkonium by an incident light hadron. Eq. 
(4.23) was obtained [2] by convolution of the inelastic gluon-charmonium cross section with the gluon distribution in the light hadron. The gluon-quarkonium cross section itself is given by with k denoting the momentum of the gluon incident on a stationary quarkonium. In 5.1 thus provides the basis for the claim that in matter temperature T ≤ 0.5 GeV, gluons of thermal momentum can break up charmonia, while hadrons cannot. We note here that, just as in the photoelectric dissociation of atoms, the break-up is most effective when the momentum of the gluon is somewhat above the binding energy. Gluons of lower momenta can neither resolve the constituents in the bound state nor raise them up to the continuum; on the other hand, those of much higher momenta do not see the (by their scales) large object and just pass through it. To illustrate this more explicitly, we calculate the break-up cross section for the J/ψ as function of the temperature T of an ideal QGP. Using Eq. (5.4) with m c = 1.5 GeV and the J/ψ binding energy of 0.64 GeV, we then get The result is shown in Fig. 5.2 and confirms that up to about T ∼ 0.5 GeV, only a deconfined medium can dissociate J/ψ's. We see moreover that the effective cross section for break-up in the temperature range 0.2 ≤ T ≤ 0.5 GeV is about 1.2 mb. It is this value which will determine the suppression of the (pure 1S) J/ψ in a deconfined medium. Before we turn to charmonium production in a more realistic non-static environment, we want to consider briefly the possible role of quarks in the dissociation of charmonia in a QGP. This can be treated in a fashion quite similar to the interaction of quarkonia with light hadrons; the gluon distribution function in the hadron should now be replaced by an effective gluon distribution "in" quarks, i.e., the quark splitting function P (x) characterising the process q → q + g. It's functional form for x → 1 is fixed both by quark counting rules [52] and the Altarelli-Parisi equations [53], which leads to an average gluon momentum The average momentum of gluons emitted by thermal quarks is thus higher than for gluons confined to thermal hadrons; it is nevertheless low enough to suggest that in an equilibrium plasma, the direct interaction with thermal gluons is the main dissociation mechanism. In pre-equilibrium, however, the quarks can be much faster and become the dominant cause of dissociation. Charmonium Survival in an Expanding Medium The exponential quarkonium survival probability of the general form (3.20) applies to a stationary medium of finite size. In nuclear collisions, the medium needs some proper time t 0 to be formed, and it then expands until a time t f when the era of strong interactions in the medium ends. Hence slow quarkonia in the medium of not too high an initial temperature will in general stop interacting when the medium has cooled down enough, even though they have not yet left it. The survival probability of a quarkonium state i in such an expanding medium can be written as where n(t) is the density of scattering centers at time t 0 ≤ t ≤ t f and σ i (t) the break-up cross section for state i in the medium at that time. If we assume isentropic longitudinal expansion, density, temperature and time are related by with T ≤ T 0 . Using this relation, we can rewrite Eq. (5.6) in the form where T f is the temperature of the medium for which the break-up of state i stops. The first term in the exponent corresponds to dissociation by hadrons, the second by gluons. 
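A schematic numerical integration of this survival probability is sketched below, assuming isentropic longitudinal expansion (T³t = T₀³t₀), a constant effective gluon break-up cross section of ≈ 1.2 mb, t₀ = 1 fm, and break-up switched off below an assumed T_f = 0.15 GeV. All of these simplifications are assumptions of the sketch, so the printed values only illustrate the qualitative trend of a rapidly falling survival with increasing initial temperature, not the quantitative curve of Fig. 5.3.

```python
import math

hbar_c = 0.1973
zeta3  = 1.2020569

def n_gluon(T):
    """Ideal-gas (Stefan-Boltzmann) gluon density, in fm^-3, with T in GeV."""
    return 16.0 * zeta3 / math.pi ** 2 * T ** 3 / hbar_c ** 3

sigma_eff = 0.12        # fm^2, ~1.2 mb effective break-up cross section (taken constant)
t0, T_f   = 1.0, 0.15   # fm and GeV: formation time, temperature where break-up stops

def survival(T0, steps=2000):
    if T0 <= T_f:
        return 1.0
    t_end = t0 * (T0 / T_f) ** 3        # isentropic longitudinal expansion: T^3 t = const
    dt = (t_end - t0) / steps
    integral = 0.0
    for i in range(steps):
        t = t0 + (i + 0.5) * dt
        T = T0 * (t0 / t) ** (1.0 / 3.0)
        integral += n_gluon(T) * sigma_eff * dt
    return math.exp(-integral)

for T0 in (0.20, 0.25, 0.30, 0.35, 0.40):
    print(f"T0 = {T0:.2f} GeV :  S ~ {survival(T0):.2f}")
```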
We had seen that the main contribution to the dissociation comes from gluon-quarkonium interactions, making S(T 0 ) ≃ 1 up to T 0 ≃ T c ≃ 0.15 GeV. Hence by inserting the cross section (5.5) (Fig. (5.2)) into Eq. (5.10), we obtain the survival probability of a J/ψ in a medium in isentropic longitudinal expansion, for an initial temperature T 0 ≥ T c ; below that, it is unity. The thermalisation time t 0 is generally argued to be around 1 fm; and for an ideal QGP, the density of gluons at temperature T is given by the Stefan-Boltzmann form The resulting suppression as function of temperature is shown in Fig. 5.3. The survival probability is seen to decrease very rapidly with increasing temperature, essentially vanishing for T > ∼ 0.3 GeV. Pre-Equilibrium Deconfinement In the previous chapter, we established quarkonium suppression as a probe to check if a given sample of strongly interacting matter consists of deconfined quarks and gluons. In an actual nuclear collision, however, such a suppression could have been caused before the onset of thermalisation. Here we therefore study the possibility of pre-equilibrium quarkonium suppression. Shadowing and J/ψ Suppression in Nucleus-Nucleus Collisions Consider a nucleus-nucleus collision at CERN SPS energy ( √ s ≃ 20 GeV in the nucleon-nucleon cms), leading to J/ψ production at mid-rapidity, y J/ψ =0, with the production mechanism as described in Chapter 2. During the first 0.25 fm, the produced cc pair is a colour octet and will interact as such with the passing nucleons of target and projectile. The momenta of these nucleons in the rest system of the J/ψ will be around 10 GeV. The situtation seen by the J/ψ is thus very similar to that encountered at x F = 0 in collisions of 200 GeV/c protons with a nuclear target. As discussed in Chapter 3, one here finds a suppression dominated by "nuclear shadowing", i.e., destructive interference of the scattering on different nucleons. Such an effect will now arise from both projectile and target, however, and this suppression must be removed before final state effects can be studied [11]. To see the effect of this shadowing correction, we show in Fig. 6.1 the latest data taken by the NA38 collaboration at CERN in p − U and S − U collisions [54,55]. Here the suppression is measured with respect to the high mass Drell-Yan continuum, isospin corrected for p − U . The nucleus-nucleus data are shown as function of the neutral transverse hadronic energy E 0 T produced in the collision. The p−U value is simply included in this figure; it is not associated to any particular E 0 T . The difference between the p − p and p − A results is, as discussed in Chapter 3, essentially given by the shadowing function R sh A/p (x F ≃ 0). To correct the data for shadowing effects, we therefore divide the p − U data by R sh U/p (0) and the S − U data by R sh S/p (0)×R sh U/p (0). The result is shown in Fig. 6.2; the p−U value and that for the S beam at low E 0 T now agree. The remaining suppression in the nuclear collision data is now due to effects on the colour singlet cc in its state after the time of colour neutralisation. We thus want to study the effect of the environment on this cc. If we ignore nuclear stopping, target and projectile nucleons retain in the cc cms their initial momenta P N ≃ 10 GeV/c. The cc then sees two nuclei, each Lorentz-contracted to about 1 fm, since (2m/ √ s) ≃ 0.1; for simplicity, we consider here A − A collisions. 
At the colour neutralisation time τ_8 ≃ 0.25 fm, the centers of these thin discs have become separated by approximately 0.5 fm, so that the target and projectile nucleons still have considerable overlap. Stopping would slow down the nucleons and increase this overlap. The detailed kinematics encountered by the colour singlet cc̄ is, however, quite unimportant for the essential question: can a pre-equilibrium state of hadrons account for the observed suppression? The initial medium, at the point of maximum overlap between target and projectile, is one of twice nuclear density, containing nucleons of 10 GeV/c momentum in the cc̄ rest system; equivalently, we can assume normal nuclear density and an effective cc̄ path length L = 2 × 3R_A/4, with R_A = 1.12 A^{1/3} fm [34]. Whatever scattering processes now occur lead to an increase of the density n, but at the same time to a decrease of the momenta P of the hadronic constituents of the medium. Momentum conservation requires for the momentum flow through a given surface F that P n F = P_N n_0 F = const. (6.1), so that the momentum in fact drops as 1/n. If we neglect the cross section reduction during the time needed for the singlet cc̄ to become a full-sized resonance, the survival probability of a J/ψ in the medium becomes S ≃ exp(−n_0 σ_{J/ψ} L) (6.2). The medium actually seen by the J/ψ will have a higher density [56], but its constituents will have lower momenta. In view of Eq. (6.1), we can nevertheless use the form (6.2) to calculate the survival probability. In contrast to the phenomenological fit of [34], Eq. (6.2) contains the break-up cross section at the actual collision energy, and in Chapter 4 we have seen that this cross section is much below its geometric high energy value. At P_N = 10 GeV/c, we find (see Fig. 4.2) σ_{J/ψ} ≃ 0.8 mb. Inserting this into Eq. (6.2), with an average value L ≃ 10 fm for central S−U collisions, leads to S ≃ 0.9 for the survival probability. Since we have here neglected the cross section reduction during the J/ψ evolution as well as the path length reduction by the part already included in the shadowing correction, the actual survival probability will be larger. Thus hadronic pre-equilibrium interactions cannot account for the measured J/ψ suppression.
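The number quoted above can be checked with a one-line evaluation of the assumed form (6.2); the value n_0 ≈ 0.17 fm^-3 for normal nuclear matter density is a standard figure, not one stated explicitly in the text:

import math

n0 = 0.17          # fm^-3, standard nuclear matter density (assumed value)
sigma = 0.8 * 0.1  # 0.8 mb expressed in fm^2
L = 10.0           # fm, average path length for central S-U collisions
print(math.exp(-n0 * sigma * L))   # ~0.87, i.e. S of order 0.9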
We therefore conclude that the J/ψ suppression observed in the NA38 experiment [15] provides evidence for the existence of deconfined gluons in the medium probed by the J/ψ. By this we mean that the medium in which the J/ψ finds itself after a central nucleus-nucleus collision contains gluons whose momentum distribution is harder than that found in mesons or nucleons [57]. Only in collisions with such gluons can the tightly bound J/ψ be dissociated; gluons confined to hadrons are not sufficiently hard. We emphasize that this conclusion makes crucial use of the energy dependence of the J/ψ-light hadron break-up cross section as calculated in short distance QCD. As pointed out in Chapter 4, this result can and should be experimentally checked.

Deconfined Gluons and QGP

The existence of deconfined gluons is clearly not equivalent to the existence of a QGP, in which such gluons are in thermal equilibrium. It is only the first step towards a QGP: it shows that in the large-scale medium there exist deconfined partons. Their thermalisation comes at the end of a parton interaction cascade, and it is not at all clear whether at present energies there are enough deconfined partons in a sufficiently large and long-lived system to reach this stage. How can we check experimentally whether parton thermalisation is or is not achieved? The cascade formation of a QGP [58,59] starts with the production of gluons in primary collisions; these then interact again to produce secondary gluons, quite possibly through multigluon production [60], and so on, until production and absorption balance to form an equilibrium system. In equilibrium, the number of gluons per unit volume is simply determined by the energy (or entropy) density of the system. In the pre-equilibrium stage, it is lower and (still) proportional to the number of primary collisions. We thus expect J/ψ suppression by deconfined gluons in a pre-equilibrium system to increase with the number of nucleon-nucleon collisions (or equivalently, with the effective path length of the J/ψ). On the other hand, equilibrium suppression would be independent of the number of such collisions and depend on the effective energy density of the system only [57]. To test this, we can study Pb−Pb collisions as function of increasing centrality (i.e., of increasing E_T). In this case, the energy density remains essentially unchanged [29] while the number of collisions increases (note that a change in centrality of S−U collisions changes both the number of primary collisions and the effective energy density, so S−U centrality does not distinguish pre-equilibrium from equilibrium). If the J/ψ suppression is found to increase, the system cannot have reached equilibrium, and hence the suppression must be due to deconfined gluons in the pre-equilibrium stage. Once the suppression becomes independent of the number of primary collisions, equilibrium and hence the QGP is reached. The forthcoming NA50 Pb-beam data from the CERN-SPS should thus be able to resolve this question.

Conclusions

We first summarize the essential theoretical points of this work, and then survey the main conclusions we think can be drawn from present data. Quarkonium production in hadron-hadron collisions is today quite well understood in terms of elementary parton interactions (gluon fusion, quark-antiquark annihilation). The distributions of the partons within the colliding hadrons are determined in deep inelastic lepton-hadron scattering, and the (non-perturbative) colour neutralisation of the produced heavy QQ̄ pair can also be fixed empirically. Hadron-nucleus collisions determine what happens to quarkonium production in a confined medium. In the production of fast quarkonia in the nuclear rest frame, a fast colour octet QQ̄ passes the medium, leading to quantum-mechanical interference ("nuclear shadowing") and energy loss. Slow quarkonia in the nuclear rest frame are subject to EMC effect modifications of the colliding partons in addition to collisions with nucleons in the nucleus. Because of the small size and the large binding energy of the lowest quarkonium states, their interaction with light hadrons is calculable in short distance QCD. They can interact in leading order only through the exchange of a hard gluon, and the gluon distribution in the light hadrons is known to be very soft. The resulting prediction is a cross section which rises very slowly from threshold to its high energy value, suppressing strongly any break-up of quarkonium ground states by slow mesons or nucleons. As a consequence of this suppressed quarkonium dissociation by light hadrons, confined matter (at temperatures T ≤ 0.5 GeV) becomes transparent to J/ψ's.
The momentum of deconfined thermal gluons, on the other hand, is large enough to give rise to effective J/ψ dissociation; such dissociation can occur also by deconfined gluons not in equilibrium. Strongly interacting matter thus leads to J/ψ suppression if and only if it is deconfined. The loosely bound ψ ′ can be broken up in both confined and deconfined matter, though presumably more in a deconfined medium. In nucleus-nucleus collisions, the interaction of the coloured cc pair with target and projectile nucleons can be taken into account through the nuclear shadowing determined in hadron-nucleus collisions. Any strong J/ψ suppression remaining after removal of these shadowing effects is accountable only by interaction with deconfined gluons. What have we then learned from the h − A [27,28] and A − B [15,61] data available so far? What should the forthcoming P b-beam data clarify? The equality of J/ψ and ψ ′ suppression in h − A collisions for x F ≥ 0, as well as the size and x F dependence of the observed effect, are in full accord with the passage of a colour octet through nuclear matter. The equality of J/ψ and ψ ′ suppression and the observed x F dependence are in clear disagreement with any description based on the absorption of physical charmonium states in nuclear matter. There is a lack of data for charmonium production in a kinematic regime in which fully formed J/ψ's could interact with nuclear matter. Such data could be obtained by experiments using the P b-beam incident on a light target [25]. The observed difference between J/ψ and ψ ′ suppression in nucleus-nucleus interactions [61] indicates that the relevant mechanism here is different from that in h − A collisions. The observed J/ψ suppression cannot be accounted for in terms of nuclear shadowing on both projectile and target; even after removal of shadowing, an E Tdependent suppression (about 40 % between low and high E T ) remains. If the strong threshold suppression of J/ψ break-up by light hadrons, as predicted in short distance QCD, is experimentally confirmed, the J/ψ suppression observed in O − U and S − U interactions can only be accounted for by the presence of deconfined gluons. It remains open whether the deconfined gluons required for J/ψ suppression in nuclear collisions are already equilibrated; hence their existence does not establish QGP formation, but quite likely only a first step towards a thermalised deconfined medium. If the gluons are not in equilibrium, the resulting J/ψ suppression should increase with increasing E T in P b − P b collisions at fixed energy; in equilibrium, the suppression would remain approximately constant.
2019-04-14T02:42:19.847Z
1995-05-19T00:00:00.000
{ "year": 1995, "sha1": "47c6d7aedf81dafe42fd378954531be187ffb962", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-ph/9505345", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "cc0ae98e582724f5dc7be1d39526c7f7af90c049", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
259317112
pes2o/s2orc
v3-fos-license
Influence of the Anderson Transition on Thermoelectric Energy Conversion in Disordered Electronic Systems

So far, the efficiency of thermoelectric energy conversion remains low compared to traditional technologies, such as coal or nuclear. This low efficiency can be explained by connecting the thermoelastic properties of the electronic working fluid to its transport properties. Such a connection also shows that operating close to electronic phase transitions can be an efficient way to boost thermoelectric energy conversion. In this paper, we analyze thermoelectric efficiency close to the metal-insulator Anderson transition. Our results reveal the direct link between the thermoelectric and thermoelastic properties of Anderson-type systems. Moreover, the role of the conductivity critical exponent in the thermoelectric energy conversion is analysed. Finally, we show that relatively large values of the thermoelectric figure of merit may be obtained in the vicinity of the Anderson transition.

Introduction

Thermoelectric conversion performance is usually determined using a combination of three transport coefficients: the electrical conductivity σ, the thermal conductivity κ, and the Seebeck coefficient α. This combination is known as the thermoelectric figure of merit zT [1,2,3], zT = σ α² T / κ, where T is the average temperature across the system. As both the phonons of the crystal lattice and the charge carriers (usually electrons) contribute to thermal transport, κ can be written as κ = κ_e + κ_ph. The range of applications of thermoelectric technology would be significantly extended, provided zT exceeds a value of at least 4 [4] and that devices operate under appropriate working conditions [5]. This is a formidable challenge, due to the interdependence of the transport coefficients, ruled by phenomenological laws such as the Wiedemann-Franz law connecting heat and electric conductivity [6]. To establish an upper bound for zT under given working conditions, it is enough to consider the thermoelectric figure of merit of the conduction electron gas alone, z_e T = σ α² T / κ_e, which disregards the lattice thermal conductivity κ_ph. Coupling between heat and electrical transport, as was underlined by Apertet et al. [7], results in a convective process, namely a heat flow associated with the displacement of charge carriers. The convective part of the heat flow, which adds to the conductive part due to electrons and phonons under open circuit conditions, can be enhanced near the critical temperature of an electronic transition [8,9]. In the literature, thermoelectric conversion has been discussed for the superconducting [10] and the metal-insulator Anderson [11,12,13,14] transitions.
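As a minimal numerical illustration of these figure-of-merit definitions (the material parameters below are arbitrary placeholders chosen only to give typical orders of magnitude, not values used in this work):

sigma = 1.0e5        # electrical conductivity, S/m (placeholder)
alpha = 200.0e-6     # Seebeck coefficient, V/K (placeholder)
kappa_e = 0.5        # electronic thermal conductivity, W/(m K) (placeholder)
kappa_ph = 1.0       # lattice thermal conductivity, W/(m K) (placeholder)
T = 300.0            # K

zT = sigma * alpha**2 * T / (kappa_e + kappa_ph)
zeT = sigma * alpha**2 * T / kappa_e
print(zT, zeT)       # 0.8 and 2.4: dropping the phonon contribution sets the upper bound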
To address the conversion efficiency optimization problem, it is instructive to analyze the thermodynamic properties of the conduction electron gas, which is the actual working fluid of thermoelectric devices. The Seebeck coefficient, given by the ratio of the gradients of two intensive variables, the electrochemical potential µ and the temperature T: α = −∇µ/(q∇T) [15], has a thermostatic counterpart, α_th = −dµ/dT, which derives from the Gibbs-Duhem equation, as α_th = S/N is the entropy per particle. The quantity α_th can also be written using the definitions of the thermoelastic coefficients of the electron gas and the Maxwell relations, and a thermodynamic figure of merit can be defined from the calculation of the isentropic expansion factor [9] (Eq. (3)), where β is the analogue to the thermal dilatation coefficient, χ_T is the analogue to the isothermal compressibility, and C_N is the analogue to the specific heat at constant volume. Definitions are given further below. As discussed in [9,10], driving the electron gas close to a phase transition yields a significant increase of the isentropic expansion factor, which boosts the energy conversion efficiency. In the latter works, thermally driven effects, namely fluctuating Cooper pairs and nematic fluctuations, were considered in 2D systems and thin films. Other effects, such as disorder, can influence the thermodynamic and transport properties of the electron gas: in a disordered system, the charge-carrier states at a given energy can either be localized or delocalized depending on the disorder strength. In this work, we analyze the effects of the transition from delocalized to localized states, i.e. the Anderson metal-insulator transition, on the thermodynamic properties of the electron gas and its ability to perform an efficient energy conversion in the vicinity of the critical point. Indeed, one may expect the Seebeck coefficient to increase drastically as the system is driven away from its metallic phase, since the entropy per carrier increases. Thermoelectric conversion near the Anderson transition has been studied in [11,12,13], but the link between thermoelastic and transport properties has not yet been considered, while it has been studied for the metal-to-superconductor phase transition [10]. The paper is organized as follows. In the next two sections, for completeness and clarity, we give a brief recap of the basic ingredients of our approach: the transport coefficients and the thermoelastic coefficients. We then focus on the Anderson transition, detailing the assumptions and parameters we use for our model. We present and discuss our numerical results in the subsequent section, and we end the paper with concluding remarks.

Transport coefficients

The standard approach to calculate σ, α, and κ_e is to relate these transport coefficients to Onsager's kinetic coefficients L_ij (i, j = 1, 2), in the frame of linear non-equilibrium thermodynamics [16,17,18], with e denoting the electron charge. To compute the Onsager coefficients L_ij, we use the Boltzmann equation in the relaxation time approximation [19], in which µ is the electrochemical potential introduced above and the transport distribution function Σ(E) involves τ(E), the electron relaxation time, v(E), the electron group velocity, and g(E), the density of states. Note that the relaxation time depends on the model of the system and also varies with respect to the type of collisions [18].
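For reference, the relaxation-time expressions summarized above can be written in the conventional form below; the paper's own Eqs. (4)-(9) are assumed to be equivalent to these up to notation and dimensionality-dependent prefactors:

\Sigma(E) \propto g(E)\, v(E)^2\, \tau(E), \qquad
I_n = \int dE \,\Big(-\frac{\partial f}{\partial E}\Big)\, \Sigma(E)\, (E-\mu)^n ,

\sigma = e^2 I_0, \qquad
\alpha = -\frac{1}{eT}\,\frac{I_1}{I_0}, \qquad
\kappa_e = \frac{1}{T}\Big(I_2 - \frac{I_1^2}{I_0}\Big),

so that z_e T = σ α² T / κ_e = I_1² / (I_0 I_2 − I_1²), a combination in which the overall normalization of Σ(E) cancels.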
Thermoelastic coefficients

The thermodynamics of the noninteracting electron gas is very similar to that of its classical gas counterpart. Using the correspondence between the volume V and the number of electrons N, and between the pressure P and the chemical potential µ, namely V → N and −P → µ, one can define analogous coefficients for the electron gas: β, χ_T, C_N (already introduced in Eq. (3)), and C_µ [9], where C_µ is the analogue to the specific heat at constant pressure. Following the same approach as with the transport coefficients, we relate the thermoelastic coefficients to the distribution function [9,20]; the analogue to the isothermal compressibility, the analogue of the thermal dilatation coefficient, and the specific heat at constant electrochemical potential are given by Eqs. (10)-(12). The specific heat at constant particle number C_N and at constant electrochemical potential C_µ are connected via a Maxwell relation.

Anderson transition model

Anderson developed the concept of localized and extended states with a simple theoretical model: if a quantum-mechanical system is sufficiently disordered (e.g., a semiconductor with lattice defects or impurities) and at sufficiently low carrier density, diffusion cannot take place, which results in wave function localization [21]. The model assumes a distribution of sites occupied by particles, which may be random or might be regular in three-dimensional space. Later, Mott introduced the mobility edge concept [22], which is an energy level, E_c, in a bulk semiconducting system's energy band, e.g., an impurity band, which separates localized states (with energy E < E_c) from extended states (with energy E > E_c) within the same band. The location of E_c within the band depends on the disorder strength (i.e., number and type of defects, impurities, etc.), and the density of states g(E) near the mobility edge follows a power-law (Eq. (14)), where d is the system's dimension and y a scaling parameter; the electrical conductivity at zero temperature is likewise given by a power-law (Eq. (15)) [23], where A is a constant. Moreover, the electrical conductivity at zero temperature is related to the transport distribution function as σ(T = 0, E) = 2e² Σ(E) [12]. We are here only interested in the d = 3 case because of the absence of quantum diffusion in two dimensions and one dimension [24,25], for which there is no mobility edge E_c. The theory is valid for values of y in the range 0 < y < d [23]. In the present work, we consider the electron gas at the metal-insulator transition [26]. The calculations below are done using Eq. (14), and with g(E_F) = 2.45 × 10^24 J^-1 m^-3 at the Fermi level.
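The zero-temperature conductivity referred to as Eq. (15) is, in Wegner's scaling picture, presumably of the form

\sigma(T=0, E) = A\,(E - E_c)^{x} \quad (E > E_c), \qquad \sigma(T=0, E) = 0 \quad (E < E_c),

with x = 1/y the conductivity critical exponent introduced just below. This reconstruction is an assumption, but it is consistent with the quoted units of the constant A (Ω^-1 m^-1 J^-x) and with the identification Σ(E) = σ(T = 0, E)/(2e²) used in the numerical results.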
The constant A in Eq. (15) was chosen as A = 10^22 Ω^-1 m^-1 J^-x [27] to match the typical values of the electrical conductivity. For convenience, let x = 1/y, which is also known as the conductivity critical exponent. We investigate the influence of this parameter on the thermodynamic and thermoelectric figures of merit, Z_th T and z_e T, showing their dependencies for several values of x. We consider a range starting at x = 1/3. Note that the exact value of x is not known, notwithstanding the various numerical, analytical, and experimental methods used for determining it [11]. For example, MacKinnon provides the value x = 1.54, which is in the range of typical values 0.5 < x < 2 [28]. In our calculations, we use the expression of Ref. [29] for the temperature-dependent chemical potential of the three-dimensional electron gas, in which t = T/T_F, with T_F = E_F/k_B the Fermi temperature, and t* = 1.36. The coefficient b_+ is given by b_+ = a_+/Γ(3/2), with a_+ = 2/3 and Γ the gamma function, and a_1 = 0.016, a_2 = −0.957, a_3 = −0.293, and a_4 = 0.209 [29]. In our calculations, T_F = 42.3 K for the three-dimensional electron gas, whose concentration n was taken as n = 10^18 cm^-3. The mobility edge is set to E_c(T) = 0 eV, based on the simplified model of Wegner [23].

Results and discussion

As we study the effect of disorder on thermoelectric energy conversion, several values of the critical exponent are considered. Our model includes extended and localized states, with a smaller proportion of the latter since the mobility edge is E_c = 0. As for the extended states, their influence is significant because, for a system with weak enough disorder, extended states may exist and contribute to the conductivity, which is finite even at zero temperature [30]. Figure 1 illustrates the electron gas figure of merit z_e T and the thermodynamic figure of merit Z_th T as functions of the temperature T. To compute z_e T, we first calculated the transport coefficients κ_e, σ, and α by numerically integrating Eqs. (7)-(9) using the transmission function Σ(E) = σ(T = 0, E)/(2e²) [12], where σ(T = 0, E) is given by Eq. (15). As regards the thermodynamic figure of merit Z_th T, we integrated Eqs. (10)-(12) using the density of states g(E) given in Eq. (14).
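A compact numerical sketch of this z_e T computation can be written under simplifying assumptions: the power-law transmission function reconstructed above, a chemical potential approximated by its zero-temperature value E_F rather than the full finite-temperature expression of Ref. [29], and the observation that the prefactor A cancels in z_e T. The code below is therefore only a qualitative illustration of the procedure, not a reproduction of Fig. 1:

import numpy as np

kB = 1.380649e-23        # J / K
EC = 0.0                 # mobility edge (J), as in the text
TF = 42.3                # Fermi temperature (K), as in the text
EF = kB * TF             # Fermi energy (J)

def transport_integrals(T, mu, x):
    # I_n = ∫ dE (-df/dE) (E - Ec)^x (E - mu)^n over the extended states (E > Ec);
    # the overall prefactor of the transmission function cancels in z_e T.
    E = np.linspace(EC, EC + 60.0 * kB * T, 6000)
    dfdE = 1.0 / (4.0 * kB * T) / np.cosh((E - mu) / (2.0 * kB * T)) ** 2   # -df/dE
    shape = (E - EC) ** x
    dE = E[1] - E[0]
    return [np.sum(dfdE * shape * (E - mu) ** n) * dE for n in range(3)]

def zeT(T, x, mu=None):
    # electron-gas figure of merit z_e T = I1^2 / (I0*I2 - I1^2)
    mu = EF if mu is None else mu        # crude approximation: mu ~ E_F
    I0, I1, I2 = transport_integrals(T, mu, x)
    return I1 ** 2 / (I0 * I2 - I1 ** 2)

for T in (50.0, 150.0, 300.0):
    print(T, [round(zeT(T, x), 3) for x in (1.0 / 3.0, 0.5, 1.0, 1.5)])

Because z_e T only involves ratios of the transport integrals, such a sketch can reproduce the qualitative trends discussed below, but not the exact curves of Fig. 1.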
Both figures of merit grow steadily over the whole temperature range. Importantly, both z_e and Z_th increase as the system is driven close to the phase transition, implying that the increase of the electron gas isentropic expansion factor fosters the desired behavior of the transport parameters. At relatively small temperatures (T ≲ 150 K), the larger the critical exponent, the larger the figures of merit, while at higher temperatures (T ≳ 150 K), the dependence is reversed. This shows that thermal excitation brings carriers into the extended states; however, the crossovers of the curves computed for various values of x as T increases show that the interplay between disorder effects and temperature effects is not trivial. To understand how thermoelastic properties are reflected in transport properties, we show the correlation between the figure of merit z_e T and the figure of merit Z_th T in Fig. 2. Depending on the value of the critical exponent, the correlation between the figures of merit varies from a straight line at x = 0.5 to a power-law behavior for other values of x. We underline that z_e T is a monotonously growing function of Z_th T, regardless of the value of the critical exponent. This means that we can estimate how efficient energy conversion can be in the best-case scenario (that is, neglecting phonons) on purely thermodynamic grounds, by studying the behavior of the figure of merit Z_th T rather than z_e T. Next, we show how efficient the energy conversion is near the transition temperature and over the whole temperature range. To characterize the efficiency of this process, we introduce the thermodynamic efficiency η_max (Eq. (17)), where γ = C_µ/C_N is an analogue to the classical isentropic expansion factor, and η_C is the Carnot efficiency.
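Equation (17) is not reproduced above; if the thermodynamic figure of merit is identified with the isentropic expansion factor through Z_th T = γ − 1 (an identification assumed here for illustration), the maximum efficiency takes the familiar thermoelectric form

\eta_{\max} = \eta_C\, \frac{\sqrt{1 + Z_{th}T} - 1}{\sqrt{1 + Z_{th}T} + T_{cold}/T_{hot}}
            = \eta_C\, \frac{\sqrt{\gamma} - 1}{\sqrt{\gamma} + T_{cold}/T_{hot}} .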
At T ≈ 50 K, when k_B T ≈ µ − E_c, thermoelectric transport is affected by the proximity of the mobility edge; Table 1 shows how η_max/η_C varies as a function of the critical exponent x. Though the trend shown in the table appears simple, its explanation is not, as establishing a clear relationship between the isentropic expansion factor and the critical exponent x would require an analysis beyond the scope of the present work, involving a correlation between the thermoelastic coefficients of the electron gas in the extended states, the disorder strength, and the critical exponent. Here, we may suggest the following interpretation. With an increased disorder strength, the electrical conductivity σ drops as electronic states become gradually localized below the mobility edge. How fast σ drops can also be related to the value of x (whose exact value is not known), as shown in Eq. (14); so, assuming that a large value of x corresponds to situations in which the disorder strength is large, one sees why the maximum conversion efficiency increases with disorder and x. This interpretation is in line with that in Ref. [13]: increasing the disorder generates more efficient thermoelectricity. Finally, from a thermodynamic viewpoint, we may also relate this observation to the fluctuation-compressibility theorem [32,33] mentioned in Ref. [10], where fluctuating Cooper pairs and nematic fluctuations were discussed: increasing disorder generates density fluctuations of the electronic extended states across the system, which in turn fosters the increase of the isentropic expansion factor γ and hence of the maximum efficiency η_max defined in Eq. (17).

Conclusion

We have connected the thermoelectric and thermoelastic properties of the three-dimensional Anderson model, discussing the role of the critical exponent. In contrast with the sharp enhancement of thermoelectric conversion close to the superconducting phase transition [10,9], our results show a smooth, monotonously growing dependence of the thermoelectric figure of merit on temperature. Indeed, in single-particle models a sharp energy dependence of the transmission function is required to obtain large thermoelectric efficiencies. On the other hand, in the Anderson model the dependence of the transmission function Σ(E) on energy is non-analytical at the mobility edge E_c (see Eq. (15)), but less sharp than the more thermoelectrically efficient boxcar-function-shaped transmission functions [34]. However, as noted in [13], though a large ZT fosters a high energy conversion efficiency, this does not necessarily result in a high output power [5,35]; so an optimal thermoelectric energy conversion in disordered electronic systems must account for the disorder strength when considering the power-efficiency trade-off. Another essential issue is the influence of the mobility edge E_c on the thermoelectric properties of the Anderson transition. For simplicity, we considered E_c = 0, although one can choose a temperature-dependent mobility edge as in [31]. Such a choice may help to study the temperature-dependent interplay between the localized and extended states and how it may influence the electron gas figure of merit z_e T. In [13], several values of E_c were considered and high values of the figure of merit zT were obtained. A more realistic model of the mobility edge, and of how it may affect the energy conversion at the metal-insulator transition, deserves more attention and is beyond the scope of the present work. From a general perspective, our results confirm the validity of the thermodynamic approach as a useful and physically intuitive way to estimate the ideal thermoelectric performance of the working fluid, neglecting the detrimental effect of phonons. Such an approach naturally suggests the consideration of electronic phase transitions to boost thermoelectric efficiency [10,9].

Figure 1. The electron gas figure of merit z_e T (left) and the figure of merit Z_th T (right) as functions of the temperature T, for different values of the critical exponent x. The notation 0.(3) refers to the repeating decimal 1/3 = 0.(3).
Figure 2. The parametric plot of the figure of merit z_e T versus the figure of merit Z_th T for various values of the critical exponent.
Table 1. Dependence of the maximum thermodynamic efficiency η_max/η_C on the critical exponent x.
2023-07-04T06:42:14.990Z
2023-07-03T00:00:00.000
{ "year": 2024, "sha1": "bcdb51474009bb1982ca0be8a775395f2f4a2b06", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/2701/1/012018/pdf", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "a252c8114c9164e3323e7a4243521917373d99f2", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
247029923
pes2o/s2orc
v3-fos-license
Brushing Effect on the Properties of Glass Ionomer Cement Modified by Hydroxyapatite Nanoparticles or by Bioactive Glasses

This study evaluated the physical and mechanical properties of glass ionomer cement (GIC) associated with 5% hydroxyapatite nanoparticles (NPHAps) and 10% bioactive glass (BAG) 45S5 before and after brushing at different storage times. Surface roughness was evaluated using a rugosimeter, Vickers hardness using a microdurometer, and mass variation measured in an analytical balance at 1, 7, 15, 30, and 60 days before and after the brushing test, with the aid of a toothbrushing simulator and soft bristle toothbrushes. Nonnormal distribution was observed, and the nonparametric Wilcoxon and Kruskal-Wallis tests followed by Dunn's were performed, with a significance level of 5%. We observed higher values for mass loss on the first day for all groups. The surface roughness was lower in the control and NP groups, 30 days after brushing. Higher values for hardness were found in the control group and lower ones for NP, after brushing. The control and BAG groups presented a decrease in hardness over time. The NP group presented the highest values before brushing, while the control group had the highest values after brushing. The association of NPHAp with the GIC is the most promising combination, since it presented satisfactory values for surface hardness. However, conventional GIC not associated with NPHAp or BAG is still an option, since it is available in the market and is the most economically viable option.

Introduction

The glass ionomer cement (GIC) has an ionic exchange mechanism with dental structures, which allows chemical adhesion to both enamel and dentin [1]. In addition, it presents characteristics such as biocompatibility, the ability to release and reincorporate fluoride from the oral environment, and a linear thermal expansion coefficient similar to that of dentin [2,3]. However, GIC has limitations due to mechanical properties, such as low wear resistance, hardness, and tensile and compressive diametral strength [4]. It is capable of inducing the remineralization of dentin and enamel, since the fluoride release assists in the formation of fluorapatite, besides being considered the material of choice for performing atraumatic restorative treatment, as it has confirmed caries control action [5,6]. Takahashi et al. [7] and Palmer et al. [8] associated the GIC with chlorhexidine in order to potentiate the antibacterial property of the material. These researchers found an increase in antibacterial property compared to conventional GIC, considering possible changes in the physical and mechanical properties of the material due to the modification of its original composition. Incorporation of nanoparticles (NPs) into various restorative materials in order to improve their mechanical [9,10], physical, and antibacterial properties [4,11] is also widely evaluated in the literature. NPs have extremely reduced size, which results in a large superficial area and higher contact with the environment in which they are found [12]. They increase the permeability of cell membranes, as well as the flow of the cytoplasmic contents out of the cell, facilitating their penetration into the microorganism and leading to the destruction of the lipids and cellular proteins of the cell [12][13][14]. The hydroxyapatite [HAp: Ca10(PO4)6(OH)2] demonstrates biocompatibility and the composition and structure of apatite-like crystals, which are present in dental structures in humans and in bone tissues.
e NP of hydroxyapatite (NPHAp), when added to the restorative GIC, demonstrated an increase in flexural strength and in the release of fluoride ions, acting not only as a reinforcement of the material but also as an adsorbent component and an ion exchange agent, resulting in better chemical and mechanical properties [15]. Alatawi et al. [16] found an antibacterial increase with the association of GIC and NPHAp due to fluoride ions release. Moshaverinia et al. [4] also demonstrated that the incorporation of NPHAp into the GIC provided improvements in compressive strength and tensile diametral strength. Recent studies have also shown that the association of GIC with NP can reduce the number of pores and increase compressive strength. Association with NPHAp may also increase flexural and shear bond strength [17,18]. Kantovitz et al. [19] found greater compressive strength when he combined GIC with titanium NP and also less mass loss with no difference in roughness after brushing. We have also studied the addition of bioactive glasses (BAG) to the GIC for the improvement of its remineralizing properties, which have been explored in the literature [20][21][22]. In dentistry, there are several applications of BAG, such as implantology, maxillofacial surgery, periodontics, pulp therapy, and restorative materials. BAGs have the ability to chemically bind to bone minerals. ey are composed of oxides of calcium, phosphorus, silicon, and sodium in different proportions which precipitate and confer remineralizing action when in contact with dentin [20,21]. According to Bakry et al. [22], BAGs have the capacity to penetrate the dentinal tubules and the presence of calcium and phosphate may be capable of remineralizing the subsurface demineralized enamel. BAGs, when in aqueous environments, form an apatite layer on its surface, both in vitro and in vivo [23]. Not only fluoride ions but also calcium and phosphate ions are released from the GIC when the material was associated with BAG [24]. Its high pH value when in aqueous medium [25,26] can make changes in the mechanical properties of the material, such as reduction of surface hardness or increase of flexural strength [27]. GIC's performance associated with NP or BAG depends on the maintenance of the original characteristics of these materials and the quality of their surface, which has a primary role to be in contact with the oral environment and its elements. Once there is an increase in surface roughness, colonization of microorganisms becomes easier and faster [28]. e wear of the material also results in increased roughness, which also occurs due to dental brushing and dentifrice quality and toothbrush quality and pressure exerted on it, besides brushing frequency [29]. Inherent factors of the material such as integrity between the matrix and the glass particles, size, shape of the particles, and porosity should also be considered [30]. Many studies demonstrate the remineralizing and antibacterial capacity of BAG and NP, respectively, when associated with different restorative materials [4,15,20,26]. It is important to develop researches that answer the doubts about the possible changes in physical, chemical, and mechanical properties when BAG and NP are associated with GIC, in addition to changes that may occur due to brushing. 
Considering that GIC associated with 5% NPHAp and with 10% BAG 45S5 may lead to the improvement of its properties, this study evaluated the surface roughness, Vickers hardness, and mass variation of these materials before and after brushing at different storage times.

Materials and Methods

This is an experimental laboratory study. The test specimens were made with a restorative GIC (Ketac Molar EasyMix, 3M ESPE, Campinas, SP, Brazil) and divided into 3 experimental groups, with 50 specimens in each. Ten percent (10%) BAG 45S5 [21,26] and 5% NPHAp [4] (SIGMA-ALDRICH; ref: 677418-10; batch: MKBW9108V) were added to the GIC. The amount of powder used for the Control group was established using the arithmetic mean. From this measurement, the desired weight percentage of the GIC powder was removed and the same percentage of NPHAp or BAG was added. After homogenization, the powder was agglutinated with a drop of the liquid. This drop was dispensed onto the mixing pad with the bottle positioned vertically, as indicated by the manufacturer (powder/liquid ratio 2:1). With the use of a Centrix syringe (DFL and Comércio S.A., Rio de Janeiro, RJ, Brazil), the GIC associated with NPHAp or BAG was inserted in silicone matrices of 3 mm height and 6 mm diameter [31]. For complete setting reaction of the material, the specimens were stored in a suitable container with approximately 100% relative air humidity, in an incubator at 37°C, for 24 hours [32]. Right after, the specimens were submitted to tests of mass variation, Vickers hardness, and surface roughness, before and after the brushing test, for different periods of time. After the first 24 hours, the specimens were weighed daily by means of an analytical balance (Ind. Com. Eletro-Eletrônica GE-HA-KA Ltda, model BG 440, São Paulo, Brazil), once a day, until the initial mass (IM) was stabilized, and the IM value was obtained. After the brushing, a new weighing sequence was performed to determine the final mass (FM). During all experimental times, the specimens were kept immersed in deionized water for 1, 7, 15, 30, and 60 days. As the measurements were obtained every 24 hours, the specimen mass was considered to be stable from the moment that five consecutive measurements with the same value were observed. The mass variation values were obtained based on the difference between the initial mass (before brushing) and the final mass (after brushing) [28]. The surface roughness of the specimens was analyzed with a cut-off of 0.25 mm, and the values (Ra) were obtained as the arithmetic mean between the peaks and valleys recorded by the rugosimeter (Surfcorder SE 1700, Kosaka Laboratory Ltd., Kosaka, Japan). On each surface, three readings were made in different positions, starting 2 mm below the edge of the specimen, always passing through its center. The Vickers hardness reading was performed by a single operator in a digital microdurometer (Micromet 2100, Buehler Ltda., Lake Bluff, Illinois, USA), applying a load of 50 kgf for 30 seconds on the surface of the specimens. In each specimen, six indentations were made at equidistant points. The results were expressed as Vickers hardness values (VHN) directly by the test machine.
e brushing test was performed in a brushing simulation machine (MEV-2T-Odeme Dental Research, Miami, USA) with a linear course of 60 mm extension in 2 seconds (30,000 cycles simulating 3 years of brushing) [33], with the aid of soft bristle toothbrushes (Dental PowerDent Classic Power Brush, PowerDent, São Paulo, Brazil) and 6 g of toothpaste "Colgate Máxima Proteção Anticáries" (Colgate, 90 grams with 1450 ppm fluorine-Colgate-Palmolive Industrial LTDA, São Paulo, Brazil) mixed with 6 ml of water ( Figure 1) [34]. e data obtained were statistically analyzed using the statistical package SPSS 22.0 (SPSS Inc., Chicago, IL, USA), for normality using the Kolmogorov-Smirnov test where a nonnormal distribution was observed. For the analysis of superficial roughness and Vickers hardness comparing values before and after brushing for each time interval, in each experimental group, Wilcoxon's nonparametric test was performed. For the analysis of surface roughness and Vickers hardness over time, divided before and after the brushing test, in each experimental group and for comparison between groups at the various times, the Kruskal-Wallis nonparametric test was performed, followed by the Dunn test. e nonparametric Kruskal-Wallis test was followed by the Dunn test using subtraction of the IM of the test specimens, weighed before immersion, and the FM of each specimen after the tests. All were performed with a significance level of 5%. Results e mass variation showed a statistically significant difference when considering each experimental group separately (Table 1). In Control and BAG groups, significant mass loss was observed on the first day of experiment (p � 0.016). For the NPHAp group mass loss was observed until the seventh day (p ≤ 0.001). ere was still a statistical difference between the groups at 1, 30, and 60 days. On the first day, this difference was representative between groups NPHAp and BAG, with greater loss of mass for the BAG (p � 0.016). At 30 days, the BAG group showed greater statistically significant mass loss compared to the control and NPHAp (p ≤ 0.001). At 60 days, the NPHAp group presented higher mass, being statistically different from the Control and BAG groups (p ≤ 0.001). e surface roughness, before and after brushing test, showed a statistically significant difference for the time of 30 days, when the Control (p � 0.011) and NPHAp (p � 0.037) groups had the lowest roughness value after brushing test ( Table 2). Statistically significant difference was observed for surface roughness in the Control group only after brushing in the first and seventh days (p � 0.006). Higher values for surface roughness were observed for the 60 th day (Table 3). It was observed over time, when considering the values between the experimental groups before brushing, that there was a statistically significant difference at 1 and 7 days. In the first day, higher roughness value was presented by the BAG group (p � 0.006). At 7 days, both NPHAp and BAG groups had higher surface roughness values (p � 0.004). After the brushing test at 1 and 7 days, they were also the ones that presented statistical difference, and at 1 day the Control group presented lower surface roughness (p � 0.003) and at 7 days, the Control group presented lower roughness when compared to the BAG group (p � 0.003) ( Table 3). e Control group presented increase in Vickers hardness after brushing test for 1, 7, and 30 days (p � 0.007; p � 0.047; p � 0.008). 
e NPHAp group presented decrease only for the 7 th day, after the brushing test (p � 0.009). For the BAG group, no significant differences were found (Table 4). When comparing over time (Table 5), it was observed that in the Control group there was a higher value of Vickers hardness for 7 days, before brushing (p � 0.010). For the BAG group, the highest value of Vickers hardness was in the first day of, before (p � 0.002), and after brushing (p � 0.009) ( Table 5). Higher values of Vickers hardness with statistical difference over time between the different experimental groups, before brushing, were for the NPHAp group. After brushing, only at 30 days' control group presented the highest values of Vickers hardness (p � 0.031). And for a comprehensive view of the data over time, the following images show the pre-and postbrushing variability of surface roughness ( Figure 2) and Vickers hardness ( Figure 3). Discussion e use of BAG or NP has been studied [4,15,20,26] in order to improve the remineralization and antibacterial activity of dental materials, without, however, changing their physical or mechanical properties. Similar to the literature [4,11,21,27,30,34,35], our study found statistically significant differences in these properties of the GIC Ketac Molar EasyMix when associated with BAG or NP. International Journal of Dentistry e mass loss, which indicates the amount of material wear [27,36], is a property that, when altered, can cause serious damage to the longevity of the restoration. Factors such as acid base reaction of the GIC, presence of air bubbles in their interior, and proportion and size of the glass particles are related to the variation of this property, increasing its susceptibility to erosion, pronounced displacement of inorganic particles and greater exposure of air bubbles [29,37]. e syneresis and/or imbibing of this material should also be considered [28,38]. In this study, we observed greater mass loss in the Control and BAG groups on the first day compared to other days, and for the NPHAp group, in the first and seventh days. e first days are critical for the complete maturation of GIC [37], and the subjection of this material to the test may have led to greater changes on its surface, such as loss of glass particles and/or organic matrix, resulting in a lower mass and consequently higher wear of the material (Table 1). When compared between the groups, the greatest mass loss was observed in the BAG group (Table 1), probably due to greater dissolution of the organic component of GIC, which has a great capacity for water absorption, resulting in poor binding between the BAG particles and the matrix of the GIC [21]. Statistically significant differences were found in the mass variation of some types of GIC [36], relating this fact to the difference in the amount of water inside the materials before their weighing. Table 1 shows an increase in mass at times of 30 and 60 days for the Control group, 15 to 60 days for the NPHAp group, and 15 days for the BAG group, possibly due to fluoride recharging by the GIC, when in contact with the toothpaste during the brushing test. Panigrahi et al. [39] and Yli-Urpo et al. [21] observed that after the association of a GIC with the remineralizing material there was a higher release of fluoride, and consequently a higher incorporation of these ions. 
e degradation of restorative materials may also be related to the pH decrease of the buccal cavity, sorption of water, and erosion of these materials, which results in the degradation of the matrix and interface of its surface and may also result in greater surface roughness. In addition to the accumulation of biofilm on the material surface, it also results in alterations in aesthetics, cracking, change in color, and reflection of light [2,29] and consequent decrease in the longevity of restoration due to caries lesions, gingival inflammation, among others [38]. After being submitted to the brushing test, the Control and NP groups showed a decrease in surface roughness at 30 days (Table 2) probably due to the possible polishing of this surface. Bala et al. [40] evaluated the surface roughness of a nanoparticulate GIC in comparison to conventional GICs and found lower roughness values for the former, after polishing. Although it presented the lowest values for surface roughness when compared to the other experimental groups, the control group demonstrated an increase of this property, directly proportional to the time (Table 3). In this work, Cibim et al. [34] evaluated the surface roughness of a modified GIC by TiO 2 NP and found that, regardless of the concentration of NP, it did not affect the distribution and bonding between NP particles and the GIC matrix. ey also reported that particle size affects surface roughness and that nanometric particles may favor this property. Mitra et al. [35] pointed out the tendency to form clusters of NP when associated with a dental material, which, when subjected to abrasion caused by brushing, may have the surface clusters detached, leaving the surface of the restorative material with minor defects, resulting in better optical properties. When incorporating BAG to a GIC, Valanezhad et al. [27] found cracks in the surface of the material, caused by the tensions generated during sample preparation and inadequate dispersion of the BAG particles within the GIC matrix. ey reported that BAG particles represented centers of stress concentration, where fissures began. is report supports the data obtained in this study, which demonstrated the highest values of surface roughness for the BAG (Table 3). e authors also observed dissolution of the GIC matrix after immersion of the material in PBS, with increased surface roughness. omassewski et al. [36] observed that all the GICs not associated with NP or BAG suffered wear after simulated brushing and increased roughness. In this study, it is also possible to observe increase of roughness for the control group after the brushing, directly proportional to the time of storage (Table 3). e evaluation of the superficial hardness is also important when we consider the success of a restoration, considering that this property is altered by exposure to water and to saliva [41], besides the composition of the polyacrylic acid that makes up the GIC [42]. is study found a statistically significant increase in the values of this property in the Control group and decrease in the NPHAp group after the brushing test (Table 4). Analyzing each group separately over time, the Control and BAG groups showed a decrease in surface hardness values before the brushing test (Table 5). e NP presented higher hardness values before the brushing test, and after the same test the Control Group presented the higher values compared to the BAG (Table 5). According to Xie et al. 
[30], the presence of dispersed glass particles in the polymer matrix can result in higher values of surface hardness. Prentice et al. [11] suggested that the addition of NP to GIC results in less glass particles on the surface of the material, providing a more intense acid reaction and a decrease in its hardness. Panahandeh et al. [43] also found a decrease in Vickers hardness when GIC was associated with NP and the formation of clusters was pointed out as responsible for this. Moshaverinia et al. [4], however, observed an increase in surface hardness when GIC was associated with fluorapatite, corroborating the values obtained in this study, before the brushing test. Increased surface hardness values were also found by Moshaverinia et al. [37] after one week of storage in distilled water of a fluorapatite NP modified GIC. is is possibly due to the intensity increase of the acid-base reaction of the GIC due to low release of calcium ions from fluorapatite NP, with higher number of bridges with high phosphate and calcium ion concentration, which reinforced the matrix, improving the interaction between organic and inorganic networks. According to the authors as the cement ages in distilled water, it promotes more cross-linking, leading to increased surface hardness values. Valanezhad et al. [27] found, as well as this study, a decrease in the values of this property, probably due to the presence of cracks in the material. In an aqueous environment, the GIC absorbs water resulting in poor bonding between the BAG particles and the GIC matrix, which leads to a decrease in the surface hardness of the material, as well as ions that precipitate on the glass particles [44]. Yli-Urpo et al. [21] observed that when immersed in deionized water, GIC associated with BAG presented decreases in hardness values. e dissolution and precipitation of components may alter the surface morphology of the material, thus leading to variations in its properties [21,27,44]. Another factor that may have affected not only hardness, but also surface roughness, as already mentioned, is the wear that occurs naturally over time, or as simulated in this study, by brushing. is wear due to brushing can alter the aesthetic and structural characteristics of dental materials, leading to notable effects on hardness [45]. Kyoizumi et al. [46], however, in their study concluded that there is an influence of brushing on material properties, especially regarding the variation of the bristles, whether softer or harder. However, more than brushing, wear changes come from the combination with the type of material, not just the brushes. ey conclude that the hardness grades of toothbrushes have minor effects on abrasion and surface roughness of composite resins. More in-depth studies on wear are needed, as it is very complex, especially evaluating the microstructural part of the surface of materials, since there is still no standardization in the evaluation of this property [47]. e use of hydroxyapatite as a remineralization system is based on a biomimetic approach that aims to restore the tooth with the same substance that constitutes its hard tissues [48]. In the study by Butera et al. [49], it was possible to observe that the use of a dentifrice containing Zn-carbonate hydroxyapatite on composite resin in the oral environment increased the deposition of calcium and silicon indicating the presence of remineralizing activity, being also a mechanism that can collaborate for the prevention of secondary decay. 
e findings of this research together with those found in the literature indicate the potential that exists in the association and use of hydroxyapatite with restorative materials. is study has as a limitation of being in vitro, and the use of only one GIC, which was used because it is still one of the gold standards in the literature and the material of choice in the pediatric clinic of the institution where the research was conducted. Standardization when simulating brushing is also a limitation, as it is susceptible to several factors (brush type, applied force, and time). e immersion solution also has limitations. Artificial saliva is the first option when thinking about simulating the oral cavity, but considering other tests to be performed by this study group, deionized water was used. is can lead to differences in the results, mainly because artificial saliva contains significant amounts of calcium and phosphate that can influence the properties of the GIC [50]. However, studies that compared immersion in artificial saliva and deionized and/or distilled water did not observe significant differences in several properties such as compressive strength [50] and surface degradation of the material [51] and found the same pattern of fluoride release [52]. And as all groups in this study were immersed in the same solution, the results are subject to comparison and validation. In addition, this study was carried out mediately, requiring evaluations and confirmations of findings in longterm studies since in the search for a dental material with better properties for clinical use, the analysis of its properties and composition is interesting, guaranteeing adequate antibacterial activity and greater longevity of the restorations, without suffering excessive wear. e association of GIC with NPs or with BAGs has been widely studied, and in this same study, we brought the comparison of a type of NP (NPHAp) and a type of BAG (45S5), which is not easily found in the literature. Conclusion It is concluded that the association of NP or BAG with the GIC generated changes in the properties studied, and the association of NPHAp with the GIC is the most promising one, since it presented satisfactory values for surface hardness. However, conventional GIC not associated with NPHAp or BAG is still the best option found in vitro; since it presented the best results, it is already in the market and is economically the most viable option. is study is considered to be of great clinical relevance since this GIC is widely used in pediatric dentistry, and constant scientific investigations to improve its properties are important to ensure its long-term clinical success. Disclosure is manuscript was written based on the master's thesis by Rafael Amorim Martins, which can be found in the Institutional Repository of UNESP. Conflicts of Interest e authors declare no conflicts of interest regarding the publication of this paper.
2022-02-23T16:23:36.619Z
2022-02-21T00:00:00.000
{ "year": 2022, "sha1": "34f5529624ee175cfa8e1f4c3ca5cda14aa3b129", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ijd/2022/1641041.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3b31f33ef9e652cde25f4c187929567711c6c8c8", "s2fieldsofstudy": [ "Materials Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
10764142
pes2o/s2orc
v3-fos-license
Annexin A2 promotes phagophore assembly by enhancing Atg16L+ vesicle biogenesis and homotypic fusion Plasma membrane budding of Atg-16L-positive vesicles represents a very early event in the generation of the phagophore and in the process of macroautophagy. Here we show that the membrane curvature-inducing protein annexin A2 contributes to the formation of these vesicles and their fusion to form phagophores. Ultrastructural, proteomic and FACS analyses of Atg16L-positive vesicles reveal that 30% of Atg16L-positive vesicles are also annexin A2-positive. Lipidomic analysis of annexin A2-deficient mouse cells indicates that this protein plays a role in recruiting phosphatidylserine and phosphatidylinositides to Atg16L-positive vesicles. Absence of annexin A2 reduces both vesicle formation and homotypic Atg16L vesicle fusion. Ultimately, a reduction in LC3 flux and dampening of macroautophagy are observed in dendritic cells from Anxa2−/− mice. Together, our analyses highlight the importance of annexin A2 in vesiculation of a population of Atg16L-positive structures from the plasma membrane, and in their homotypic fusion to form phagophore structures. For phagophores that originate from the plasma membrane, it has been proposed that Atg-16L-positive vesicles directly bud from the plasma membrane, and generate pre-phagophore structures either by direct homotypic fusion of vesicles, or after trafficking to early/recycling endosomes, by heterotypic fusion with Atg9 þ vesicles 2,4,5 . Subsequent pairing of Atg16L with the conjugated protein pair Atg12-Atg5 results in the generation of tubulovesicular double membranes, which elongate further to form the autophagosome 7 . Coincident with the arrival of Atg12/5 to the phagophore, Atg16L-positive compartments also acquire LC3, which gives rise to the mature phagophore. Once the elongation is complete, the double membranes seal to complete the autophagosome; Atg12, Atg5 and Atg16L are then released into the cytosol, whereas part of LC3 remains associated with the autophagosome until either its fusion with lysosomes to become an autolysosome, or its fusion with late endosomes to form an amphisome. The machinery regulating Atg16L vesicle homotypic fusion was recently found to comprise the SNARE protein VAMP7 and its partner SNAREs (syntaxin 7, syntaxin 8 and Vti1b) 7 . On the other hand, heterotypic fusion of Atg16L-Atg9 vesicles relies on the SNARE protein VAMP3 (ref. 4). Both Atg16L þ and Atg9 þ vesicles have been observed in Rab11 þ -recycling endosomes, where the fusion events are likely to occur 4,5 . A recent analysis performed in our laboratory on annexin A2 þ -purified vesicles 26 indicated that some of these vesicles were Atg16L þ . In the present study, we investigated the role of annexin A2 in Atg16L þ vesicle formation, and its overall participation in MA. Our data indicate that annexin A2 aids the vesiculation of Atg16L-positive vesicles at the plasma membrane by recruiting phosphatidylserine (PS) and phosphatidylinositides, and by promoting Atg16L-mediated fusion to form phagophore structures in early MA. Results Annexin A2 associates with Atg16L þ vesicles. In agreement with the reported biological roles of annexin A2, ultrastractural analysis of immunogold-stained, primary mouse dendritic cells showed the presence of annexin A2 at the plasma membrane, in the cytoplasm, bound to vesicles of variable size and in late endosomal compartments (Fig. 1a-c). 
Annexin A2 staining was also observed on phagophore-like structures present in the cytosol (Fig. 1d), on ~20% of autophagosomes (Fig. 1e), and on vesicles budding from the plasma membrane (Fig. 1c), reflecting results reported previously for Atg16L+ vesicles involved in phagophore formation 7. Thus, we asked whether both proteins could be involved in cellular events that provide membrane for phagophore formation in the initial stages of MA. Total cellular vesicles were prepared using ultracentrifugation of the cellular cytosol from the mouse dendritic cell line JAWS II (DC). Pelleted vesicles were stained with a monoclonal antibody to Atg16L, or an isotype control, and sorted using flow cytometry to specifically separate Atg16L+ vesicles from the total vesicle population (Fig. 1f). Vesicle lysates were resolved using SDS-PAGE and immunoblotted for annexin A2, with total cell lysates and cytosol included as positive controls (Fig. 1g, Supplementary Fig. 1). To establish the Ca2+ dependence of annexin A2 association with vesicular structures, the vesicles were left untreated or incubated with EDTA before lysis. Western blot analysis revealed the presence of annexin A2, but not annexin A5, in Atg16L-positive vesicles, and confirmed the role of Ca2+ in recruitment of annexin A2 from the cytosol (Fig. 1g, Supplementary Fig. 1). Colocalization between Atg16L and annexin A2 was further confirmed using immunofluorescence (Fig. 1h).

Proteomic analysis of Atg16L/annexin A2+ vesicles. Next, we investigated the protein composition of annexin A2+/Atg16L+ vesicles to further define their role in MA. Total cellular vesicles were prepared from isolated DC cytosol. Following staining with Atg16L and annexin A2 antibodies, doubly positive vesicles were sorted (Fig. 2a). Quantitative FACS analysis indicated that 28% of the total vesicles were Atg16L+. These data are consistent with the notion that, unlike in other cell types, autophagy is highly active in dendritic cells even under physiological conditions, due to their role in immunosurveillance, which requires constant transport of the extra- and intracellular proteome to endo/lysosomal compartments 27-29. In addition, dendritic cells, as opposed to other cell types, are highly phagocytic. Thus, it is likely that many of the vesicles that give rise to phagophores derive from the plasma membrane in this cell type. Of the total Atg16L+ vesicles, ~30% were also annexin A2+, indicating that the combined markers define a subpopulation of autophagic precursors (Fig. 2a). Likewise, only about half of annexin A2+ vesicles were also Atg16L+ (Fig. 2a), consistent with the notion that annexin A2 is implicated in other cellular functions besides autophagy. The purity of the sorted vesicles was confirmed with ultrastructural morphology (Fig. 2b). To further analyse the origin of the vesicles, sorted material was lysed and proteins resolved with SDS-PAGE. Excised bands were digested with trypsin and the retrieved peptides analysed using mass spectrometry (Fig. 2c,d, Table 1). Several proteins known to participate in vesicle formation at the plasma membrane were identified; these included clathrin, the AP-2 adaptor complex, and accessory proteins such as synaptotagmin II, an AP-2 binder, and endophilin A3, a protein involved in vesicle fission. Cytoskeletal proteins, known to be involved in vesicle rocketing and movement, were also detected; these included actin, dynactin and cofilin 1 (ref. 24).
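The gating percentages above imply a simple set of relationships between the sorted subpopulations. The short sketch below (Python, with the percentages taken from the text; the derived overall annexin A2+ fraction is an inference for illustration, not a reported value) makes that arithmetic explicit.

```python
# Fractions reported in the text (FACS gating of total cytosolic vesicles)
frac_atg16l = 0.28               # 28% of all vesicles are Atg16L+
frac_anxa2_given_atg16l = 0.30   # ~30% of Atg16L+ vesicles are also annexin A2+
frac_atg16l_given_anxa2 = 0.50   # about half of annexin A2+ vesicles are Atg16L+

# Double-positive vesicles as a fraction of the total population
frac_double = frac_atg16l * frac_anxa2_given_atg16l        # ~0.084

# Implied overall annexin A2+ fraction (derived, not reported):
# P(A2+) = P(A2+ and Atg16L+) / P(Atg16L+ | A2+)
frac_anxa2 = frac_double / frac_atg16l_given_anxa2         # ~0.17

print(f"Double-positive vesicles: {frac_double:.1%} of total")
print(f"Implied annexin A2+ vesicles: {frac_anxa2:.1%} of total")
```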
In addition, the SNARE protein VAMP7 and its binding partners Vti1b, Hrb and ARF6 were also identified 7. These SNAREs were previously shown to be associated with Atg16+ vesicles (Table 1) 7. VAMP3, the SNARE involved in heterotypic Atg16L-Atg9 vesicle fusion, was also detected by proteomic analysis 5. Girdin and Aurora kinase A, inhibitors of LC3 vesicle binding, were also sequenced 30,31, as well as several proteins involved in phosphatidylinositol (PI) metabolism including synaptojanin 2, an enzyme that dephosphorylates PI at positions 3, 4, 5 of the inositol ring to form PIP2. Finally, Vps13, a vacuolar sorting protein required for efficient phagocytosis, membrane bending and PI-phosphate regulation, was also detected 32,33 (Table 1). Together, the results clearly validate that the Atg16+/annexin A2+ vesicle subpopulation originates from the plasma membrane and contains multiple molecular effectors involved in vesicle docking and fusion.

Annexin A2 generates PI- and PS-enriched Atg16L+ vesicles. At the plasma membrane, annexin A2 facilitates the formation of lipid microdomains enriched in the phosphoinositide PI(4,5)P2, which promotes membrane deformation and actin-mediated vesicle rocketing 19,23. Conversely, annexin A2 binds PS to facilitate endosomal biogenesis or membrane repair in both endosomes and at the plasma membrane 21,26. To determine whether the formation of Atg16L+/annexin A2+ vesicles is supported by PS and/or PI recruitment, Atg16L+ vesicles were sorted from the cytosol of bone marrow dendritic cells (BMDCs) from wild-type (Anxa2+/+) and annexin A2 knockout (Anxa2−/−) mice (Fig. 3a). Total lipids were extracted from 1 × 10^5 vesicles and separated using thin layer chromatography. Purified lipids were used as controls (Fig. 3b). No differences were observed between Anxa2+/+ and Anxa2−/− vesicles with respect to cholesterol and cholesterol esters, the most abundant vesicular lipids (Fig. 3b). In addition, no apparent differences were observed in the abundance of PE, the lipid required for LC3 recruitment to the phagophore, between vesicles from the two genotypes. On the other hand, a quantitative decrease in recovered PS was observed in Anxa2−/− versus Anxa2+/+ vesicles. Of the three analysed phosphatides (PI, PIP2 and PIP3), only trace amounts of PI were detected in Anxa2+/+, but not Anxa2−/−, vesicles (Fig. 3c); this result is not surprising considering the low sensitivity of thin layer chromatography and the very low abundance of each of these signalling lipids in membrane structures. To further analyse the lipid content of Anxa2+/+ and Anxa2−/− vesicles, we performed tandem mass spectrometry (MS/MS) analysis on purified vesicle-derived lipids (Fig. 3d, Supplementary Data 1). MS/MS revealed that both PA and PE were present in vesicle lipids from both sources, whereas neither PS nor PI was detected in Anxa2−/− vesicles (Supplementary Data 1, Fig. 3). Using MS, we were able to detect neither PIP2 nor PIP3, both of which are known to be present in trace amounts. Thus, even though it is possible that both lipids are present in the vesicles, they are probably found in very low concentrations, below our level of experimental detection. Together, these data support the requirement for annexin A2 in plasma membrane vesiculation of Atg16L+ vesicles, and its role in forming PI-/PS-enriched microdomains to sustain this process.

Annexin A2 promotes homotypic fusion of Atg16L+ vesicles.
Although annexin A2 is not a fusogenic protein per se, it has been associated with Ca2+-mediated membrane fusion events. In association with its binding partners (S100A10 and S100A11), annexin A2 is known to bridge disparate membranes and promote fusion 17,20. We, therefore, evaluated the potential role for annexin A2 in Atg16L-mediated phagophore formation and elongation using a comparative fluorescence-based fusion assay. Atg16L+ vesicles were isolated from Anxa2+/+ BMDC using Atg16 antibody and FACS sorting. The vesicles were labelled with either red or green fluorochromes, incubated for 30 min in fusion buffer, and equilibrated in a range of Ca2+ concentrations. As expected, fusogenic events were quantitatively proportional to the Ca2+ concentration (Fig. 4a-d). In addition, visualization of fusion products by EM confirmed the presence of elongated, double-membrane structures that possessed ultrastructural characteristics of an emerging phagophore (Fig. 4e). Next, Atg16L+ vesicles isolated from Anxa2+/+ and Anxa2−/− BMDC were subjected to in vitro fusion at 20 mM Ca2+. In immunofluorescence studies, we noted an equal presence of single and double vesiculate structures, as well as elongated fusion products, in Anxa2+/+ samples, whereas the majority of Anxa2−/− samples contained mainly single vesicles with a dramatically reduced number of fusion structures (Fig. 4f). Fusion events could be partially restored in Anxa2−/− samples upon addition of recombinant annexin A2 (Fig. 4g,h). (Figure 4f-h legend: quantification of fusion events among Atg16L+ vesicles sorted from Anxa2+/+ and Anxa2−/− DCs, with or without reconstitution with annexin A2; mean and s.e. calculated from n > 300 events; *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001; all unpaired two-tailed Student's t-test.) Together, the data indicated an additional role of annexin A2 in homotypic fusion of Atg16L+ vesicles 7.

Annexin A2 is required to sustain autophagic flux. To analyse the consequences of annexin A2 depletion on MA, we first stained Anxa2+/+ and Anxa2−/− dendritic cells for endogenous LC3 (Fig. 5a,b). We found a significant reduction in the number of LC3+ vesicles in Anxa2−/− cells, but no change in the average size of the LC3+ puncta (Fig. 5b). Electron microscopy also confirmed the lower abundance of autophagic vacuoles in Anxa2−/− cells, with no apparent change in the morphology and size of completed vesicles (Fig. 5c). These findings support the hypothesis that annexin A2 is required for the early steps of autophagosome biogenesis (nucleation); however, it is not a determinant of final autophagosome size, in which Atg16L fusogenicity is less important. The reduced content of autophagic vacuoles in Anxa2−/− dendritic cells could reflect either slower formation or accelerated lysosomal degradation of the formed vesicles. Direct analysis of autophagic flux, assessed by measuring LC3 degradation (Fig. 5d-g), and maturation of autophagic compartments, highlighted by the tandem reporter mCherry-GFP-LC3 (Fig. 5h-j), suggests that depletion of annexin A2 leads to reduced autophagic flux. Immunoblot analysis revealed lower steady-state levels of LC3-II (Fig. 5d,e, Supplementary Fig.
1), and reduced accumulation of LC3-II upon treatment with inhibitors of lysosomal proteolysis in Anxa2−/− versus Anxa2+/+ cells (Fig. 5d,f). Analysis of autophagosome biogenesis, assessed as the increase in LC3-II at two times during proteolysis inhibition, suggests that the reduced autophagic flux in Anxa2−/− cells is due mainly to reduced autophagosome formation (Fig. 5i,j). We also used the tandem reporter mCherry-GFP-LC3, which allows for a more dynamic measurement of both autophagosome formation and maturation; acidification of autophagosomes upon lysosomal fusion quenches their green fluorescent protein (GFP) fluorescence, whereas mCherry fluorescence persists, allowing for identification of autolysosomes as red-only puncta. Direct fluorescence confirmed that Anxa2−/− cells maintained in serum-containing medium had a reduced number of both autophagosomes and autolysosomes (Fig. 5h,j). In agreement with these morphological data, Anxa2−/− cells maintained under these conditions also showed a significant decrease in the total degradation rates of long-lived proteins (Fig. 5k) that was even more pronounced when we analysed the fraction of proteins degraded in the lysosomal compartment, as evidenced by their sensitivity to inhibition of lysosomal proteolysis, or those whose degradation was directly dependent on active autophagy, as evidenced by their sensitivity to the inhibition of autophagosome biogenesis by the PI3K inhibitor 3-methyladenine (Fig. 5l). Interestingly, the reduced autophagosome content in Anxa2−/− cells was no longer observed upon induction of autophagy by starvation (Fig. 5h,i). These data suggest that the contribution of the plasma membrane, and consequently the involvement of annexin A2 in early precursor fusion, is more important for basal, quality-control autophagy than for induced autophagy. Lastly, analysis of the compartments highlighted by the tandem LC3 reporter revealed that, in addition to the reduced number of autophagic vacuoles in the Anxa2−/− cells, there was a higher proportion of autophagosomes than autolysosomes in these cells (Fig. 5h,j). The slower maturation of autophagic compartments in cells defective in annexin A2 was also confirmed by electron microscopy analysis (Fig. 5c). In contrast to the advanced degradation of cargo in most of the autophagic vacuoles in Anxa2+/+ cells (compatible with post-lysosomal fusion compartments), cargo was still clearly distinguishable inside autophagic vacuoles (pre-lysosomal fusion) in the Anxa2−/− cells. In fact, this defective maturation was still observed upon serum removal, despite the fact that autophagosome biogenesis was partially restored under these conditions (Fig. 5i,j). In agreement with these findings, the fraction of long-lived proteins degraded by MA (sensitive to 3MA) upon serum removal was still significantly lower in Anxa2−/− cells (Fig. 5m). Interestingly, despite reduced autophagic degradation in Anxa2−/− cells during serum deprivation, total degradation rates of long-lived protein were comparable in both cell groups under these conditions (Fig. 5k). This sustained protein degradation is likely due to compensatory upregulation of other autophagic pathways, because the fraction of lysosomal degradation persistent upon inhibition of MA was elevated in these cells upon serum removal (Fig. 5m).
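The flux calculations referred to above (and detailed in the Methods) reduce to simple differences in LC3-II signal. The sketch below is a minimal illustration in Python; the densitometry values are hypothetical placeholders, and `lc3_ii` is assumed to hold LC3-II signal already normalized to a loading control.

```python
# Hypothetical, normalized LC3-II densitometry values (arbitrary units)
lc3_ii = {
    ("untreated", 0): 1.0,
    ("inhibitors", 2): 1.8,   # 2 h of lysosomal protease inhibition (NH4Cl/leupeptin)
    ("inhibitors", 4): 2.5,   # 4 h of inhibition
}

# Autophagic flux: LC3-II that accumulates when lysosomal degradation is blocked
flux = lc3_ii[("inhibitors", 4)] - lc3_ii[("untreated", 0)]

# Autophagosome formation index: increase in LC3-II between two times of inhibition
formation = lc3_ii[("inhibitors", 4)] - lc3_ii[("inhibitors", 2)]

print(f"Autophagic flux (a.u.): {flux:.2f}")
print(f"Formation index (a.u. per 2 h): {formation:.2f}")
```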
It is possible that the differences in lipid composition (PS and PI) in autophagosomes formed in the absence of annexin A2 determine their lower fusogenic capability with respect to endosomal and lysosomal compartments. Overall, our findings reveal that annexin A2 is required for the homotypic fusion of Atg16L+ vesicles in the formation of pre-autophagosome structures, and suggest that proper recruitment of phospholipids to the forming autophagosome membrane is required for the later maturation of the autophagic compartments.

Discussion. During the last few years it has become apparent that the phagophore may arise from multiple membranes, including the ER-Golgi, mitochondria and plasma membrane 1,4,7-10,13. It is possible that in different cell types the phagophore originates preferentially from different subcellular locations, or that, even within the same cell, the site of origin can change according to the nature of the stressor that induces MA. The plasma membrane origin of the phagophore relies on Atg16L-positive vesicles, which bud inwardly and give rise to phagophores through homotypic fusion or heterotypic fusion with Atg9+ vesicles after trafficking to the recycling endosomes 4,5,7. SNARE proteins involved in homotypic fusion have been identified as VAMP7 and its partners, ARF6 and Vti1b, whereas the SNARE that controls heterotypic Atg16+/Atg9+ fusion is VAMP3 (refs 4,7). Annexin A2 is a ubiquitously expressed, soluble protein that has been implicated in multiple biological processes related to membrane trafficking and fusion 17. Our analyses of the early events involved in MA identified two additional functions for annexin A2. First, annexin A2 participates in the generation of PI-/PS-enriched Atg16L-positive vesicles. It was previously shown in liposome-based assays that annexin A2 promotes the recruitment of PI(4,5)P2, cholesterol and glycosphingolipids into liposomes, and also promotes membrane indentations at sites rich in PI(4,5)P2, leading to inward membrane budding and vesiculation 19,20,23. These functions can be performed by the full-length protein, as well as by the C-terminal protein core domain. Annexin A2 has also been shown to recruit PS at both the plasma and endosomal membrane, and it has been proposed that annexin A2 has a scaffolding function in the biogenesis of membrane structures. In particular, by organizing lipid microdomains, annexin A2 regulates membrane curvature, providing a driving force for budding 37. Through proteomic analysis, we have discovered that a subset of annexin A2+ vesicles are also Atg16L+. Further proteomic assessment of vesicle content has identified several proteins previously known to be associated with Atg16L+ phagophore precursors, thus establishing the plasma membrane as the site of vesicular origin. Lipidomic analyses have demonstrated, furthermore, that annexin A2 is required for PS recruitment in Atg16L+ vesicles from the plasma membrane. Although we could not directly demonstrate the presence of PI(4,5)P2 in the Atg16L vesicles, it is clear that annexin A2 is required for enrichment of vesicles with the PI(4,5)P2 precursor, PI. The importance of PI and PS enrichment in Atg16L+ vesiculation is highlighted by the 30% reduction in the proportion of Atg16L vesicles in Anxa2−/− dendritic cell cytosol.
Phospholipid enrichment and associated vesiculation are generated by the ability of annexin A2 to bind PS and PI in a Ca2+-dependent manner, and also by the ability of annexin A2 to bind and organize cytoskeleton proteins for inward budding. Indeed, annexin A2 can bind the barbed ends of F-actin filaments and generate vesicle rocketing 23. Annexin A2-mediated vesiculation is likely aided by other proteins, such as WASP, which induces actin assembly at the surface of endomembranes, because the WASP-binding partner WIPF1 was identified in our proteomic analysis. Together, our data indicate that annexin A2 generates PS and PI microdomains, and promotes Atg16L+ vesiculation. It remains to be established whether the remaining Atg16L+ vesicles originate from other subcellular compartments in an annexin A2-independent manner. A second annexin A2 function, reported here, is to facilitate homotypic Atg16L+ vesicle fusion, which is required for phagophore formation and elongation. Our in vitro assay demonstrates that vesicles lacking annexin A2 fuse less, and the autophagosomes that do form display further fusion defects when they encounter endosomes and lysosomes. Importantly, this decreased fusion capacity translates in vivo into a decrease in dendritic cell autophagic flux, as determined by a reduction in LC3 processing. In Atg16L+ vesicle fusion events, annexin A2 likely serves to bridge disparate vesicles. This role is well characterized for the heterotetrameric complex formed by the association of two molecules of annexin A2 and two of an S100 protein, either S100A10 or S100A11. During the bridging event, each molecule of annexin A2 engages vesicular PS, or other anionic phospholipids, electrostatically, whereas the S100A10/11 proteins serve to stabilize the annexin A2 dimer. This function is mediated by the core domain of annexin A2 and requires Ca2+ (ref. 17). It is important to note that our model was developed in primary dendritic cells. Compared with other cell types, dendritic cells have the highest rate of plasma membrane turnover, due to their incessant endocytic activity, which is tightly connected to their role in immunosurveillance. Thus, in this cell type it is possible that the contribution of the plasma membrane to the origin of phagophore precursors is greater than in other cell types. Indeed, among the total vesicle population prepared from dendritic cell cytosol, a large proportion was Atg16L+, indicating a cellular commitment to autophagic processes. In conclusion, our data support the concept that annexin A2 plays a central role in the formation and fusion of Atg16L+ vesicles by orchestrating recruitment of phosphoinositides and PS to vesicular membranes, and by coordinating vesicular budding and homotypic fusion. Our data elucidate a key early mechanistic step in MA.

Methods. Mice and bone marrow cultures. Female Anxa2+/+ and Anxa2−/− mice on the C57Bl6 background, 8-12 weeks old, were maintained in animal facilities at Weill Cornell Medical College and Albert Einstein College of Medicine. Animal killing and bone marrow harvesting were carried out according to protocols approved by the Animal Institute Committee of both institutions. To obtain BMDCs, the bone marrow was harvested from the femurs and cells were cultured for 7 days in granulocyte-macrophage colony-stimulating factor (10 ng ml−1) in complete DMEM 26.

Immunogold labelling of annexin A2.
BMDCs were fixed in 3.7% paraformaldehyde, 0.1% glutaraldehyde in dPBS for 20 min, followed by permeabilization in 0.1% Triton X-100 for 15 min at 25 °C. Cells were extensively washed with dPBS and incubated in blocking buffer (1% FBS, 1% BSA in dPBS) for 45 min. BMDCs were then incubated overnight with 4 µg ml−1 goat anti-mouse annexin A2 antibody (clone A-15, Santa Cruz Biotechnology) at 4 °C, and subsequently with 0.8 µg ml−1 of ultrasmall gold-conjugated rabbit anti-goat IgG (Electron Microscopy Sciences, Hatfield, PA, USA). Silver enhancement of ultrasmall conjugates was performed using the Aurion R-Gent SE-EM kit (Electron Microscopy Sciences). The cells were briefly post-fixed in 1.0% aqueous osmium tetroxide and processed for transmission electron microscopy (TEM).

Transmission electron microscopy. Primary BMDCs were fixed in 2.5% glutaraldehyde, 2% paraformaldehyde in 0.1 M sodium cacodylate buffer, pH 7.4, for 3 h at 4 °C. Samples were post-fixed in 1.0% aqueous osmium tetroxide (pH 7.4) for 1 h at 4 °C and dehydrated in a series of water/acetone mixtures progressing to 100% acetone. Cells were infiltrated in sequentially increasing concentrations of Embed 812-Araldite (Electron Microscopy Sciences), and embedded in BEEM capsules. Ultrathin sections were stained with uranyl acetate followed by lead citrate, and viewed with a Jeol JEM-1200EX transmission electron microscope (Jeol Ltd., Akishima, Japan) at 80 kV.

Vesicle preparation and sorting. Cytosol obtained by cavitation (see Vesicle fusion, below) was centrifuged at 2,000 × g for 15 min. The supernatant was then centrifuged at 100,000 × g for 1 h to eliminate residual fragments of the plasma membrane, ER, organelles and Golgi. The fraction of interest, which contained mostly vesicles, was harvested from the supernatant by pelleting at 300,000 × g for 1 h. Vesicles were labelled with primary anti-annexin A2 (clone A-15, Santa Cruz Biotechnology) and anti-Atg16L (clone AB1, Sigma, or clone 1F12, MBL) antibodies, followed by Alexa-488-conjugated anti-goat and Texas Red-conjugated anti-rabbit secondary antibodies (Jackson ImmunoResearch), in staining buffer containing 220 mM KCl, 5 mM NaCl, 5 mM NaH2PO4, 0.5 mM MgCl2, pH 6.0. Antibody-stained vesicles were sorted on a Becton Dickinson FACSAria high-speed cell-sorting flow cytometer. Data acquisition and analysis were performed using the BD FACSDiva software.

Proteomic analysis. FACS-sorted annexin A2+/Atg16L+ vesicles were lysed in 1% NP-40, 50 mM Tris/HCl and 150 mM NaCl for 1 h on ice. Protein concentration was determined by the Bradford method (Bio-Rad Laboratories). Proteins (15 µg) were resolved on a 10% SDS-PAGE gel, and protein bands were visualized with the Silver Stain Kit for Mass Spectrometry (Pierce). Each lane was cut into slices, which were destained and washed with 50 mM ammonium bicarbonate (NH4HCO3) and acetonitrile. Samples were then trypsin-digested, and peptides were subjected to nano-LC-MS/MS sequencing on an LTQ-Orbitrap Velos HR mass spectrometer. MGF files were generated from the raw data files and searched against the SwissProt Mus musculus database using Mascot (version 2.1.04).

Vesicle fusion. Total cytosolic vesicles were prepared from Anxa2+/+ and Anxa2−/− BMDC, as well as JAWS II cells, by cavitation and ultracentrifugation, as described above. Vesicles were stained with Atg16L primary antibody; half of each vesicle preparation was stained subsequently with Alexa-488-conjugated mouse anti-rabbit IgG (Jackson ImmunoResearch), and the other half with Texas Red-conjugated goat anti-rabbit IgG (Invitrogen).
In some experiments, Atg16L green and red FACS-sorted vesicles were incubated with increasing concentrations of Ca2+ (0, 2, 20 and 200 mM) for 40 min at 37 °C in 220 mM KCl, 5 mM NaCl, 5 mM NaH2PO4, 0.5 mM MgCl2, at pH 6.0. In other experiments, recombinant annexin A2 was added to the fusion buffer of Anxa2−/− vesicles. In all experiments, single vesicles, double vesicles and fusion structures were enumerated in both green and red channels under an Olympus IX70 inverted microscope equipped with I.P. Lab Spectrum imaging software (Scanalytics Inc). A minimum of 300 vesicles and fusion events were counted in each experiment. For electron microscopy analyses, 10 µl of each sample was applied to a Formvar-coated grid and negatively stained with 1% PTA.

Analysis of autophagic flux. Autophagic flux was determined by immunoblot analysis of cells treated with or without 20 mM NH4Cl and 100 µM leupeptin to block lysosomal proteolysis 1. The difference in LC3 levels between cells treated with or without the inhibitors was used to calculate autophagic flux, whereas the difference in LC3 levels at two times during the inhibition of proteolysis was used as an index of autophagosome formation. Autophagic flux was also measured upon transfection of the cells with the tandem reporter mCherry-GFP-LC3 (Addgene) 38. Cells were imaged 24 h after transfection and the number of mCherry-positive vesicles (autophagic vacuoles), mCherry- and GFP-positive vesicles (autophagosomes), and mCherry-only-positive vesicles (autolysosomes) was calculated after thresholding, using the particle measure tool of the ImageJ software (NIH).

Extraction of lipids and phosphoinositides. Extraction of lipids from the Atg16L+/Anxa2+/+ and from the Atg16L+/Anxa2−/− autophagic vesicles was performed using a modification of the protocol developed by Honeyman 39. Briefly, an acidic HCl solution was used to improve recovery of phosphoinositides, which tend to bind strongly to proteins during the standard extraction with chloroform/methanol solvent. Lyophilized vesicles were solubilized in 100 µl of H2O, to which 375 µl of chloroform/methanol/12 N HCl (2/4/0.1, v/v) was added. After thorough mixing, 125 µl of chloroform was added, and the solution vortexed for 30 s, followed by the addition of another 125 µl of H2O. After 10 min of centrifugation at 2,000 × g, the lower chloroform layer was removed and transferred to a glass tube for evaporation in a vacuum centrifuge. The lipid film was rapidly redissolved in 50 µl of 1:1:0.3 chloroform/methanol/water, and subjected to further analysis by thin layer chromatography (TLC) and nano-LC MS/MS.

TLC lipid analysis. Lipid extracts from Atg16L+/Anxa2+/+ and Atg16L+/Anxa2−/− vesicles were spotted on silica-coated plates and developed in a closed jar with chloroform:methanol:water (65:25:1), supplemented with 1-2% phosphoric acid (H3PO4) to enable identification of major phospholipids and cholesterol. After the run, the TLC plate was air-dried and developed with iodine vapours. The dark-brown spots corresponding to the lipids of interest were identified using the corresponding lipid standards (PS = phosphatidylserine, PE = phosphatidylethanolamine, PC = phosphatidylcholine, cholesterol) and the specific phosphoinositides (PI = phosphatidylinositol, PIP = phosphatidylinositol monophosphate, PIP2 = phosphatidylinositol diphosphate and PIP3 = phosphatidylinositol triphosphate).

MS analysis of lipid/phospholipid extracts.
Lipid extract samples were diluted 1/10 in 1:1:0.3 (v/v/v) CHCl3:MeOH:H2O containing 25 mM piperidine 40. Each sample (10 µl) was loaded into a 2-µm static nanospray emitter (PicoTips, New Objective). Using the static nanospray source at 1.0-1.4 kV, the sample typically lasted as a stable spray for 30 min. A high-resolution mass spectrometer (Orbitrap Velos, Thermo Scientific) was used in negative ionization mode for targeted and untargeted tandem MS (MS/MS), using a 1.5 m/z isolation width and either collision-induced dissociation, higher-energy collisional dissociation, or both, at energies of 0 and 55%. Data collected at 0% provided the precursor ion m/z values obtained from the Orbitrap or from the ion trap. The untargeted approach utilized the Ion Map feature of the Orbitrap, where profile MS/MS data (50-2,000 m/z) were collected from precursor ions from 200 to 2,000 m/z, at a 2 m/z step size. Data were collected with the Xcalibur package (ThermoFinnigan).

Lipidomics. The MS Analysis Tool provided by the Resources at the LIPID MAPS website (http://www.lipidmaps.org/resources/resources.html) was used to identify lipid species from MS/MS fragmentation profiles (Supplementary Data 1). Phospholipids (PI and PS) showing exact matches of their fragment ions with our experimental MS/MS data are shown in Fig. 3. The phospholipids are presented as the class abbreviation followed by xx:y, where xx is the total number of carbon atoms in the fatty acid chains and y is the number of double bonds.

Analysis of intracellular protein degradation rates. Degradation of long-lived proteins in cultured cells was measured following 48-h labelling of cells with [3H]leucine (2 µCi ml−1) at 37 °C and pulse-chase experiments 41. Cells were then extensively washed and placed in chase medium containing excess unlabelled leucine. Aliquots of the medium taken at different times were subjected to acid precipitation and then filtered through a 0.22-µm filter to separate the filter-retained fraction (proteins) from the flow-through fraction (small peptides and amino acids). Proteolysis was calculated as the amount of acid-precipitable radioactivity (protein) transformed to acid-soluble (amino acid) radioactivity during the incubation. MA-dependent degradation was inhibited using 10 mM 3-methyladenine, and lysosome-dependent degradation was inhibited using a mixture of 20 mM ammonium chloride and 100 µM leupeptin (N/L) 41.

Statistical methods. All numerical results are reported as mean ± s.e.m. Statistical significance of the difference between experimental groups was analysed using the unpaired two-tailed Student's t-test.
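The proteolysis measurement described above boils down to the fraction of labelled protein converted to acid-soluble radioactivity, and group comparisons to an unpaired two-tailed t-test. The sketch below (Python with NumPy/SciPy) illustrates that calculation; all count values are hypothetical placeholders, not data from the study.

```python
import numpy as np
from scipy import stats

def percent_proteolysis(acid_soluble_cpm, acid_precipitable_cpm):
    """Fraction of labelled protein converted to acid-soluble radioactivity (%)."""
    total = acid_soluble_cpm + acid_precipitable_cpm
    return 100.0 * acid_soluble_cpm / total

# Hypothetical replicate measurements at a single chase time (cpm)
wt = np.array([percent_proteolysis(s, p) for s, p in [(420, 3580), (450, 3550), (400, 3600)]])
ko = np.array([percent_proteolysis(s, p) for s, p in [(300, 3700), (280, 3720), (320, 3680)]])

# MA-dependent degradation would be the difference with vs without 3-MA (not shown here)
t_stat, p_value = stats.ttest_ind(wt, ko)   # unpaired two-tailed Student's t-test
print(f"WT: {wt.mean():.2f} +/- {wt.std(ddof=1)/np.sqrt(len(wt)):.2f} % (mean +/- s.e.m.)")
print(f"KO: {ko.mean():.2f} +/- {ko.std(ddof=1)/np.sqrt(len(ko)):.2f} % (mean +/- s.e.m.)")
print(f"two-tailed p = {p_value:.4f}")
```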
Implication of the inferior vena cava in the generation of reentry in the pectinate muscles

Atrial fibrillation (AF) is the most common cardiac arrhythmia and its prevalence increases with age. The most dangerous and complex arrhythmias are the result of a phenomenon known as reentry. In experimental studies, the vena cava has been associated with ectopic activity that promotes the generation of reentries. In this work, the changes caused by electrical remodeling in an atrial myocyte action potential (AP) model were incorporated into an anatomically realistic three-dimensional model of the human atria with fiber orientation. When applying an ectopic focus near the ostium of the inferior vena cava, a relationship between this activity and the generation of reentries in the pectinate muscles is found. A functional reentry that repeats in time is favored by the anatomy of the pectinate muscles, the anisotropic properties and their non-uniform distribution in the three-dimensional tissue. The existence of a preferential conduction pathway facilitates the initiation of the reentry.

Introduction. Obtaining accurate information about the formation and transmission of the cardiac impulse in normal and pathological conditions has permitted a better understanding of the mechanisms underlying cardiac arrhythmias [1]. Models of cardiac electrical activity are theoretical schemes of electrophysiological phenomena based on mathematical models and help to facilitate the understanding and prediction of their behavior in various normal and pathological situations. Mathematical modeling and anatomical structures, along with computational simulation, contribute to the detailed analysis and comprehension of the source of reentries that give rise to atrial arrhythmias of electrical origin, since the complexity inherent to this phenomenon makes its study very difficult using only the experimental approach. In our work, we used a highly realistic three-dimensional (3D) computational model of the human atria, to which the orientation of fibers was added, in order to analyze the characteristics and the propagation velocity of the action potential under the anisotropic effects of the tissue and the curvature of the wave front. Our model involves fiber orientation, anisotropic conductivity and electrophysiological heterogeneity for different atrial tissues, which allow a higher-precision reproduction of the electrical behavior of the tissue under normal physiological conditions as well as under electrical remodeling, thus allowing a better analysis of the non-linear propagation dynamics in an excitable medium to understand heart diseases.
Keywords: pectinate muscles, inferior vena cava, anisotropy, atrial arrhythmia, reentry

In this simulation study, we analyzed the way in which a complex atrial structure such as the pectinate muscles (PM), under remodeling conditions, facilitates the generation of a reentry. In addition, the factors determining the spread of cellular activation in cardiac tissues [2] are studied, among which intercellular connections and the spatial arrangement of cardiac fibers are highlighted. Since propagation occurs within a multicellular environment whose properties are anisotropic, the orientation of cardiac fibers determines the manner of conduction in a structure such as the pectinate muscles, in which propagation is preferably longitudinal. Several studies [3-5] have determined that PM play an important role in the generation and maintenance of reentry and have tried to relate them to the complexity and thickness of this anatomical structure. They conclude that the local tissue thickening observed in the PM facilitates the emergence of sustained circular reentry. The PM offer alternative pathways that act as a bridge, or as long-range connections with a slightly faster conduction velocity, giving rise to epicardial breakthrough patterns between different cardiac areas, affecting the conduction scheme and, therefore, the induction and evolution capacity of the arrhythmias. Given that the vena cava has been implicated as a place of ectopic activity that initiates and perpetuates atrial fibrillation, we have chosen this structure, for its proximity to the PM, to analyze its electrical behavior through simulation.

Electrical remodeling. The Courtemanche [6] cellular electrophysiology model for human atrial myocytes was implemented, which reproduces cellular electrical activity under physiological conditions; this model has 21 variables, expressions for 12 transmembrane currents and management of intracellular calcium. For electrical remodeling conditions, the model [7] developed by the same author was modified, and some changes were applied to the model parameters. Since we obtained a total repolarization time of 345 ms under normal conditions and an APD90 (AP duration at 90% repolarization) of 235 ms in tissue, the action potential duration (APD) of this model is extremely long, mainly due to the maximum conductance value of the inward rectifier K+ current (gK1 = 0.09 nS/pF). This small value of gK1 yields a high input membrane resistance (≈174 MΩ) [8]; this is why we modified this value, increasing it to 250% of the control value (gK1 = 0.225 nS/pF), placing it within the range of the measurements made by [9] in atrial cells. Also, the maximum L-type Ca2+ current conductance was modified with a decrease of 70% of the control value (gCaL = 0.03714 nS/pF), and we modified the maximum transient outward K+ current conductance with a decrease of 50% of the control value (gto = 0.0826 nS/pF), coinciding with the recommendation of [7] in atrial cells (Figure 1). At the cellular level, the APD under control conditions has a duration of 305 ms and under remodeling conditions 137 ms, showing a decrease of 55%, very close to the values reported by [10,11] in 1D and 2D, which exclude the fiber orientation.
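The remodeling described above amounts to simple scaling of three maximal conductances, and APD90 is read directly off the simulated voltage trace. The sketch below (Python) illustrates both steps; the control values for gCaL and gto are back-calculated from the remodeled values and percentages given in the text, the voltage trace is a synthetic placeholder rather than a Courtemanche simulation, and the APD90 routine is one common way of defining the metric, not necessarily the exact procedure used by the authors.

```python
import numpy as np

# Conductance scaling used for electrical remodeling (values in nS/pF)
control = {"gK1": 0.09, "gCaL": 0.1238, "gto": 0.1652}   # gCaL, gto back-calculated
scale   = {"gK1": 2.50, "gCaL": 0.30,  "gto": 0.50}      # to 250%, -70%, -50%
remodeled = {k: control[k] * scale[k] for k in control}
print(remodeled)   # {'gK1': 0.225, 'gCaL': 0.03714, 'gto': 0.0826}

def apd90(t_ms, v_mv):
    """APD90: time from the upstroke to 90% repolarization of the AP amplitude."""
    i_up = int(np.argmax(np.gradient(v_mv, t_ms)))        # upstroke = max dV/dt
    v_rest, v_peak = v_mv[0], v_mv.max()
    v90 = v_peak - 0.9 * (v_peak - v_rest)
    below = np.where((t_ms > t_ms[i_up]) & (v_mv <= v90))[0]
    return t_ms[below[0]] - t_ms[i_up] if below.size else np.nan

# Synthetic placeholder trace (illustration only)
t = np.linspace(0, 500, 5001)
v = -81 + 100 * np.exp(-t / 120) * (t > 1)
print(f"APD90 of placeholder trace: {apd90(t, v):.1f} ms")
```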
Anatomical model. A detailed and realistic geometrical model of the human atria was developed, starting from the coarse model of [12]. The three-dimensional anatomical model obtained includes fiber orientation for both atria (left atrium (LA) and right atrium (RA)), in which the sinoatrial node (SAN), the crista terminalis (CT), the fossa ovalis (FO) and its ring, the septum spurium, Bachmann's bundle (BB), twenty pectinate muscles (PM) in the RA free wall (Figure 2), the interatrial septum, the left and right appendages (LAPG and RAPG), the left and right pulmonary veins, the superior and inferior venae cavae, the isthmus of the RA, the vestibule of the tricuspid valve, the vestibule of the mitral valve and the coronary sinus can be highlighted. The surface was adjusted to the natural atrial anatomy following histological observations and the details described in experimental studies [13,14].

Fiber orientation. The method we have used is based on previous studies [15]. Our model was divided into 42 areas according to the orientation of the main muscle bundles (circular, longitudinal, transverse or oblique) in order to separate tissue areas whose fiber direction is uniform. Once a region was defined, we determined the local vector direction of the fiber, considering the effect of the tissue curvature. To determine this, it was necessary to create an imaginary cylinder that wrapped the tissue, in which a guideline was traced as the axis of the imaginary cylinder, a line in space sufficiently separated from the tissue as to wrap it up. The tangent lines of the cylinder correspond to the tissue fibers.

Conductivity properties. In our model, three regions were considered to establish high, medium and low conductivity. The high-conductivity regions corresponded to Bachmann's bundle, the crista terminalis and the pectinate muscles; the low-conductivity regions corresponded to the isthmus and the SAN region. The other regions were taken as medium conductivity. The fossa ovalis region was considered non-conductive. The tissue diffusion constants were set so that the conduction velocity was consistent with the experimental data [16,17]. The diffusion tensor values obtained for both models were 0.6 for high conductivity, 0.2 for medium conductivity and 0.1 for low conductivity. For both models, anisotropy was set according to the relationship between longitudinal and transverse propagation velocity, 10:1 at the crista terminalis [18] and 3:1 for the rest of the atrial tissue. The longitudinal direction followed the path of the tissue fibers.

Numerical and computational methods. The monodomain model, which represents the electrical propagation of the AP along a three-dimensional tissue, is described by the following reaction-diffusion equation [2,19]:

$$\nabla \cdot \left( D_i \nabla V_m \right) = C_m \frac{\partial V_m}{\partial t} + I_{ion} \qquad (1)$$

where $V_m$ represents the potential in the intracellular space, $D_i$ is the anisotropic conductivity tensor, $C_m$ is the membrane capacitance, and $I_{ion}$ corresponds to the set of currents describing the ionic state of the cells in the tissue as a function of time and ionic concentrations. An extracellular space with infinite resistance is assumed. The following boundary condition is imposed:

$$\mathbf{n} \cdot \left( D_i \nabla V_m \right) = 0 \qquad (2)$$

where $\mathbf{n}$ is the normal vector to the surface. To solve the reaction-diffusion equation (1), a parallel code using the finite element method (FEM) was implemented. From this discretization, a system of linear equations with a nonlinear reactive term, represented by $I_{ion}$, appears. The reactive term is solved explicitly, while the temporal equation is solved implicitly.
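A common way to build the anisotropic tensor D_i from a local fiber direction is D = d_t I + (d_l - d_t) f f^T, with d_l along the fiber and d_t across it. The sketch below (Python) illustrates this construction using the diffusion values and velocity ratios quoted above; mapping a 10:1 (or 3:1) conduction-velocity ratio to d_t = d_l / ratio^2 relies on the approximation that conduction velocity scales with the square root of diffusivity, which is an assumption of this sketch rather than a detail reported by the authors.

```python
import numpy as np

D_LONG = {"high": 0.6, "medium": 0.2, "low": 0.1}   # longitudinal diffusion values from the text

def diffusion_tensor(fiber_dir, region="medium", cv_ratio=3.0):
    """Local 3x3 diffusion tensor from a unit fiber direction.

    cv_ratio is the longitudinal:transverse conduction-velocity ratio
    (10 for the crista terminalis, 3 elsewhere). Transverse diffusivity is
    derived assuming CV ~ sqrt(D), i.e. d_t = d_l / cv_ratio**2.
    """
    f = np.asarray(fiber_dir, dtype=float)
    f /= np.linalg.norm(f)
    d_l = D_LONG[region]
    d_t = d_l / cv_ratio**2
    return d_t * np.eye(3) + (d_l - d_t) * np.outer(f, f)

# Example: a crista terminalis element with fibers along the x axis
D = diffusion_tensor([1.0, 0.0, 0.0], region="high", cv_ratio=10.0)
print(np.round(D, 4))   # diag(0.6, 0.006, 0.006)
```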
A hexahedral mesh was built from the three-dimensional anatomical model; it includes 52,906 elements and 100,554 nodes with a spatial resolution ranging from 300 to 700 µm. Equation (1) was numerically solved using the software EMOS [20]. The time step was fixed at 0.0025 ms.

Stimulation protocol. The stimulation protocol implemented in this model is the standard S1-S2 protocol. Initially, a pulse train (S1) is applied in the region of the SAN node with a basic cycle length (BCL) of 600 ms, a duration of 6 ms and an amplitude of 60 µA over an area of approximately 10 mm2; subsequently, a premature stimulus S2, corresponding to an ectopic focus, is applied to a small group of cells of about 3 mm2 at the base of the inferior vena cava. The S2 stimulus was applied in the repolarization phase of the tenth sinus beat.

Results. When applying an ectopic focus at the base of the inferior vena cava, the established protocol was followed. Subsequently, the activation front spreads in the direction of the superior vena cava; the other end goes through the inferior vena cava to the atrioventricular region; the upper end is directed toward the intercaval bundle in search of the superior vena cava, facilitated by the displacement of the front in the direction of the tissue fibers and the high conductivity in the crista terminalis. The front propagates through the free wall of the right atrium, forming a wave that stimulates the base of the PM and generates an anisotropic reentry that repeats in time. Figure 3 shows the propagation sequence of the electrical impulse generated from a focal stimulus triggered in the inferior vena cava at a coupling interval of 182 ms for the electrically remodeled atrial model. At 212 ms, it can be seen how the front moves faster toward the superior vena cava due to the direction of the tissue fibers and the high conductivity of the crista terminalis. At 222 ms, when the front reaches the crista terminalis, it progresses faster, reaching the base of the superior vena cava, while the slow front moves in the direction of the mitral valve around the inferior vena cava. This last front curves in the form of a spiral and reaches the PM located in the upper area of the right atrium at 228 ms. This wave front has a convex curvature that causes a reduction in the conduction velocity [21,22]. In addition, it has been shown that changes greater than 1 mm in the thickness of tissue produce a "source-sink" imbalance [23], which, along with other electrophysiological factors, contributes to the front curvature. At 247 ms the front coming from the crista terminalis reaches the fossa ovalis transversally and propagates in the direction of the left septum. At 260 ms, the wave front completely surrounds the inferior vena cava and reaches a repolarization level of -84.45 mV at the source. The front covers the septum of the left atrium and is located at the base of the right pulmonary veins at 282 ms. At 318 ms the front that travels through the PM in the lower part of the atrium reaches the free wall, generating a new front and, therefore, a reentry. The process is repeated indefinitely, exchanging different pectinate muscles. This reentry generates a new activation front that again extends through the same area, thus creating a new reentry caused basically by the anisotropic properties and their non-uniform distribution in the three-dimensional tissue [24].
Discussion. Our results characterize the dynamics of the propagation of nonlinear waves in the anatomical structure of the PM under electrical remodeling, finding that they constitute a structure that promotes the generation of reentry, due to the existence of preferential conduction pathways that are directly related to the crista terminalis, as was stated above [5]. Also, they provide a natural anchoring for the front of the reentrant wave. The area where a thickening of the atrial tissue is found forms a region prone to generate a breakdown of the wave front, facilitating the initiation and maintenance of reentries similar to those obtained in simulation studies [4]. On the other hand, our study shows that the appearance of ectopic activity at the base of the inferior vena cava can trigger atrial fibrillation; these results are consistent with previous experimental studies showing the case of an atrial tachycardia originating from within the inferior vena cava [25]. We have found a direct relationship between the ectopic activity at the base of the inferior vena cava and the generation of reentries in the PM. We are not aware of previous studies that have reported this relationship. It was demonstrated that both the cellular dynamics and the anatomy affect the initiation and maintenance of AF, a situation which will require a detailed understanding of the potential ionic mechanisms that determine the dynamics of the rotors because, without doubt, this dynamics underlies the electrical activity in the most lethal arrhythmias [26,27].

Figure 1. Model of the cellular action potential during stimulation at 1 Hz under physiological conditions (A) and electrical remodeling (B).
Figure 3. Ectopic focus at the base of the inferior vena cava under electrical remodeling conditions (potential in mV).
Figure 4. AP obtained in the tissue model for node 23618 under physiological conditions (A) and electrical remodeling (B).
Extended-Spectrum β-Lactamases Producing Escherichia coli Strains Monitored Over 4 Years in the University Hospital in Košice, Slovakia

Corresponding Author: Viera Lovayová, Department of Public Health and Hygiene, Faculty of Medicine, University of P.J. Safarik in Košice, Trieda SNP 1, 040 11 Košice, Slovakia. Phone: +421915932031. Email: viera.lovayova@upjs.sk

Abstract: The β-lactamases with an extended spectrum of activity (ESBL) are medically one of the most important groups of enzymes. The presented study provides identification and determination of the spectrum of resistance against different, clinically used antimicrobial drugs in clinical isolates of Escherichia coli. These isolates had their origin in different departments of the University Hospital L. Pasteur in Košice. The second goal was the detection of extended-spectrum β-lactamase (ESBL) production and the testing for AmpC-type cephalosporinases by several phenotypic tests in clinical isolates. We used both the microdilution method and the method with an active agent, respectively. Samples were tested for ESBL positivity with the use of the CLSI disk diffusion method. PCRs were performed with a series of primers designed for the detection of Ambler class A, B and C β-lactamase genes. A total of 307 strains of E. coli were investigated. Growing resistance of E. coli to selected antibiotics was present in 83.25% of clinical isolates. There were 85 positive isolates identified in the studied group, and the prevalence of the ESBL-positive strains of E. coli reached 27.78%. An E. coli strain was isolated with mutations in the promoter region of the chromosomal AmpC gene that are associated with overproduction of the relevant enzyme. We describe a complex ESBL epidemiology. The study revealed a high rate of ESBL-producing E. coli isolates. The blaTEM and blaSHV enzymes dominated in ESBL-positive E. coli isolates in the University Hospital L. Pasteur in Košice.

Introduction. One of the most serious problems in medicine today is bacterial infection, the role of which has increased dramatically in recent years. The prevalence of extended-spectrum β-lactamase (ESBL)-producing Enterobacteriaceae is increasing globally, and community-onset infections with ESBL-producing Escherichia coli are a major clinical concern in many countries (Laupland et al., 2008). Many genera of gram-negative bacteria possess a naturally occurring, chromosomally mediated β-lactamase. These enzymes are thought to have evolved from penicillin-binding proteins, with which they show some sequence homology. This development is likely due to the selective pressure exerted by β-lactam-producing soil organisms found in the environment (Ghuysen, 1991). The most common cause of resistance to expanded-spectrum cephalosporins in E. coli is the production of Extended-Spectrum β-Lactamases (ESBLs) (Paterson, 2006). In the past decade, CTX-M-type ESBLs have replaced TEM- and SHV-type ESBLs in Europe, Canada, Asia (Apisarnthanarak et al., 2007), South America (Bonnet, 2004) and North America (Moland et al., 2003) as the most common ESBL type in this species. In addition, the expansion of the active site that allows the increased activity against expanded-spectrum cephalosporins may also result in increased susceptibility of ESBLs to β-lactamase inhibitors (Jacoby and Medeiros, 1991). ESBLs are not active against cephamycins, and most strains expressing ESBLs are susceptible to cefoxitin and cefotetan.
However, it has been reported that ESBL-producing strains can become resistant to cephamycins due to the loss of an outer membrane porin protein (Martinez-Martinez et al., 1996). The aim of the paper is to identify and determine the spectrum of resistance against a variety of clinically used antimicrobial pharmaceuticals in clinical isolates of E. coli from different departments of the University Hospital L. Pasteur in Košice, Slovakia, together with the detection of extended-spectrum β-lactamase (ESBL) production and the testing of clinical isolates for ESBL- and AmpC-type cephalosporinases by several phenotypic and genotypic tests.

Materials and Methods. In the years 2009-2012, strains of Escherichia coli were isolated from clinical material (urine, swab from the throat, swab from the wound, swab from decubitus, a swab of the cervix, etc.), which was collected at various departments of the University Hospital L. Pasteur in Košice.

Bacterial Isolates. A total of 307 samples of the strains were examined. MALDI-TOF MS analysis was performed on a Microflex MALDI Biotyper (Bruker Daltonik) according to a standard sample preparation protocol of Bruker Daltonik (Freiwald and Sauer, 2009).

Susceptibility Testing. The isolates were tested for antimicrobial susceptibility using the disk diffusion method according to the Clinical and Laboratory Standards Institute (CLSI) guidelines (CLSI, 2009a). For all isolates with a Minimum Inhibitory Concentration (MIC) of more than 1 mg L−1 in at least one test of the third-generation cephalosporins (cefotaxime, ceftazidime or cefoperazone), a modified Double Disk Synergy Test (DDST) was carried out, combined with the CLSI method for the determination of ESBL production. Apart from that, the detection of carbapenemases by the modified Hodge test was used. The quantitative susceptibility (Minimum Inhibitory Concentration, MIC) of the Enterobacteriaceae isolates was determined using the MIDITECH system, an automated colorimetric test for antimicrobial susceptibility testing. The principle of identification is consistent with the standard microdilution method used to identify antibiotic susceptibility (CLSI, 2006). The test meets the quality requirements of the CLSI standard (Clinical and Laboratory Standards Institute) (CLSI, 2009b; 2011). The E. coli strain CNCTC-7374 was used as the control strain in the detection of these genes. Genes coding for ESBL enzymes (CTX-M, SHV-type and TEM-type) were studied by PCR and sequencing in all AMC-resistant E. coli isolates with phenotypes consistent with ESBL production on the basis of their resistance to the extended-spectrum cephalosporins whose activity was recovered in the presence of clavulanate (Oteo et al., 2006).

Sequencing of TEM, SHV and CTX-M Genes. The PCR blaSHV, blaTEM and blaCTX-M products were purified with PCR SureClean Plus (Applied Biosystems) and sequenced with a Genetic Analyzer 3600 (Life Technologies). The nucleotide sequences, deduced amino acid sequences and phylogenetic relationships were analyzed using the software package (SeqScape v2.7 and MicroSeq v2.2).

Isolation and Separation of Plasmid DNA. Plasmid DNA was extracted from both donors and transconjugants. A small-scale alkaline lysis method was used as described by Sambrook et al. (1982). Extracted plasmids were electrophoresed for 2 h in a horizontal 0.8% agarose gel with pH 8.0 TBE buffer (Plaziniski et al., 1985). The gels were stained with ethidium bromide (0.5 µg mL−1) for 20 min and bands were visualized by a UV transilluminator.
Lambda DNA digested with HindIII and EcoRI was used as the DNA standard marker.

Statistical Evaluation. For statistical comparison of the results, statistical methods of processing and evaluation were used, with the data processed into tables and a graph (MS Excel 2010, IBM SPSS Statistics 19).

Results and Discussion. Strains isolated from clinical material in the University Hospital L. Pasteur in Košice were compared in terms of susceptibility tests, the results of phenotypic tests and the tests for detection of genes coding for ESBL. Over four years, 307 isolates of the E. coli species had been isolated from precisely the same number of patients with invasive infections. The largest part of the collected material came from urine and wound swabs. 16.94% of E. coli strains were collected from the 18-40 age group, 28.76% of the strains came from the 41-60 age group and 54.49% of the E. coli-positive samples came from patients of 61 years of age and older. The median age of patients infected with an ESBL-producing isolate was 59 years (range 18-89 years), and a higher proportion of female patients (51.8%) was infected with ESBL-producing E. coli (Table 1). The nosocomial origin of the majority of the monitored bacteria discussed in this study is also supported by the age spectrum of the patients. The incidence of infection was recorded mainly among adults (83.25%) of 41 years of age and older. E. coli resistance to the selected antibiotics also increased with age. The difference is evident even between genders, with higher antibiotic resistance among women than men. Tendencies in resistance to selected antimicrobials in the monitored period (2009-2012) can be seen in Fig. 1. The results shown in Fig. 1 indicate that no clear tendency can be traced showing a deterioration of antibiotic resistance of the E. coli strains within the variety of sampling material used. Differences found in the resistance of E. coli strains to the antibiotics between the years are more random in nature and are most likely related to specific mass occurrences of certain E. coli strains with a greater degree of resistance in particular years and seasons. It can be assumed that the cause of these differences is the use of antibiotics in various clinical circumstances. Resistance to ampicillin was 59.23% on average in each monitored year. Gradual growth of E. coli strain resistance to ampicillin was found up to 2011 (63.29%), followed by a slight decrease to 57.25% in 2012. Resistance of E. coli strains to cephalosporins oscillates in the long term and was 39.82% in 2009, 24.29% in 2010 and 23.73% in 2011. In 2012 the percentage dropped to 21.28%, while resistance to the quinolone ciprofloxacin decreased quickly and consistently. In 2009, 51.85% of E. coli strains were recorded as resistant to ciprofloxacin, while in 2012 it was 34.35%. Resistance of the E. coli strains to gentamicin over the course of the four-year monitored period was four times lower than the resistance to aztreonam. Fluctuations were recorded in both cases. On the other hand, the E. coli strains were sensitive to carbapenems (meropenem) in all cases. In determining the production of ESBL by means of the double-disk diffusion test, 85 positive isolates were identified in the tested set. Based on the results, it is clear that the prevalence of the ESBL-positive E. coli strains reached 27.78%. When comparing the occurrence of ESBL-positive E.
coli strains across the genders, the results showed a 1.29 times (OR = 1.29) higher risk of occurrence of ESBL-positive E. coli strains in men compared to women. The occurrence of ESBL-positive E. coli strains based on the statistical gender evaluation was not significant at the 95% confidence level (p = 0.3682). Tests to detect ESBL might be misleadingly positive if the strain produces a narrow-spectrum β-lactamase (TEM-1, -2) simultaneously with AmpC. In this case, a test for AmpC with a Tris-EDTA disk (Munier et al., 2010) is done. According to Bradford (2001), there is a common occurrence of resistance against most of the β-lactam antibiotics related to the production of various types of β-lactamases with an extended spectrum of activity, hydrolysing penicillins, cephalosporins of the first, second and third generation, as well as monobactams. Out of a total number of 85 isolates with confirmed ESBL production, 60 (70.59%) E. coli clinical isolates were evaluated as multi-drug resistant. The ESBL producers' sensitivity to cephalosporins was relatively low (24.71%). The aminoglycoside gentamicin showed high efficiency (93.33%). For all the ESBL producers, sensitivity to the carbapenems (meropenem) was preserved (100%). These substances belong to the antibiotics of last choice, and therefore the incidence of antibiotic-resistant clinical isolates against them remains low (Pages et al., 2005). In the therapy of infections caused by ESBL producers, these substances are among the most effective medicines (Nordmann and Poirel, 2002). The most precise method used to determine the production of AmpC-type cephalosporinases was the double disk diffusion test, which detected three (1.7%) positive clinical isolates. Slightly higher results were also recorded by Sidjabat et al. (2009), who identified 2.8% of isolates as AmpC-producing. Only one sample produced ESBL and AmpC simultaneously. According to Coudron et al. (2000), AmpC-type cephalosporinases make bacteria resistant to a wide range of β-lactam antibiotics, including cefoxitin and other broad-spectrum antibiotics such as cephalosporins, aztreonam and combinations of β-lactams with β-lactamase inhibitors. By applying the new EUCAST/CLSI limits when interpreting the results of carbapenem susceptibility testing, all producers of clinically significant carbapenemases should be detected. Over the course of 4 years, the number of positive E. coli samples sensitive to the carbapenem imipenem was, on average, 18.85% of the total 307 samples. The highest number of positive E. coli samples sensitive to imipenem was recorded in 2011 (25.31%). Within the monitored period, the number of E. coli samples positive on the carbapenem ertapenem was, on average, 10.65%, lower than in the case of imipenem. In 2011 the highest sensitivity to ertapenem among the carbapenems compared (18.99%) was recorded. Ertapenem, the only representative of the carbapenems yet registered in Slovakia, is ineffective against Pseudomonas aeruginosa and the enterococci (Edwards et al., 2005). The PCR method identified the genes responsible for the resistance to β-lactam antibiotics. During the monitored period, at least one resistance gene was detected in 85 positive isolates out of the 307 resistant E. coli strains tested. These positive isolates were subjected to the PCR method to detect the presence of blaTEM, blaSHV, blaCTX-M and AmpC CIT genes coding for β-lactamases or extended-spectrum β-lactamases (ESBL).
Of the β-lactamases coded for by bla genes such as the TEM, SHV and CTX genes, 120 (39.1%) blaTEM genes and 42 (13.7%) blaSHV genes were detected among the 307 E. coli clinical isolates. In addition, 24 (7.8%) blaCTX-M genes were detected out of the total number of 307 clinical isolates. From the group of β-lactamases comprising AmpC enzymes hydrolysing β-lactam antibiotics, 31 (10.1%) AmpC CIT genes were confirmed among the clinical isolates of E. coli. The Belgian authors Bogaerts et al. (2009) documented in their paper that, out of a total number of 83 AmpC-positive clinical isolates of E. coli, increased expression of the chromosomally encoded AmpC gene was observed in 72 cases. For the remaining 11 E. coli isolates, in which enzyme overproduction had not been confirmed, the presence of plasmid-encoded AmpC β-lactamases was detected. In accordance with the trend of ESBL gene presence, an increase in blaTEM genes was recorded over the 4-year monitored period. In 2009, 8 strains (29.63%) with blaTEM genes were recorded. An increase to 29 strains (41.43%) was detected in 2010 and to 40 E. coli strains (50.63%) in 2011. In 2012 the number of strains with blaTEM genes was 43, i.e. 32.82% of the total number of 131 collected E. coli strains. A tendency to increase, with sporadic presence, was recorded for the blaSHV genes of the E. coli strains. The increase in AmpC CIT genes in clinical isolates was moderate over the 4-year monitored period, but in 2010 there was a marked increase in comparison with the other years, with 16 (22.86%) positive strains containing the AmpC CIT gene. In 2012, no incidence of AmpC CIT genes was reported in E. coli positive strains. AmpC gene expression is constitutive in E. coli under normal conditions; the β-lactamase is produced in very small quantities. In the analysis of genes encoding β-lactamase production in E. coli strains isolated from the individual sampling materials, it was found that the blaTEM gene was most prominent in urine samples (41 patients, 41.84%) and wound swabs (29 patients, 36.25%), from which the most frequently resistant E. coli strains were isolated. A slightly higher representation of the blaSHV gene was likewise found in urine samples (19 patients, 19.39%) as well as in wound swabs (10 patients, 12.8%). An overview of the genes encoding β-lactamase production in E. coli clinical samples from the different sampling materials is given in Table 2. The highest prevalence of blaTEM, blaSHV and AmpC CIT genes was present in samples taken from patients with digestive system diseases or urethral venereal diseases. The most resistant E. coli strains were isolated from Surgical Clinic patients and from patients in the Department of Internal Medicine, where most of the patients require antibiotic treatment; there is therefore an increased risk of the occurrence and development of antibiotic resistance. For patients in the surgical ward, the risk is higher in the age group of 61 years and over, with a value of 1.20 at the 95% level of confidence. The OR for the presence of ESBL-positive E. coli strains in men suffering from gastrointestinal disorders compared to women was 0.73 (95% CI), and the result based on Fisher's test was not significant. When assessing each department, patients at the Surgery Department had 0.44 times the risk of ESBL-positive E. coli strains compared with patients at the Internal Medicine Department.
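The odds ratios and Fisher's exact tests reported here and in the following paragraph can be reproduced from 2x2 contingency tables. A minimal sketch of such a calculation is given below; the counts used are hypothetical placeholders, since the underlying contingency tables are not reported in the text.

```python
# Illustrative sketch: odds ratio and Fisher's exact p-value from a 2x2 contingency table,
# as used for the gender/ward comparisons in this study.
# The counts below are hypothetical placeholders, not the study's actual data.
from scipy.stats import fisher_exact

# rows: men, women; columns: ESBL-positive, ESBL-negative
table = [[20, 90],    # hypothetical counts for men
         [18, 105]]   # hypothetical counts for women

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, Fisher's exact p = {p_value:.4f}")

# Manual odds-ratio check: (a*d)/(b*c)
a, b = table[0]
c, d = table[1]
print("manual OR =", (a * d) / (b * c))
```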
Furthermore, the results showed an increase in ESBL-positive strains depending on gender at the Surgery Department, where there was a 63% chance (OR = 0.48; 95% CI) of men contracting ESBL-positive strains and a 55% chance (OR = 0.39; 95% CI) for women staying in the Surgery Department. The level of risk in relation to the sampling material shows an increasing trend for women, with roughly two times higher risk of ESBL-positive E. coli strains in urine (OR = 2.489; 95% CI). Statistical analysis confirmed the risk of the occurrence of ESBL-positive strains in urine in females compared with the other sampling materials, with a level of significance of 95% (p < 0.05). From the anatomical point of view, women's urine may be more contaminated due to the surrounding flora, either rectal or vaginal. The results showed differences not only between genders, but also between different age groups. The level of risk in relation to age and diagnosis shows a tendency of increased risk of contraction with age; it is most prevalent in the 18-40 age group, where the proportion of the risk of ESBL-positive E. coli strains is double that of the other age groups (OR = 2.18; 95% CI). A comparison of the risk tendency in relation to age over the monitored years is shown in Fig. 2. Control of the spread of infectious diseases is probably one of the main tasks of contemporary medicine. It has been shown that there is an increase in the prevalence of drug resistance among E. coli isolates in our region and that the majority of the antibiotic resistances were due to the acquisition of plasmids carrying antibiotic-resistance genes. Conjugal transfer of plasmids has greatly contributed to the rapid spread of antibiotic resistance among E. coli isolates. Restriction of the use of antibiotics may play a role in decreasing the emergence of resistant bacterial strains.
Table 2. Genes encoding β-lactamase production in E. coli clinical isolates by sampling material (the data are in absolute terms).
Sampling material: blaTEM / blaSHV / blaCTX-M / AmpC CIT
Urine: 41 / 19 / 6 / 11
Swab from the wound: 29 / 10 / 3 / 7
Swab from cannula: 3 / 1 / 0 / 3
Swab: 23 / 7 / 6 / 6
Sputum: 5 / 1 / 2 / 2
Swab from the throat: 5 / 2 / 2 / 0
Other smear: 14 / 2 / 5 / 2
Conclusion In the collected clinical isolates of ESBL-producing Escherichia coli, a high prevalence of resistance against the majority of tested antimicrobial drugs was detected. The only antibiotics that were effective against all tested isolates were the carbapenems (meropenem). The results showed differences not only between the sexes, but also between different age groups. The risk in relation to age and diagnosis shows an upward trend with age, most pronounced in the 18-40 age group, where the proportion of the risk of ESBL-positive E. coli strains is double that of the other age groups (OR = 2.18; 95% CI). It can be said that the resistance of Enterobacteriaceae to β-lactam antibiotics is conditioned by the production of Extended-Spectrum β-Lactamases (ESBLs) and is a significant problem of contemporary medicine. The major reason is the possibility of treatment failure with penicillins and cephalosporins, including preparations with an extended spectrum of action, in the case of infections caused by bacteria producing these enzymes. This leads not only to further infectious complications and a prolonged period of hospitalization, but also represents a threat to life. The treatment of infections caused by resistant bacterial strains increases the direct and indirect costs of treatment several fold. Early detection of the resistance mechanism present and the subsequent deployment of adequate antibiotic therapy are therefore necessary.
2019-04-01T13:12:04.653Z
2016-06-16T00:00:00.000
{ "year": 2016, "sha1": "fe3485205b7edecff38c11408b218c9c5c76db89", "oa_license": "CCBY", "oa_url": "http://thescipub.com/pdf/10.3844/ajmsp.2016.32.38", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "7db342cbe7ce37fcb2581c496ac5bad5fc70a37a", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
7328711
pes2o/s2orc
v3-fos-license
The Circular, Elliptic Three Spin String from the SU(3) Spin Chain We complete the description of the circular, elliptic three spin string on AdS_5 x S^5 having three large angular momenta (J_1,J_2,J_3) on S^5 in the language of the integrable SU(3) spin chain. First, we recover the string solution directly from the spin chain sigma model and secondly, we identify the appropriate Bethe root configuration in the so far unexplored region of parameter space.
Introduction Semi-classical analysis of strings propagating on AdS_5 × S^5 has provided a novel approach to investigating the AdS/CFT correspondence, the prime example being the study of strings with several large angular momenta on S^5. For such strings the classical string energy has an analytical dependence on the parameter λ/L², where λ is the squared string tension and L the total angular momentum. In addition, quantum corrections to the string energy are suppressed as 1/L when L → ∞ [1,2]. The AdS/CFT correspondence [3] relates the energy of a IIB string state with given quantum numbers to the conformal dimension of a single trace operator of planar N = 4 super Yang-Mills theory with corresponding representation labels, mapping λ to the 't Hooft coupling and L to the number of constituent fields of the operator. This led to the suggestion that the result of the semi-classical string analysis should be reproduced on the gauge theory side by a perturbative calculation of the anomalous dimension followed by the limit L → ∞ with λ/L² fixed - a generalization of the BMN idea. The BMN idea [4] had triggered the development of efficient techniques based on the use of effective vertices for the perturbative calculation of anomalous dimensions of operators of N = 4 SYM [5]. These techniques were later substantially improved by focusing on the dilatation generator of the gauge theory [6,7], but their applicability was in practice limited to short operators or operators carrying at most one large representation label, such as BMN-like operators. This limitation was overcome with the discovery that the one loop dilatation generator of N = 4 SYM could be identified as the Hamiltonian of an integrable spin chain [8,9,10]. A connection between gauge theories and spin chains was observed earlier in the context of QCD [11] and recently further integrable structures in QCD were revealed [12]. In the spin chain formulation, considering large representation labels translates into going to the thermodynamical limit. When the number of large representation labels exceeds one, the spin chain Bethe equations [13] turn into a set of integral equations involving a number of continuum Bethe root densities. In certain cases corresponding to certain sub-sectors of N = 4 SYM it has been possible to solve these equations exactly. The simplest possible closed sub-sector of N = 4 SYM is the SU(2) sub-sector, consisting of operators composed of two out of the three complex scalar fields. In the SU(2) sub-sector at one loop level, assuming both of the possible representation labels to be large, two types of solutions of the Bethe equations were found and these were identified as the gauge theory duals of, respectively, a folded and a circular string in AdS_5 × S^5 having two large angular momenta on S^5 [14,15]. The SU(2) sector remains closed to all loop orders [7] and an extension of the spin chain picture including an appropriate Bethe ansatz was proposed in [16] to three loops, see also [17].
Furthermore, at one and two-loop order there exists a general proof of the equivalence between solutions of the Bethe equations in the thermodynamical limit and solutions of the string sigma model for large conserved charges [18]. Equivalence between semi-classical strings and long operators has also been proved at the level of actions at one as well as at two loop order by matching continuum sigma models derived from respectively the spin chain and the string theory [19,20]. The study of the relation between gauge theory operators and semi-classical strings is less developed in other sub-sectors of N = 4 SYM. The SU(3) sub-sector, consisting of operators built from the three complex scalars of N = 4 SYM is a natural place to start extending the analysis. At one-loop order the dilatation operator restricted to this sub-sector is identical to the Hamiltonian of an integrable SU(3) spin chain, the length L of the spin chain being given by the number of constituent fields of the operators considered. The SU(3) sub-sector is, however, only a closed sub-sector at this order. Beyond one loop one has to consider the larger SU(2|3) sub-sector in order to have a strictly closed set of operators [9,21]. Recently, arguments were given, though, that the SU(3) sector can be considered as closed in the thermodynamical limit [22]. Generic operators in the SU(3) sub-sector are expected to be dual to strings carrying three non-vanishing angular momenta (J 1 , J 2 , J 3 ) on S 5 . The first classical solution of the string sigma model describing such a three-spin situation was provided by Frolov and Tseytlin and had two out of the three spins identical, i.e. (J 1 , J 2 , J 3 ) = (J, J ′ , J ′ ) [1,2]. The corresponding Bethe root configuration of the SU(3) spin chain was identified in [23]. Also fluctuations around the classical solution has been understood from the spin chain perspective [24]. Later numerous other three-spin string solutions were found and classified [25,26]. Briefly stated, three spin string solutions can be classified as being either rational [26], elliptic or hyper-elliptic [25]. The case (J 1 , J 2 , J 3 ) = (J, J ′ , J ′ ) can be reached as a limiting case of the rational as well as of the elliptic situation. In reference [27] the Bethe root configuration corresponding to an elliptic three spin string of circular type was identified in the region of parameter space where J 2 ≈ J 3 , J 1 > J 2 , J 3 . In the present paper we identify the Bethe root configuration in the opposite limit, i.e. J 1 ≈ J 2 , J 3 < J 1 , J 2 . Furthermore, we show how to recover the circular, elliptic three spin string directly from the continuum SU(3) spin chain sigma model, derived in [28,29]. 2 The continuum SU (3) spin chain sigma model Imposing the thermodynamical limit L → ∞ and considering long wavelength excitations, the SU(3) spin chain can be described in terms of the following continuum sigma model action [28,29] Here the four variables θ, ψ, φ, ϕ are the four angles needed to specify a coherent SU(3) spin state and α is an additional overall phase 1 . The variable α is redundant as regards the dynamics of the spin chain but is useful for establishing the connection to the string sigma model where in particular it may play a role when it comes to constraints. The model in eqn. (1) has the conserved angular momenta where we notice that P α is simply the length of the spin chain. The angular variables in eqn. 
(1) are conveniently chosen so that starting from the string metric involving S 5 and the decoupled time coordinate t ds 2 = −dt 2 + dθ 2 + sin 2 θ dφ 2 3 + cos 2 θ dψ 2 + cos 2 ψ dφ 2 1 + sin 2 ψ dφ 2 2 , with the same sigma model is obtained once the appropriate large angular momentum limit is taken. One can thus make the following identification [28] The Hamiltonian of the model in eqn. (1) is [28] In order that the solutions of the sigma model capture the cyclicity of the trace appearing in the gauge theory operators all variables must be periodic in σ with period 2π and the momentum along the σ-direction should vanish. This momentum is given by For θ = φ = 0 we recover the sigma model describing the continuum limit of the integrable SU(2) spin chain. From this sigma model, one reproduces the two-spin folded and circular string solution whenψ = 0,φ = a and ϕ ′ = α ′ = 0 where a is a constant [19]. In reference [28] it was shown how to recover the circular, rational three spin string solutions of [26] from the continuum SU(3) spin chain sigma model. These solutions follow from the ansatz θ = θ 0 , ψ = ψ 0 with θ 0 and ψ 0 constant and ϕ ′ = m, φ ′ = n and α ′ = p with m, n and p integer. 2 The energy as a function of the spins reads and the condition P σ = 0 turns into In the present paper we are interested in elliptic three spin solutions. Such solutions follow from the ansatzθ =ψ = 0, ϕ ′ = φ ′ = α ′ = 0 andφ = a,φ = b where a and b are constants. With this ansatz the momentum along the σ direction vanishes (cf. eqn. (7)) and the equations of motion take the form One simple solution to the equations is to have ψ constant and b = 0. In this case sin θ = dnvσ. The solution which has our interest can be obtained by replacing this relation by the more general ansatz where γ, β and δ are constants. We then notice that the equations (10) and (11) simplify if δ = γ and if β and γ are related to each other as β 2 = 1 − γ 2 . In particular, the derivative of ψ takes a very simple form The first equation (10) and the second equation Furthermore, the requirement that the angles are invariant under a shift σ by 2π forces v = 2K/π. Making use of the relations (2) and (5) we can now determine the normalized spin where it has been used that v = 2K/π. Let us furthermore define 2ǫ = (J 1 − J 2 )/L. Then according to eqns. (2) and (5) we have Using that γ 2 = j 3 K/E we get a relation between ǫ and j 3 Finally, from eqn. (6) we obtain an expression for the energy as a function of the spins where k is supposed to be expressed via j 3 and ǫ using eqn. (19). The relations (19) and (20) are exactly the relations defining the circular, elliptic three spin string [25,30,27]. For later convenience we note that solving eqn. (19) for k in terms of j 3 to leading order in ǫ and inserting the solution in eqn. (20) we get 3 The discrete SU (3) spin chain. At the discrete level, finding an eigenstate and an eigenvalue of the SU(3) spin chain amounts to solving a set of algebraic equations for the Bethe roots. The Bethe roots come in two different types, reflecting the fact that the Lie algebra SU(3) has two simple roots. Denoting the number of roots of the two types as n 1 and n 2 and the roots themselves as {u 1,j } n 1 j=1 and {u 2,j } n 2 j=1 the Bethe equations read We shall assume that n 1 ≤ L 2 , n 2 ≤ n 1 2 . The SO(6) representation implied by this choice of Bethe roots is given by the Dynkin labels [n 1 − 2n 2 , L − 2n 1 + n 2 , n 1 ]. 
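For orientation, the one-loop Bethe equations of an SU(3) spin chain of length L with n_1 roots u_{1,j} and n_2 roots u_{2,j}, together with the resulting anomalous dimension and the trace-cyclicity (zero-momentum) constraint, are conventionally written as follows. This is the standard form found in the literature and may differ from the conventions of the present paper in minor details such as labelling or signs.

```latex
% Standard one-loop SU(3) Bethe equations (conventions may differ from the paper):
\left(\frac{u_{1,j}+\tfrac{i}{2}}{u_{1,j}-\tfrac{i}{2}}\right)^{L}
 = \prod_{k\neq j}^{n_1}\frac{u_{1,j}-u_{1,k}+i}{u_{1,j}-u_{1,k}-i}\,
   \prod_{k=1}^{n_2}\frac{u_{1,j}-u_{2,k}-\tfrac{i}{2}}{u_{1,j}-u_{2,k}+\tfrac{i}{2}},
\qquad
1 = \prod_{k\neq j}^{n_2}\frac{u_{2,j}-u_{2,k}+i}{u_{2,j}-u_{2,k}-i}\,
    \prod_{k=1}^{n_1}\frac{u_{2,j}-u_{1,k}-\tfrac{i}{2}}{u_{2,j}-u_{1,k}+\tfrac{i}{2}} .

% One-loop anomalous dimension and trace-cyclicity (zero-momentum) condition:
\gamma = \frac{\lambda}{8\pi^{2}}\sum_{j=1}^{n_1}\frac{1}{u_{1,j}^{2}+\tfrac14},
\qquad
\prod_{j=1}^{n_1}\frac{u_{1,j}+\tfrac{i}{2}}{u_{1,j}-\tfrac{i}{2}} = 1 .
```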
In terms of the spin quantum numbers, assuming J 1 ≥ J 2 ≥ J 3 this corresponds to [J 2 − J 3 , J 1 − J 2 , J 2 + J 3 ] or J 1 = L − n 1 , J 2 = n 1 − n 2 , J 3 = n 2 . A given solution of the Bethe equations gives rise to an eigenvalue of the spin chain Hamiltonian i.e. a one loop anomalous dimension which is The cyclicity of the trace is ensured by imposing the following constraint Let us define Then the spin quantum numbers are given by (J 1 , J 2 , J 3 ) = ((1 −α)L, (α −β)L, βL). In references [23,27] the above Bethe equations were studied under the assumption that the roots {u 2,j } n 2 j=1 were confined to an interval [−ic, ic] on the imaginary axis and the roots {u 1,j } n 1 j=1 were living on two arches C + and C − , each others mirror images with respect to zero, each symmetric around the real axis and not intersecting the imaginary axis. For c = 0 the corresponding gauge theory operator is the dual of the folded string with two large angular momenta on S 5 [14] and for c → ∞ the operator could be identified as the dual of the circular string with three large angular momenta, (J, J ′ , J ′ ), J > J ′ on S 5 [23]. At an intermediate value of c a critical line β = β crit (α) was located [27] and it was proposed that above the critical line the operator was the dual of the circular, elliptic three spin string of references [25,30]. The proposal was supported by a perturbative calculation in the region β ≈ α 2 , i.e. J 2 ≈ J 3 , J 1 > J 2 , J 3 . Now, it is known that the three spin string with angular momentum assignment (J ′ , J ′ , J) where J < J ′ is characterized by the Bethe roots {u 1,j } n 1 j=1 and {u 2,j } n 2 j=1 being all imaginary [23]. It is therefore natural to expect that something similar should characterize the circular, elliptic three spin string with J 1 ≈ J 2 , J 3 < J 1 , J 2 , i.e. 1 − 2α + β ≈ 0. Below, we shall show that this is indeed the case. The imaginary root solution We assume that the Bethe roots {u 1,j } n 1 j=1 are all imaginary and distributed symmetrically around zero. Furthermore, in an interval of length of O(L) around zero the roots are equidistant, placed at the half-integer imaginary numbers. This sub-set of the root configuration is denoted as the condensate. Outside the condensate the roots are more distant. This distribution of the roots {u 1,j } n 1 j=1 is the one characteristic of the two spin circular string [14]. It ensures that the condition (25) is fulfilled (provided n 1 is odd and L is even -a constraint which should not affect quantities extracted in the thermodynamical limit). The roots {u 2,j } n 2 j=1 are likewise assumed to be imaginary and symmetrically distributed around zero. They are furthermore assumed to be confined to the interval defined by the above mentioned condensate. The possibility of this configuration for the roots {u 2,j } n 2 j=1 was pointed out in [23]. Rewriting the roots as u 1,k = i q 1,k L and u 2,k = i q 2,k L, taking the logarithm of the Bethe equations and imposing the limit L → ∞ one is left with the following set of integral equations [23] where v < s and where ρ(q) and σ(q) are root densities describing respectively the continuum distribution of {q 2,k } n 2 k=1 and the subset of {q 1,k } n 1 k=1 which are positive and lie outside the condensate. The presence of the condensate, located at [−s, s], is reflected by the appearance of the logarithmic terms in the two equations. 
The densities are normalized as Furthermore, the anomalous dimension can be expressed as [14] γ In order to solve the coupled integral equations (27) and (28) we shall follow the strategy of [23], i.e. we express σ(q) in terms of ρ(q) by means of eqn. (27) and use the resulting expression to eliminate σ(q) from eqn. (28). First of all, let us introduce the resolvent corresponding to the root density σ(q) with W ± (q) = W ± (−q). The resolvent is analytic in the complex plane except for a cut along the interval [s, t]. We notice that σ(q) only enters the equation (28) via the function qW − (q) and the expression for γ via W − (0). Thus, we do not need to determine neither σ(q) nor W (q). We recognize the integral equation (27) as the saddle point equation of the O(n) model on a random lattice [31] for n = −2 with the terms on the right hand side playing the role of the derivative of the potential V (q), i.e. Therefore, we can immediately, following [32], write down a contour integral expression for W − (q) where C is a contour which encircles the cut [s, t] but not the other singularities of the integrand and where the endpoints s and t are determined by the boundary conditions Here, the latter relation is equivalent to the normalization condition (30). Inserting the expression (33) into (34), (35) and (36) we find with the boundary conditions Finally, the integral equation for ρ(q) takes the form 5 Perturbative solution for 1 − 2α + β ≈ 0 As mentioned earlier for 1 − 2α + β = 0 the gauge theory operator in question is known to be the dual of the circular three-spin string of [1,2] which has angular momenta (J ′ , J ′ , J), J < J ′ [23]. In the following we shall show that as we perturb away from 1 − 2α + β = 0, the operator becomes the gauge theory dual of the circular, elliptic three-spin string given by eqns. (19) and (20). Let us define and let us consider ǫ ≪ α, β. In terms of angular momenta we have (J 1 , J 2 , As pointed out in [23], for ǫ = 0, the boundary equation (36) is solved by setting t = ∞. For a small, non-zero value of ǫ consistency of the boundary equations requires that t ∼ 1 ǫ . Expanding the two boundary conditions to leading order in ǫ we get Working at leading order in ǫ, the first of these two equations gives us t as a function of ǫ and the second tells us how s (and v) depend on ǫ. In particular, we see that the correction to s and v must be O(ǫ 2 ). As we shall see we do not need to know the explicit form of these corrections. We furthermore notice that for symmetry reasons, corrections to the integral equation (41) and to the expression for γ, i.e. eqn. (40) can involve only even powers of ǫ. Now, expanding (41) for large t we find that the corrections of order ǫ 2 cancel out due to the boundary conditions and we are left with A similar cancellation of order ǫ 2 terms takes place in the expression for γ and we get 3 . The two equations (46) and (47) thus to the given order in ǫ take the same form as for t = ∞ and we can proceed using a solution strategy similar to the one employed in that case. The new element then consists in correctly taking into account the modified boundary conditions. Following [23] we introduce the new variables with dxρ(x) ≡ dξρ(ξ). In these variables the integral equation (46) takes the form where ν is related to v by v = 2sν 1 + ν 2 . 
(50) The integral equation (49) is of the type characteristic of the O(n) plaquette matrix model studied in [33] and an explicit expression for ρ(ξ) valid for any n can be written down by contour integral techniques. However, since we do not need all the information stored in ρ(ξ) and since the present case corresponds to n = 1 which is one of the so-called rational points of the O(n) model [31,34,35] we shall proceed along the lines of [23], using a method developed in [35]. We introduce a resolvent F (z) by This object is analytic in the complex plane except for a cut along the interval [−ν, ν] and it has the following asymptotic behaviour as z → ∞ The constant p plays a very central role since γ can be expressed as Using the definition of F (z) one can now write the boundary conditions (29), (30) and (35) as Furthermore, by using analyticity arguments as in [35,23] one can show that the function ω(z), defined by fulfills the following cubic equation where Now, by considering the first derivative of eqn. (59) we get the following expression for p in terms of s, t, ǫ and β Furthermore, from the second derivative of eqn. (59) we get an expression for t as a function of ǫ and β t = 1 16πǫ (1 − β)(1 + 3β). Finally, inserting eqns. (62) and (63) in the expression (54) for γ we see that the s-dependence very neatly cancels out and we are left with where we have replaced β by j 3 , cf. eqn. (43). This is precisely the result expected for the circular, elliptic three spin string, cf. eqn. (21). It would of course be interesting to reproduce the equations (19) and (20) from an exact solution of eqn. (41). Conclusion The continuum SU(3) spin chain sigma model in principle contains all information about the O(λ ′ ) classical energy of strings with three angular momenta (J 1 , J 2 , J 3 ) on S 5 in the limit L = J 1 + J 2 + J 3 → ∞, λ ′ = λ L 2 fixed. Its most general equations of motion are, however, rather involved, cf. [28,29]. It is therefore of interest to put forward possible simplifying ansätze which lead to non trivial solutions. Previously, it was shown how to recover from the spin chain sigma model the simple rational three spin string of [26]. In the present paper we have presented an ansatz which leads to the circular, elliptic three spin string of [25,30,27]. The most generic three spin string solutions are parametrized in terms of hyper-elliptic integrals. It would be interesting to understand how these solutions are encoded in the spin chain sigma model. Furthermore, it might be that the continuum spin chain sigma model could reveal solutions overlooked in the string theory analysis so far. In the language of the discrete SU(3) spin chain a given three spin string solution is characterized by a certain Bethe root configuration. For the circular, elliptic three spin string with angular momentum assignment (J 1 , J 2 , J 3 ) = ((1 − α)L, (α − β)L, βL) it follows from the analysis of [27] that the Bethe root configuration has to be of a different type for β < β c (α) and β > β c (α) where β = β c (α) denotes a line of critical points in parameter space. In [27] the appropriate Bethe root configuration for β > β c (α) was identified. We propose that the imaginary root configuration of section 4 constitutes the appropriate Bethe root configuration for β < β c (α). Clearly the expression (64) for the one loop anomalous dimension as a function of the spins supports this proposal. In particular, we thus expect that the imaginary root solution should cease to exist for β → (β c (α)) − . 
Certainly, it would be interesting to understand the mechanism behind this phenomenon in the spirit of the understanding of the singular limit β → (β c (α)) + [27]. Likewise it would be interesting to determine the exact location of the critical line. This would require an exact solution of the integral equation (41) or of the corresponding integral equation of [27]. We note in passing that neither for the rational three spin string, nor for the hyper-elliptic one the relevant Bethe root configuration is known. A recently initiated line of investigation, relying on the observation that the SU(3) sub-sector may be considered as closed in the thermodynamical limit, is the generalization of the SU(3) spin chain picture to include higher gauge theory loop orders [22]. A spin chain description going beyond one loop order was proposed for the SU(2) sub-sector in [16]. The corresponding Bethe ansatz implied that inclusion of higher loop orders required only a rather simple modification of the one loop integral equation. In [22] it was assumed that inclusion of higher loop corrections in the SU(3) sub-sector lead to a similar modification of the one loop Bethe equations and the evaluation of higher loop corrections was carried out for the gauge theory dual of a circular three spin string with angular momentum assignment (J, J ′ , J ′ ), J ′ < J. An exact solution of either of the earlier mentioned integral equations would allow an extension of the analysis to the case of the more general circular, elliptic three spin string. The study of higher loop corrections has so far revealed a disagreement between semi-classical string analysis and perturbative gauge theory at three loop order for all cases treated, i.e. for folded and circular two spin strings [16], a certain class of so-called pulsating strings as well as for the above mentioned special three spin string [22] 4 . A possible explanation for this discrepancy was proposed in [16] and elaborated in [17]. Whereas the analysis of the circular, elliptic three spin string is not expected to change the picture as regards the presence of the discrepancy it will provide additional data that might help in ultimately resolving it.
2014-10-01T00:00:00.000Z
2004-06-21T00:00:00.000
{ "year": 2004, "sha1": "b615d6af4652d7ddb1c7c0d628565e19e40c4210", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.physletb.2004.06.099", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "b615d6af4652d7ddb1c7c0d628565e19e40c4210", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
255878014
pes2o/s2orc
v3-fos-license
Cyclic Buckling Characterization of an Individual MWCNT Using Quantitative In Situ TEM Axial Compression Carbon nanotubes (CNTs) are extremely conductive and flexible, making them ideal for applications such as flexible electronics and nanoelectromechanical systems. However, in order to properly apply them in such devices, their long-term durability must be assessed. In the present study, we demonstrate cyclic loading of a thick MWCNT (175 nm) under axial compression, observed in situ under a transmission electron microscope (TEM). The force was applied via controlled displacement, while real-time TEM videos of the deformation process were gathered to produce the morphological data. The in situ observations combined with force-displacement curves revealed the onset of buckling instabilities, and the elastic limits of the tube were assessed. The MWCNT retained its original structure even after 68 loading-unloading cycles, despite observed clues of structural distortions. The stiffness of the tube, calculated after each loading cycle, was in the 0.15 to 0.28 TPa range, comparable to the literature, which further validates the measurement set-up. These in situ tests demonstrate the resilience of CNTs to fatigue, which can be correlated with the CNTs' structure. Such correlations can help in tailoring CNTs' properties to specific applications. The high in-plane stiffness of CNTs is derived from the rigidity of the C-C bonds, and the tensile Young's modulus can reach up to 1 TPa [6,7]. Under compression, however, the mechanical stiffness of the tube can only hold the structure upright until it reaches a point of instability, where the bending stiffness decreases dramatically and the tube geometry changes abruptly. This phenomenon is named Euler buckling, and was previously demonstrated in the literature for both single-wall carbon nanotubes (SWCNTs) and multiwall carbon nanotubes (MWCNTs) [16][17][18][19][20]. The consensus is that MWCNTs, with their multi-layer structure allowing for more efficient stress distribution, are more suitable for applications requiring resistance to compressive stresses [18,21]. According to Euler's column formula, P_critical = π²EI/(LK)² (where E is the Young's modulus, I is the moment of inertia, proportional to the radius squared, L is the length of the column, and K is the boundary condition coefficient), the critical load for buckling P_critical sustainable by thicker CNTs, with a larger moment of inertia, is higher [22]. Another phenomenon unique to MWCNTs under compression is the rippling effect, which is manifested in the appearance of a wave-like distortion in the inner arc of a bent nanotube, and in a significant reduction in the bending stiffness [23][24][25][26]. Rippled and buckled phases also result in a decrease in electrical conductivity [27,28]. Therefore, the durability and flexibility of CNTs against mechanical compression are the primary criteria to evaluate the performance of CNT-based nanoelectromechanical devices and their lifetime reliability. Although the rippling and buckling behavior of CNTs has been extensively modeled [24,[29][30][31][32][33][34], experimental measurements remain very challenging due to the small dimensions, and characterizations are thus incomplete [18]. An effective method to evaluate the nonlinear response of a single CNT against compression is to conduct quantitative mechanical testing using in situ transmission electron microscopy (TEM).
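To get a feel for the magnitudes implied by Euler's column formula quoted above, the critical load can be estimated for a thick MWCNT. The sketch below is purely illustrative and is not part of the study; it uses the tube geometry reported later in this paper (outer diameter 175 nm, inner diameter 65 nm, length 4.8 µm), the 'fixed-free' end condition (K = 2) adopted there, and an assumed Young's modulus of 0.2 TPa.

```python
# Illustrative estimate (not from the study): Euler critical buckling load
# P_critical = pi^2 * E * I / (K * L)^2 for a hollow cylindrical column,
# using the tube geometry reported later in the paper and an assumed modulus.
import math

E   = 0.2e12    # assumed Young's modulus, Pa (0.2 TPa; within the range reported later)
d_o = 175e-9    # outer diameter, m
d_i = 65e-9     # inner diameter, m
L   = 4.8e-6    # tube length, m
K   = 2.0       # 'fixed-free' end condition used later in the paper

I = math.pi * (d_o**4 - d_i**4) / 64       # area moment of inertia of a hollow cylinder
P_cr = math.pi**2 * E * I / (K * L)**2     # Euler critical load

print(f"I = {I:.3e} m^4, P_critical ~ {P_cr * 1e6:.2f} uN")
# Prints a critical load on the order of 1 uN, comparable to the buckling loads measured below.
```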
The greatest advantage of the in situ technique is that the entire operation can be seen live, while mechanical data is collected simultaneously. By correlating the load-displacement (F-D) data to the onset of buckling and rippling, one can obtain useful insights into the compression-induced instabilities of CNTs. Studies conducted in the last couple of decades investigated buckling and fracture modes of MWCNTs in TEM, employing an AFM apparatus [19,20,25,35] or a piezoelectrically driven nanoindenter [16,17]. These experiments revealed that CNTs exhibit reversible deformation in response to repeated compression [16,17,20] and that the buckling mode is dependent on their aspect ratio [19,25]. Furthermore, they reported large variations in the Young's modulus values for different types of MWCNTs, ranging from 0.2 to 1.075 TPa [16,17,19,20,25,[35][36][37]. It has been shown that the mechanical properties of CNTs are highly affected by geometry (e.g., diameter, length, alignment), crystalline structure, and concentration of defects [36,37]. Such factors, in turn, depend on the chosen growth method. For example, the arc-discharge method produces highly crystalline CNTs with a modulus of 1 TPa, compared to 0.1-0.4 TPa for chemical vapor deposition (CVD)grown CNTs [25]. Another example is the bamboo-like CNT (bCNT), whose structure differs from that of regular CNTs by containing separate hollow compartments and bamboo knots that grow along its axis. bCNTs can be grown using different synthesis methods (see review [38]) and their structure typically contains high defect densities, resulting in a modulus smaller than 0.2 TPa [35]. One strategy to improve the quality of MWCNTs produced using CVD processes is via a post-growth treatment such as thermal annealing [39]. Heat treatment in an inert environment at 2200 to 2800 • C eliminates defects in the microstructure, improving the bending modulus to 1 TPa. In order to assess and possibly even enhance the performance of nanoelectromechanical devices using CNTs, long-term mechanical durability must also be evaluated. Some studies have investigated the durability to compression of an individual thin MWCNT (outer diameters ranging from 13 to 38 nm [16,17,19,20,25,35]). However, durability to compression over a large number of loading cycles has not been assessed. In addition, there is a lack of studies on thick MWCNTs (greater than 100 nm). In this study, we present extensive compression cyclic loading through an in situ TEM method, in order to evaluate the mechanical durability and flexibility of an individual thick bamboo-like MWCNT (outer diameter of 175 nm). We analyze the compression-induced buckling instabilities, and we elucidate the morphological and structural distortions that might lead to failure. This is a crucial requirement for flexible electronic devices and NEMS, as they need to maintain their performance over multiple iterations. Materials and Methods A reoccurring problem in common experimental setups for individual CNT compression [16,19,20,35] is that the tubes are not, in fact, straight nor aligned on the substrate, which leads to bending rather than compression. Furthermore, the CNTs are often deposited on a substrate, rather than grown, resulting in poor adhesion. Herein, plasma-enhanced chemical vapor deposition (PECVD) was used to grow straight and vertically aligned MWCNTs (VACNTs) directly onto a special-purpose substrate, exhibiting great adhesion. 
The wedge substrate consisted of a long and tall ridge geometry, on which thin films can be deposited and nanoparticles can be grown. The VACNTs were grown on a silicon-wedge substrate using DC/RF PECVD (Black Magic 2, Aixtron, Germany) [40], at 700 °C using a nickel catalyst (3 nm) and C2H2:NH3 20:80 sccm feedstock, for 1 h (the full process of preparing the wedge and growing the VACNT can be found in the Supplementary Materials). In this synthesis, ammonia is used as a hydrogen-rich reducing agent [41]. The PECVD growth method was chosen, as the electric field in the plasma aligns the nanotubes as they grow, resulting in straight, vertical tubes - crucial for an accurate mechanical compression measurement. In addition, the fabrication process does not require focused ion beam (FIB), thus protecting the material from gallium ion irradiation damage, enabling analysis of their intrinsic mechanical behavior [42,43]. The substrate was fixed on the TEM holder, perpendicular to the electron beam (Figure 1a). The in situ nanomechanical test used a flat nanoindenter to compress an individual MWCNT and displayed the process of deformation in the TEM simultaneously (Figure 1). The experiments were performed in an FEI Tecnai 20 TEM (Hillsboro, OR, USA) with a Bruker (Hysitron, Minneapolis, MN, USA) PicoIndenter 95 (PI-95) TEM holder. The TEM was operated at 200 keV with a field-emission electron source, in bright-field mode. Medium magnification was utilized to have a complete view of the tube and to reduce the radiation and knockout damage. The PicoIndenter used a 3D piezoelectric drive which allowed for precise positioning inside the TEM perpendicular to the target MWCNT (Figure 1c). Important to note is that the tube was slightly tilted, but occupied the same plane as the loading axis, as evidenced by both occupying the same focal plane. Additionally, the adjacent tubes (Figure 1c) did not participate in the loading process as they were in a different z-plane to the targeted tube (slightly under-focused, as can be seen by the fringes around it).
The force was applied along the axis of the tube, in displacement-controlled mode, at a displacement rate of 5 nm/s, and included 200 datapoints per second of force and normal displacement. Videos were recorded using digital capture of a Gatan One View camera (Gatan Inc., Pleasanton, CA, USA) at 10 frames per second and 4K resolution. The load-displacement data and the real-time videos were recorded and synchronized using the frame grabber feature of TriboScan software (Hysitron, Minneapolis, MN, USA). The measured displacement was validated using synchronized imaging. The mechanical properties of the MWCNT were studied using repeated deformation cycles in compression, and individual compression tests, each with an increased maximum displacement to assess the limits of the tube's flexibility. The critical force for buckling (Pcr) was determined from the F-D curves by calculating the crossing point between two different slopes of the linear fits in the pre- and post-buckling regions, using OriginLab software (Origin Pro 2016, OriginLab Corporation, Northampton, MA, USA). The pre-buckling region was identified by a linear rise in force and the post-buckling region was identified by the slope change indicating a softening transition. The error range of the Pcr values was drawn from the linear-fit calculations, as given by Origin. The Pcr values were further validated using the synchronized video of the F-D curve and the real-time imaging, by identifying the point at which lateral deflection began (see supplementary Figure S5). The Pcr of each cycle was then put into Euler's column formula to calculate the Young's modulus. Based on error propagation of the Pcr errors, the error range of the modulus was calculated for each cycle. After the eight initial cycles, the tube underwent further cyclic loading, compressing first to a predetermined maximum displacement and then retracting each time to half its value. In total, 68 cycles of loading and unloading were carried out on a single tube. Post-fracture TEM images were taken for further analysis of the resulting crack.
Results and Discussion The chosen tube had an outer diameter of 175 nm and an inner diameter of 65 nm, which corresponds to a wall thickness of 55 nm (Figure 1d). The interwall spacing was 0.34 nm and the number of graphene walls was approximately 161. The tube length was 4.8 µm. Observing the morphology, we can see a tubular structure with bamboo-like compartments and amorphous carbon deposits around the tube (see supplementary Figure S4).
The presence of the catalyst particle at the top of the tube suggests tip-growth [38]. The in situ TEM experiments clearly showed the axial buckling process of the individual MWCNT and demonstrated its elastic behavior throughout numerous cycles. Load-displacement (F-D) curves of the eight cycles are presented in Figure 3a. The experiments began with the diamond tip approaching the tube from the bottom, while slightly negative forces were observed due to electrostatic attraction forces. As the tube underwent compression, the force initially increased linearly with the displacement, indicating the elastic region. When the critical load for buckling was reached, the linear rise transitioned to a force plateau (Figure 3b-d, Pcr in the F-D curves). At that point the tube failed by buckling and started to deflect laterally (bend), as can be seen in the TEM stills (Figure 3(b2,c2,d2)). The Pcr values of cycles 1, 4, and 8 are 1.16 µN, 1.38 µN, and 0.75 µN, respectively.
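The Pcr values quoted above are extracted from the F-D curves as the crossing point of linear fits to the pre- and post-buckling regions, as described in the Methods. A minimal sketch of this procedure on synthetic data is shown below; it is illustrative only and is not the authors' OriginLab analysis.

```python
# Minimal sketch (illustrative, not the authors' analysis): estimate the critical
# buckling force as the intersection of linear fits to the pre- and post-buckling
# parts of a force-displacement curve. Synthetic data are used here.
import numpy as np

# Synthetic F-D curve: a stiff linear rise, then a softer post-buckling slope.
d_pre  = np.linspace(0, 300, 60)                   # displacement, nm
f_pre  = 0.004 * d_pre                             # pre-buckling force, uN
d_post = np.linspace(300, 800, 100)
f_post = 1.2 + 0.0005 * (d_post - 300)             # plateau-like post-buckling region

# Fit each regime with a straight line f = a*d + b.
a1, b1 = np.polyfit(d_pre,  f_pre,  1)
a2, b2 = np.polyfit(d_post, f_post, 1)

# The intersection of the two fits gives the buckling point (d*, P_cr).
d_star = (b2 - b1) / (a1 - a2)
p_cr   = a1 * d_star + b1
print(f"estimated buckling at d = {d_star:.0f} nm, P_cr = {p_cr:.2f} uN")
```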
In the post-buckling regime, the F-D curves exhibited a reduction in stiffness (decreased slope) due to lower rigidity in bending [24,25,44], and the system deviated from the ideal linearly elastic response of the pre-buckling regime. This nonlinear response in thick MWCNTs is dictated by the nonlinear mechanics of rippling, exhibiting a bending modulus that increases with deformation [24]. Additionally, 'pop-in' events (sudden displacement bursts) are observed in the F-D curves (Figure 3b-d, marked with arrows) and can be directly correlated to the onset of deformation seen in the TEM (deflection of the tube, Figure S5). These 'pop-in' events indicate the movement of inherent defects in the tube and the onset of buckling instabilities. Such defects may include atomic vacancies, Stone-Wales defects, and amorphous regions [45][46][47]. Drops in load can also be attributed to a failure of the inner graphene walls, as will be discussed later. The F-D curves also show a reverse hysteresis behavior, where the force of the unloading segment is, at times, higher than the loading one. This is ascribed to the backward movement of the transducer at large displacements, which results in higher stiffness [48]. Remarkably, even under such large-scale deformation, the compressed nanotube did not suffer catastrophic damage and recovered to its original geometry once the load was released.
The structural resilience of CNTs can be attributed to the large in-plane rigidity of graphene sheets, as well as their low rigidity in bending [24,44], and their hollow, large aspect-ratio geometry. In addition, CVD-grown MWCNTs contain many types of defects [47], which may allow them to accommodate high strains [16,25,47,49]. The defects in the tube are arranged in an incoherent fashion such that they separate under tensile stress, and slide reversibly under compressive stress [49]. The MWCNT did not fail after the initial eight cycles and was further analyzed in cyclic loading to investigate its durability to mechanical compression. The tube was subjected to 60 more cycles, totaling 68 cycles. Figure 4a presents the final cyclic compression loading experiment, consisting of 20 deformation cycles to a displacement of 800 nm and then retracting to 400 nm. The loading curve shows an increase in maximum value in the initial four cycles, then reaching a steady state where the maximum and minimum loads do not change between the cycles. It may suggest transformation of the inherent defects in the tube by their rearrangement in the initial stages of the experiment. Once the defects are rearranged, the tube is mechanically stable and, thus, a steady state is reached. We then compared the maximum load values in the steady state to the maximum load values in the initial eight cycles of the previous experiment (Figure 2b). The comparison is shown in Figure 4b, with the blue circles representing the cyclic loading steady state and the black rectangles representing the initial eight cycles from Figure 2b. The graphs show that even after numerous loading cycles, the force feedback is almost identical. The experiments demonstrate the ability of the tube to withstand repeated deformations, as well as its elastic response under compression. This is a crucial requirement for flexible electronic devices, as they need to maintain their performance over multiple iterations.
To validate this experimental methodology, we characterized the stiffness properties of the tube after each cycle and compared those to Young's moduli of MWCNTs found in the literature. By applying Euler's column formula for the buckling load, P_critical = π²EI/L_e², we can estimate the Young's modulus, E [17,22]. In this formula, I is the cross-sectional moment of inertia of the MWCNT, defined as I = π(d_o⁴ − d_i⁴)/64, where d_o and d_i are the outer and inner diameters of the tube, respectively. The effective length of the nanotube is expressed by L_e = KL, where L is the actual length and K is the effective-length factor accounting for the end conditions. Based on the end conditions of the experiment, K values and, thus, the calculated modulus can vary. We therefore determined that the most suitable end condition for our experiment is a nanocolumn fixed at the top and free at the base ('fixed-free'), as the tube is strongly attached to the silicon substrate but is free to slide on the indenter's surface. These conditions set K = 2 and P_critical = π²EI/4L². The modulus values for each cycle, as estimated through the critical buckling load seen in the F-D curves (Pcr in Figure 3(b1,c1,d1)), are presented in Figure 5b. Notably, the modulus values for the MWCNT calculated with the fixed-free condition, ranging from 0.15 to 0.28 TPa, are in good agreement with the lower range of the values reported in the literature [16,17,19,20,25,35,50], thus validating the experimental methodology. It is expected that the CVD-grown CNT exhibits lower stiffness than tubes grown with the arc-discharge method, due to a lower degree of crystallinity [25,51]. Furthermore, the compartmentalized bamboo-like structure of our tube introduces discontinuities into the layer structure, which act as sites of mechanical weakening [35].
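As a quick consistency check, the fixed-free Euler formula can be inverted, E = 4·Pcr·L²/(π²·I), and evaluated for the Pcr values quoted above for cycles 1, 4 and 8. The sketch below is illustrative and is not the authors' analysis, but it reproduces the quoted modulus range from the reported geometry.

```python
# Quick consistency check (illustrative): invert P_critical = pi^2*E*I/(4*L^2)
# to get E = 4*P_cr*L^2/(pi^2*I) for the P_cr values quoted for cycles 1, 4 and 8.
import math

d_o, d_i, L = 175e-9, 65e-9, 4.8e-6                 # tube geometry, m
I = math.pi * (d_o**4 - d_i**4) / 64                # hollow-cylinder moment of inertia

for cycle, p_cr in [(1, 1.16e-6), (4, 1.38e-6), (8, 0.75e-6)]:   # critical loads, N
    E = 4 * p_cr * L**2 / (math.pi**2 * I)
    print(f"cycle {cycle}: E ~ {E / 1e12:.2f} TPa")
# Gives roughly 0.24, 0.29 and 0.16 TPa, consistent (up to rounding) with the
# 0.15 to 0.28 TPa range quoted above.
```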
[Figure 5 caption, recovered fragment: The results reveal two competing effects which influenced the tube's stiffness: interwall sp³ bridging (blue arrows) and structural distortion (red arrows). The mechanical stiffening may be explained by the creation of interwall sp³ bonding via electron irradiation, which strengthens the bonding of interior walls. The decrease in modulus is attributed to structural distortion caused by the out-of-plane and weaker sp³ hybridization deforming the layers themselves, making them more susceptible to plastic deformation at higher stresses. (c) The bent nanotube underwent repeatable tensile and compressive forces, marked by the arrows, which in time distorted its inner structure and reduced its stiffness.]
However, these values might be a slight under-estimation, considering two more factors. First, the tube was not perfectly straight at the beginning of the experiment. Second, the catalyst particle was not in full contact with the indenter and rotated towards the direction of bending. As a result of these two factors, the mechanical testing was not entirely uniaxial, but rather included bending moments that reduced the ability of the tube to carry load, reducing the critical force. The true modulus should therefore be higher, closer to the mid-range stated in the literature.
In Figure 5 we see a dynamic interplay between two competing effects related to continuous exposure to electron beam irradiation and mechanical forces. During experiments, one intermittently dominated over the other, causing opposite outcomes. The electron irradiation induced sp³ interwall bridging between the graphene layers (Figure 5a). This increased the interlayer stiffness while reducing the in-plane stiffness of the discrete graphene layers. Additionally, the applied tension and compression forces induced local plastic deformations within the tube, as is shown in the fracture analysis (Figure 6).
The observed stiffening effect, between cycles 1 to 4 and 5 to 7 (Figure 5b, indicated by the blue arrows), is a possible outcome of irradiation annealing caused by the high-voltage electron beam used in the TEM, as the energy used (200 keV) is known to damage graphitic structures [52]. The irradiation leads to a movement of defects and the creation of sp³ interwall bridging [23,53-55] via heating the sample through energy absorption, and rupturing bonds through electron excitations. Furthermore, high-energy particles can transfer momentum to nuclei, displacing atoms to the interstitial lattice site ('knock-on'). Such conditions can switch the C-C bond configuration, from in-plane sp² into a tetrahedral sp³ C-C bond, between two adjacent layers (Figure 5a), and vice versa. Normally, the van-der-Waals interactions between the walls of the CNT allow them to slide against each other, but as irradiation increases the formation of covalent bonds between the walls, this substantially increases the shear resistance to sliding, which increases the shear modulus [53,56,57]. The presence of sp³ bonds also improves the buckling resistance to axial loading by facilitating mechanical participation of the inner walls in the MWCNT, allowing the transfer of load to the inner shells [53,55,57]. Simulations have shown that the modulus can be increased by 25% by sp³ bridging [58], which fits well with the 19% increase in modulus value observed between cycles 1 to 4 (Figure 5b). Additionally, a high density of sp³ bonding results in high stress transfer to the neighboring walls, allowing them to share the load and causing a near-planar fracture [55], as we observed in our experiment (Figure 6d). Considering the length of exposure time to the electron beam during the video recording of the in situ mechanical tests (17 min), our findings are consistent with the electron irradiation stiffening mechanism.
In contrast, after cycles 4 and 7 we observed a sharp decrease in the Young's modulus (Figure 5b, red arrows). This loss of stiffness can be explained by a few phenomena. Firstly, the effects of electron irradiation on C-C hybridization, where in-plane sp² double bonds are converted into weaker out-of-plane sp³ single bonds, result in a reduction of the in-plane stiffness of the discrete graphene layers. Furthermore, the sp³ configuration distorts the in-plane layered structure and may act as a weakening point. Secondly, this inner distortion was enhanced by repeated deformation, while the external shape of the tube recovered after the load release. The graphitic layers of the nanotube experienced significant stretching on the outer side of the bending region and significant compression on the inner side (Figure 5c). At later cycles (cycles 4 to 8), the larger displacements increased the mechanical loads, which were then more likely to cause plastic deformations within the tube, as can be seen in Figure 6. The onset of structural failure can be identified by sudden drops in the force in the mechanical data (Figure 3(b1,c1), marked by black arrows), which may imply that graphene sheets failed in the tensile region.
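Such load drops can be picked out of a force-displacement record automatically. The snippet below is only a generic sketch on synthetic data: the array names, the 0.05 µN drop threshold, and the fabricated events are illustrative assumptions, not the authors' analysis pipeline.

```python
import numpy as np

def find_force_drops(displacement, force, min_drop=0.05e-6):
    """Flag sudden load drops ('pop-in' events) in a force-displacement curve.

    A drop is flagged where the force decreases by more than min_drop (here
    0.05 uN, an arbitrary threshold) between consecutive samples while the
    displacement is still increasing.
    """
    d_force = np.diff(force)
    d_disp = np.diff(displacement)
    return np.where((d_force < -min_drop) & (d_disp > 0))[0]

# Hypothetical loading ramp with two artificial drops, standing in for the
# measured F-D curve of a single compression cycle.
disp = np.linspace(0.0, 1000e-9, 500)        # m
force = 1.2e-6 * disp / disp[-1]             # linear ramp up to 1.2 uN
force[200:] -= 0.08e-6                       # first artificial pop-in event
force[350:] -= 0.10e-6                       # second artificial event

for i in find_force_drops(disp, force):
    print(f"pop-in near {disp[i] * 1e9:.0f} nm, load ≈ {force[i] * 1e6:.2f} µN")
```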
The correlating TEM images indeed reveal points of structural distortion in the tensile region and in the inner columnar void of the tube (Figure 6b, marked with the red arrows), which started to be seen in cycle 4. In cycle 8 (Figure 6c), ruptures were visible in the tensile region and wave contours were seen on the compressive side. These structural distortions can explain the large drops in the modulus values between cycles 4 and 5 and between cycles 7 and 8. Since we used medium magnification in the TEM to view the entire tube, we could not directly observe the broken graphene layers nor the ripples. However, the structural distortion was clearly visible ( Figure 6). After 68 cycles of repeated elastic deformation, the nanotube failed due to fatigue (Figure 6d). The breaking process of the MWCNT was too rapid for the TEM camera to capture. However, its mechanism can possibly be inferred from the mechanical and imaging results. The fatigue of the nanotube occurred due to the distortion of atomic layers in the bent region, manifesting as ripples of the layers on the compressed side, and the breakage of the layers on the tensile side. The resulting fracture mode corresponds to catastrophic brittle failure due to sp 3 crosslinking, as explained earlier. Comparison between the images of cycle 8 and post-fracture indicated the location of fracture evolution (Figure 6c,d, 'point of fracture'). The images point to the middle of the tube, where the most stress was concentrated, suggesting a collapse of the columnar void along the central axis. Conclusions To summarize, the long-term durability of an individual thick MWCNT to axial compression was investigated utilizing an in situ TEM compression test. The in situ TEM investigation demonstrated the ability of the thick tube to withstand repeated deformation, as well as its elastic response under compression. The tube buckled, yet did not develop more severe plastic deformation such as kinks, allowing it to return to its original geometry. This is a crucial requirement for flexible electronic devices, as they need to maintain their performance over multiple iterations. The buckling behavior was characterized via F-D curves and morphological images, with the tube exhibiting an average critical force for buckling of 1.15 µN. The Young's modulus was calculated after each cycle using Euler's column formula, to a range of 0.15 to 0.28 TPa. Based on the moduli values observed, we concluded that the tube structure was distorted by electron beam irradiation and local plastic deformations. Furthermore, the morphological analysis showed the fracture evolution through cyclic loading, pointing to ruptures in the tensile region and a collapse in the inner void. Even though these structural distortions were evident after cycle 4, the tube endured many more loading-unloading cycles (68 in total). In this case, it may indicate that even after the fracture of some layers, there were enough layers in the thick tube to compensate for this loss and still provide adequate compression resistance. Additionally, the cross-links between the inner and outer walls of the MWCNT provided enhancement of interwall strain transfer which also contributed to higher durability. It is important to note that the electron irradiation in the TEM altered the intrinsic properties of the tube, which is a downside of the methodology. 
It is possible to minimize the irradiation effects without compromising the imaging conditions, by reducing the acceleration voltages of the TEM (e.g., 80 kV) and using aberration correctors. The structure-property relations, studied using in situ TEM, can be used to control CNTs' properties effectively. By employing the acquired knowledge and controlling the synthesis process, it is possible to produce certain types of CNTs that have desirable properties for each application. Our study exemplifies that when high stiffness is not a priority, but durability and flexibility are needed, thick PECVD-grown tubes can be a better choice. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nano13020301/s1, Figure S1: Fabrication process of wedge substrates; Figure S2: SEM images of silicon wedge substrates; Figure S3: SEM images of the grown VACNTs; Figure S4: TEM image of the targeted VACNT prior to mechanical testing; Figure S5: Validation of the critical force for buckling. Video S1: In situ TEM compression experiment, cycle 8 (1000 nm maximum displacement). References [40,59] are cited in the supplementary materials. Data Availability Statement: The data presented in this study are available upon request from the corresponding author.
2023-01-17T19:10:11.579Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "3a5b669d9dac4a72ab15c5ade17dea6732c821f6", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-4991/13/2/301/pdf?version=1673419899", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e7f4af9b04f52e55206334f433699add2b57164b", "s2fieldsofstudy": [ "Materials Science", "Engineering", "Physics" ], "extfieldsofstudy": [ "Medicine" ] }
81986100
pes2o/s2orc
v3-fos-license
Eosinophils in anti-neutrophil cytoplasmic antibody associated vasculitis Background Anti-neutrophil cytoplasmic antibodies associated vasculitides (AAV) are characterized by autoimmune small vessel inflammation. Eosinophils are multifunctional cells with both pro-inflammatory and immunoregulatory properties. Tissue activated eosinophils secrete cyto- and chemokines and form extracellular traps (EETs), they release free granules and produce reactive oxygen species. The role of eosinophils is well established in eosinophilic granulomatosis with polyangiitis (EGPA) but very little is known about their role in granulomatosis with polyangiitis (GPA) and microscopic polyangiitis (MPA). Methods The expression of surface markers CD11c, CD11b, CD16, CD35, CD62L, CD64, CD88, Siglec-8 and CD193 and reactive oxygen species production by peripheral blood eosinophils were studied using flow cytometry. Fluorescence microscopy was used to visualize the release of eosinophil extracellular DNA traps (EETs). 98 GPA and MPA patients and 121 healthy controls were included in the study. Results Both GPA and MPA patients had decreased frequency of eosinophils in peripheral blood compared with healthy controls (p < 0.0001), which could not solely be explained by corticosteroid treatment. The patient’s eosinophils showed increased surface expression of the Fc receptors CD16 (p < 0.0001) and CD64 (p = 0.0035) as well as CCR3 (CD193) (p = 0.0022). Decreased expression was found of the complement receptors CD35 (p = 0.0022), CD88 (p < 0,0001) as well as CD11c (p < 0,0001), CD11b (p = 0.0061) and Siglec-8 (p = 0,0015). Moreover, GPA and MPA eosinophils, showed decreased capacity to produce ROS (p < 0.0001). ANCA stimulation of eosinophils from GPA and MPA patients after C5a priming enhanced EETosis (p = 0,0088). Conclusions The percentage of eosinophils were decreased in peripheral blood in GPA and MPA patients and showed altered surface marker expression and function. The enhanced EETosis after ANCA stimulation, suggests that eosinophil can contribute to the autoantibody driven inflammatory process. Electronic supplementary material The online version of this article (10.1186/s41927-019-0059-6) contains supplementary material, which is available to authorized users. The antigens of the autoantibodies are proteinase 3 and myeloperoxidase, that primarily are found in primary granules in neutrophils and peroxidase positive lysosomes in monocytes. Both monocytes and neutrophils are frequently found around the inflamed vessel walls and are thought to be the main effector cells. Primed neutrophils in AAV patients can be stimulated by ANCA through binding to membrane bound PR3 or MPO and in response to this they produce reactive oxygen species (ROS), de-granulates and form neutrophil extracellular traps (NETs). However, it is not known why ANCA is formed or what primes PMNs in vivo. Since eosinophils express PR3 [2] and eosinophil peroxidase (high structural homology to MPO) on their surface, ANCA might bind and activate also this cell type. Eosinophils have for a long time been considered as nonspecific cytotoxic cells playing a role in parasitic infections and allergy. However, during the last couple of years a more complex view has evolved, stating eosinophils as multifunctional cells with for instance immunoregulatory properties [3]. At baseline, eosinophils are present in several tissues, as bone marrow, adipose tissue and gastrointestinal tract [4]. 
They contain more than 30 different pre-synthesized and stored proteins in their cytoplasmic granule and express receptors for proinflammatory cytokines, chemokines, lipid mediators, complement factors and immunoglobulins [5,6]. Eosinophils could also selectively suppress Th1 cells via a constitutive expression of indoleamine 2,3-dioxygenase, an enzyme important for tryptophan catabolism [7] and they have proven to be essential for the survival of plasma cells by supplying necessary cytokines into the plasma cell niches [8]. Extracellular DNA traps formation was first described in neutrophils but is now considered as a common mechanism for the innate immune system. A main difference between NETs and EETs (Eosinophil Extracellular DNA Traps) is the association of intact granules to DNA in EETs [9]. Moreover, viable eosinophils can form EETs by releasing mitochondrial DNA and granule derived proteins [10]. We hypothesize that eosinophils are hyper reactive in situations of chronic inflammation, e.g. AAV, and can rapidly be recruited to tissues by for example innate lymphoid cell in response to microbes or other triggers. The activated eosinophil will accentuate the inflammation by releasing cyto-and chemokines, ROS production, deposition of free granules and EET formation. Moreover, neutrophils, monocytes and cells within the adaptive immune system will be recruited and give rise to additional tissue damage and pathology. EGPA is characterized by eosinophilia and necrotizing eosinophil inflammation but very little is known about the role of eosinophils in GPA and MPA. Hence, the aims of this study are to characterize eosinophils from GPA and MPA patients, in regard to function and activation in order to understand their role in the disease. Patients and controls Non-dialysis dependent patients with MPA and GPA were recruited to the study at their scheduled visit at the outpatient clinics of Nephrology or Rheumatology, Skåne University Hospital, Lund, Sweden. The diagnosis was determined using to the method described by Watts et al. [11]. 98 GPA and MPA patients were included in the study: 74 patients with GPA, and 24 patients with MPA. Birmingham Vasculitis Activity Score version 3 (BVAS3) [12] was used to assess disease activity. Clinical characteristics are presented in Table 1. ANCA levels and specificity was performed with ELISA at Wieslab AB, Malmö, Sweden. One hundred twenty-one controls, ages 21 to 72, were collected from healthy blood donors at the Blood center in Lund and from healthy volunteers. The Regional Ethics Board in Lund, Sweden (LU) approved the study and written informed consent was obtained from all participants. Flow cytometry The expression of selected surface markers on phagocytes was analyzed using flow cytometry. Briefly, heparinized peripheral blood (4-6 mL) was lysed, by adding 45 mL 0.84% ammonium chloride and incubated for 10 min. The lysed blood was centrifuged for 10 min at 250 g. The cells were washed once with PBS and after centrifugation resuspended in 100 μl PBS with 0.5% BSA. The cells were divided into two tubes and incubated for 20 min with antibody mix 1 and 2 respectively (Mix 1: CD10-PECy7, CD14-V500, CD16-APC-H7, CD88-PE, CD49d-APC, CD62L-FITC, CD11b-v450, CD11C-PerCPCy5.5 and Mix 2: CD10-PECy7, CD14-PerCPCy5.5, CD16-APC-H7, CD35-FITC, CD49D-APC, CD64-v450, CD193-v500, Siglec-8-PE. All antibodies were from BD Biosciences except CD11c and Siglec-8 that were purchased from BioLegend. 
The cells were then washed by adding 3 mL PBS and centrifuged for 3 min at 250 g and resuspended in 25 μL PBS and analyzed using a FACSCanto II and the DIVA software (Becton Dickinson Biosciences, New York, USA). Doublet cells were excluded by plotting FSC height against FSC area and the single cells were divided into monocytes, lymphocytes and granulocytes based on FSC and SSC plots. CD14 positive cells were excluded from the granulocytes and eosinophils were selected as CD16 − / CD10 − and CD49d + , Siglec-8 + and/or CD193 + cells. 30,000 events in the granulocyte gate were recorded. Surface expression were measured as mean fluorescence intensity (MFI) by calculating the geographic mean for the respective peak. Oxidative burst Production of reactive oxygen species (ROS) in peripheral blood eosinophils was investigated using the PhagoBurst assay, (Glycotope Biotechnology, GmBH, Germany), according to the manufacturer's protocol, after ex vivo activation with phorbol-12-myristate-13-acetate (PMA) or opsonized E. coli. After the fixation of the cells, they were labelled by a Siglec-8-PE antibody (BioLegend) and analyzed by flow cytometry. At least 15.000 PMN were collected based on forward and side scatter properties. Eosinophils were defined as Siglec-8 + granulocytes (Additional file 1). After fixation and lysing (according to the manufacturer's protocol), it was possible to select eosinophils also by their forward and side scatter properties. No patient with ROS deficiency was observed. Isolation of blood eosinophils and neutrophils and detection of extracellular DNA traps Eosinophils and neutrophils were purified from peripheral blood collected in EDTA tubes for ETosis experiments. The cells were isolated using Histopaque 1119 (Sigma) followed by Percoll (GE Healthcare) gradient following the manufacturers protocols [13]. The eosinophils were thereafter separated from the granulocytes using MACS Eosinophil Isolation Kit (Miltenyi Biotech) according to manufacturer's instruction. The cells that were removed during the eosinophil purification step were regarded as neutrophils. Viability at isolation was determined with Trypan blue. Cytospin preparations stained with May-Grünwald Giemsa were used to determine the cell purity. (Additional file 2A and B). The purified neutrophils and eosinophils were used to measure NET/EET production after stimulation with PBS, TNFα, C5a and PMA (see below). Isolation and stimulation of blood eosinophils for EET analysis after ANCA stimulation To investigate if ANCA can stimulate eosinophils to produce EET we purified eosinophils using the MACSX-press® Eosinophil Isolation kit (Miltenyi Biotech) according to the manufacturer's instructions. By using this kit, the eosinophils can be isolated without the density centrifugation steps and resulted in less pre-activation of the eosinophils. Briefly, erythrocytes are aggregated and sedimented, while non-targeted cells are removed by immunomagnetic depletion. The eosinophils remain in the supernatant and are carefully collected into another tube. Viability at isolation was determined with Trypan blue. Cytospin preparations stained with May-Grünwald Giemsa were used to determine the cell purity. (Additional file 2C). The eosinophils from five healthy controls and five GPA or MPA patients were seeded on coverslips as described above. As the number of eosinophils were limited, especially in samples taken from patients, we chose C5a over TNFα to prime the eosinophils prior to IgG stimulation. 
The eosinophils were primed for 15 min with PBS or C5a (150 ng/mL) at 37°C and 5% CO 2 . After priming, the eosinophils were stimulated by addition of purified IgG from ANCA positive patients (250μg/mL) (one MPO-ANCA and one PR3-ANCA), purified IgG from a healthy control (250μg/mL), PBS (negative control) and PMA (positive control -PMA was not used in combination with C5a) and incubated for 180 min at 37°C and 5% CO 2 . After the incubation the glass were treated as described above. Statistical analysis All statistical analyses were performed on GraphPad Prism 8.0 software (GraphPad Software, San Diego, CA, USA). Correlations were determined by Spearman's correlation test and linear regression analysis. Mann-Whitney U-test was used for two group comparisons. Kruskal-Wallis and Dunn's multiple comparisons test for three or more groups. When comparing NET and EET from the same donor Wicoxon matched-pair signed rank test was used. All p-values were considered significant at p < 0.05. AAV patients have decreased numbers of eosinophils Ninety-eight patients with GPA (n = 74, 76%) and MPA (n = 24, 24%) were included in the study and clinical and demographic data at time of sampling are shown in Table 1. Most patients were PR3-ANCA positive (n = 62, 63%), 30 patients were MPO-ANCA positive (30%) and five were ANCA negative. ANCA specificity from one patient was missing and one was double positive. The majority of the patients were in remission (n = 76). Twenty-two patients had an active disease, with a median activity score according to the BVAS3 of 5 (range 1 to 16). The frequencies of neutrophils, eosinophils and basophils were measured in peripheral blood of patients and controls by flow cytometry. In line with our previous results [14], the patients have increased percentage of PMNs in peripheral blood (p < 0.0001), represented by an increased percentage of neutrophils (p = 0.0037, Fig. 1) In addition we observed a decreased percentage of eosinophils compared with healthy controls (p < 0.0001). No difference was observed in the percentage of basophils (p = 0.06). Since there were no white blood cell counts available for the healthy controls, we are not able to compare absolute numbers. Corticosteroid treatment has been reported to affect the number of eosinophils in peripheral blood and we saw a weak but significant correlation between corticosteroid treatment, prednisone in all of our cases, and the absolute number of eosinophils (r 2 = 0.088, p = 0.008) (Fig. 2a). However, the decreased levels of eosinophils could not completely be explained by corticosteroid treatment as no correlation was found in the group with active disease (r 2 = 0.077, p = 0.27) (Fig. 2b) and no significant difference of the percentage of eosinophils was observed between patients with active disease with or without corticosteroid treatment (p = 0.5728). When dividing all patients into 3 groups based on corticosteroid dose (0, > 0 to 5, and > 5 mg/day) the only difference found was between the group without corticosteroids and the group with a daily dose above 5 mg (Fig. 2c). To see if disease activity or diagnosis influenced the frequencies the patients were divided into active disease (BVAS3 ≥ 1) or inactive disease (BVAS3 = 0). Patients with active disease had significantly lower frequencies of both eosinophils and basophils compared to patients with inactive disease and HBD, but no difference were found between GPA and MPA patients (Additional file 3). 
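The group comparisons listed in the Statistical analysis subsection above can also be sketched with standard SciPy calls, as an informal illustration. The arrays below are hypothetical placeholders for measured eosinophil percentages and EET/NET counts; the actual analysis in this study was performed in GraphPad Prism, and no multiple-comparison correction (Dunn's test) is shown here.

```python
import numpy as np
from scipy import stats

# Hypothetical example values standing in for measured data.
healthy = np.array([2.1, 3.4, 2.8, 4.0, 3.1, 2.6])        # % eosinophils, controls
active = np.array([0.8, 1.1, 0.6, 1.4, 0.9, 1.0])          # % eosinophils, active AAV
remission = np.array([1.9, 2.2, 1.5, 2.7, 2.0, 1.8])        # % eosinophils, remission
prednisone = np.array([0.0, 2.5, 5.0, 7.5, 10.0, 15.0])     # mg/day, same patients

# Two-group comparison (e.g., all patients vs healthy controls).
u_stat, p_two_groups = stats.mannwhitneyu(
    np.concatenate([active, remission]), healthy, alternative="two-sided")

# Three or more groups (active vs remission vs healthy controls).
h_stat, p_kruskal = stats.kruskal(active, remission, healthy)

# Correlation between corticosteroid dose and eosinophil frequency.
rho, p_corr = stats.spearmanr(prednisone, active)

# Paired comparison of EETs vs NETs from the same donors (hypothetical %).
eet_pct = np.array([22.0, 61.0, 45.0, 30.0, 55.0, 40.0, 35.0])
net_pct = np.array([5.0, 0.4, 2.0, 1.0, 3.0, 2.5, 1.5])
w_stat, p_paired = stats.wilcoxon(eet_pct, net_pct)

print(p_two_groups, p_kruskal, p_corr, p_paired)
```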
The patients have increased surface expression of the low affinity FcγRIII (CD16, p < 0.0001), the high affinity FcγRI (CD64, p = 0.0035) and the eosinophil eotaxin receptor CCR3 (CD193, p = 0.0002), and decreased expression of the complement receptors CD35 (p = 0.0022), CD88 (p < 0.0001) as well as CD11b (p = 0.0061), CD11c c When the all patients were divided into three groups based on their prednisone dose, we only found a significant difference between the 0 mg/day group and the one with > 5 mg/day (p = 0.0007). Kruskal-Wallis test and Dunn's multiple comparisons test was used to calculate the level of significance between the three groups (p < 0.0001) and Siglec-8 (p = 0.0015) (Fig. 3). No differences were observed in the surface levels of CD62L. When the patients were divided into active (BVAS3 ≥ 1) or inactive disease (BVAS3 = 0) the patients with active disease expressed lower levels of CD88 (p = 0.0033), CD11c (p = 0.0020) and CD62L (p < 0.0001) compared with patients in remission (Fig. 4). No difference was seen between the GPA and MPA groups (Additional file 4). Decreased ROS production in eosinophils from AAV patients ROS production is one of the major effector functions of phagocytes in their anti-microbial defense. Moreover, ROS play an important regulatory role of both the innate and adaptive immune system [15,16]. Previous studies have shown that neutrophils from AAV patients have decreased intracellular ROS production. In this study we can show that this is true also for eosinophils. Peripheral whole blood from patients (n = 98, Table 1) and controls (n = 121) were stimulated with PMA (protein kinase C activator) or opsonized E.coli and intracellular ROS production was measured by flow cytometry using the Phagoburst kit. Eosinophils from GPA and MPA patients showed a significantly decreased ROS production both when stimulated with PMA (p < 0.0001) and E.coli (p < 0.0001, Fig. 5) compared with healthy controls. Fig. 3 The level of surface expression on eosinophils of a CD16, b CD64, c CD35, d CD193, e CD62L, f CD88, g Siglec-8, h CD11b and i CD11c was measured in healthy blood donors (HBD) and anti-neutrophil cytoplasmic antibodies associated vasculitides (AAV) patients using flow cytometry and reported as geometric mean fluorescence intensity (MFI). Two-sided Mann-Whitney test was used to calculate the level of significance. The horizontal lines indicate the median values. CD16, CD64 and CD193 were found to be elevated, CD35, CD88, Siglec8, CD11b and CD11c were found to be downregulated in AAV patients Eosinophils form extracellular traps more easily than neutrophils Extracellular traps released from neutrophils and eosinophils are thought to play an important role in the defense against pathogens and in inflammatory processes. Eosinophils and neutrophils were purified from peripheral blood of healthy donors (n = 7) and stimulated with PBS (negative control) TNFα, C5a or PMA (positive control). Eosinophils were more prone to release extracellular traps than neutrophils, when stimulated with TNFα (p = 0.0006) or C5a (p < 0.0001) but no significant difference was observed using PMA stimulation (p = 0.1970) ( Fig. 6a and b). Eosinophils also produced more EETs in the negative control where they were incubated with PBS alone. ANCA stimulation enhanced extracellular trap formation in eosinophils from patients AAV are characterized by ANCA autoantibodies that binds mainly to PR3 or MPA. 
Eosinophils express PR3 [2] and eosinophil peroxidase (high structural homology to myeloperoxidase) on their surface, indicating that ANCA could bind and stimulate also this cell type. To investigate if ANCA could affect the release of EETs, eosinophils were purified using the MACSExpress eosinophil kit from five AAV patients (3 GPA and 2 MPA) and healthy controls (n = 5). The eosinophils were primed with either PBS or C5a followed by incubation with PBS (negative control), PMA (positive control), IgG from healthy controls or IgG from ANCA patients (one PR3-ANCA and one MPO-ANCA). Stimulation of eosinophils from Fig. 4 The level of surface expression on eosinophils of a CD16, b CD64, c CD35, d CD193, e CD62L, f CD88, g Siglec-8, h CD11b and i CD11c was measured in healthy blood donors (HBD) and compared to both active and inactive anti-neutrophil cytoplasmic antibodies associated vasculitides (AAV) patients using flow cytometry and reported as geometric mean fluorescence intensity (MFI). Kruskal-Wallis test and Dunn's multiple comparisons test was used to calculate the level of significance between the three groups. Values are reported as median ± IQR the patients with C5a followed by ANCA gave a higher level of EETosis (p = 0,0088, Fig. 6c). This was not seen among healthy controls. EETs from these stimulations are shown in Fig. 6d. Eosinophil purification using the MACSExpress kit generated less pre-activated eosinophils from healthy blood donors as the percentage of EETs in the negative control (PBS incubation) was much lower (22% versus 5.5% in Fig. 6a and c respectively). Nonetheless, stimulation of eosinophils with C5a alone generated more EETs (61% in Fig. 6a and 24% in Fig. 6c) compared to NETs in experiments done on neutrophils (0.4% in Fig. 6a). Discussion Eosinophils are multifunctional cells with both pro-inflammatory and immune-regulatory properties that have been suggested to regulate local tissue immunity, repair and remodeling [17]. Moreover, eosinophils seem to be important for B cell activation and the homing and survival of plasma cells [18]. There is some evidence that eosinophils have a protective role in autoimmunity e.g. Finlay et al. showed that activated eosinophils conferred protection against experimental allergic encephalomyelitis [19]. Here we show that the percentage of eosinophils were decreased in peripheral blood in GPA and MPA patients and showed altered surface marker expression and function. Moreover, ANCA stimulation enhanced EETosis in GPA and MPA patients a phenomenon not seen in healthy controls. The altered surface expression and low ROS production in eosinophils is true for both patients in remission and with active disease and no difference were seen between GPA and MPA patients. The results are in line with our previous findings for neutrophils in AAV patients [14]. Glucocorticosteroid treatment has been reported to affect the number of eosinophils in peripheral blood, by inducing apoptosis and inhibiting pro-survival signals by cytokines e.g. IL-5 and GM-CSF [20,21]. In line with earlier observation we saw a weak correlation between corticosteroid treatment and the number of eosinophils in blood (r 2 = 0.088, p = 0.008). However, the decreased levels of eosinophils could not completely be explained by corticosteroid treatment as no significant correlation was observed between patients with active disease with or without corticosteroid treatment (p = 0.27). 
Two patients were newly diagnosed and both of them had low levels of eosinophils already before treatment was started. Even if the pro-apoptotic effect of steroids on eosinophils is widely accepted, there are reports suggesting that it depends on the activation status of the eosinophils [22]. This could be one explanation to why we could not detect any correlation in patients with active disease. Moreover, dexamethasone has been reported to increase eosinophil differentiation and proliferation from CD34 + hematopoietic stem cells [23]. Why GPA and MPA patients have decreased frequencies of eosinophils in peripheral blood is not known. A possible explanation could be that activated eosinophils are recruited to sites of inflammation and play an active role in the pathogenesis of AAV. Mononuclear cells have been reported to be the most frequent cell type in interstitial infiltration of ANCA-positive renal biopsies from patients with suspected systemic vasculitis, however eosinophils and neutrophils were found in 20 and 27% of the cases respectively [24]. An important feature of eosinophils is the binding of antibodies and of complement proteins via specific receptors, events that will induce degranulation and antibody mediated cellular cytoxicity and eventually killing of invading microbes or host cells [25]. Here we showed that AAV patients have increased surface expression of the low affinity FcγRIII (CD16) as well as the high affinity FcγRI (CD64), a phenomenon described earlier in patients with allergic conditions [26]. In several conditions, including EGPA, an increased expression CD69 and CD11b and shedding of CD62L are markers of activated eosinophils [25,27]. In this study we found less CD62L on eosinophils from patients with active disease but lower levels of CD11b. Recently, Lingblom et al. described a naturally occurring CD16 + eosinophil subset in peripheral blood, able to suppress T cell proliferation [28]. In line with our observation, these CD16 + eosinophils also showed decreased expression of some surface markers including CD88 and CD11a, whereas surface molecules associated with T cells suppression e.g. PD-L1, CD54 were up-regulated. The importance of this eosinophil subset in AAV needs further investigation. NET depositions have been observed in kidney biopsies from AAV patients. We and others, have previously shown that ANCA could stimulate NET formation by neutrophils. Moreover, sera from AAV patients seem to degrade NETs more slowly than healthy controls [29]. Interestingly, Kraaij et al. recently showed that ANCAs did not influence NET formation by neutrophils from a healthy donor, but autologous serum induced increased formation of NETs in neutrophils from three out five studied patients [30], suggesting that it is not ANCA per se that induces enhanced NETs formation rather the combination of ANCA and neutrophils from AAV patients. In line with these results, we show that addition of ANCA increased EETs formation in eosinophils from AAV patients. How ANCA induce formation of extra cellular traps in neutrophils and eosinophils is not known. The high affinity IgG receptor FcγRI (CD64) has been shown to be induced on blood eosinophils by IFN-γ [27] and human FcγRI expressed in mice was sufficient to trigger autoimmune arthritis [31]. The high CD64 expression on eosinophils, that is further increased in AAV patients, may explain the increased EET formation in eosinophils from AAV patients compared to healthy controls when stimulated with ANCA. 
An increased influx and following EET formation in the inflamed tissue could partly contribute to the lower levels of circulating eosinophils. EETosis in the tissue could also mean deposition of intact eosinophilic granules (Additional file 5) in the tissue further increasing the inflammation [9]. The presence of free eosinophil granules in tissue from AAV patients still needs to be proven. ROS are major effector molecules in inflammatory processes and tightly linked to EET formation [9,10]. (See figure on previous page.) Fig. 6 a Eosinophils were more prone to produce extracellular traps compared to neutrophils from the same donor (n = 7 healthy blood donors). Eosinophils and neutrophils were purified from peripheral blood using density centrifugation followed by MACS eosinophil isolation kit. The cells that bound to the magnetic beads are regarded as neutrophils and the ones that did not as eosinophils (Additional file 2). The cells were seeded on poly L-lysin coated cover slides, stimulated with PBS, PMA, TNF or C5a for 3 h and stained using DAPI and anti-nucleosome antibodies. The percentage of cells that had released neutrophil/eosinophil extra cellular traps (NETs/EETs) were counted under the microscope. The Wilcoxon matched pairs signed rank test was used to calculate the level of significance. b Shows a representative immunofluorescence experiment: DAPI is shown in blue and nucleosomes in red. Eosinophils are shown in i, iii and v and neutrophils in ii, iv and vi. In i and ii the cells were incubated with PBS, in iii and iv with C5a and in v and vi with PMA for 3 h. Cells stimulated with TNFα did not differ visually from the cells stimulated with C5a. c Eosinophils from anti-neutrophil cytoplasmic antibodies associated vasculitides (AAV) patients produced more EETs after stimulation with C5a and ANCA IgG compared to stimulation with C5a and IgG purified from healthy blood donors (HBD). This difference was not seen in eosinophils purified from HBD. Eosinophils were purified from peripheral blood using Eosinophils MACS Xpress kit and put on poly L-lysin coated cover slides. They were then primed for 15 min with C5a or PBS followed by the addition of purified IgG from ANCA positive patients or IgG from HBD and a 3 h incubation. They were thereafter stained using DAPI and anti-nucleosome antibodies. The percentage of cells that had undergone NET/EETosis were counted under the microscope. The two-sided Mann-Whitney test was used to calculate the level of significance. d Shows eosinophils from a GPA patient purified with MACSXpress kit that were incubated for 3 h at 37°C and 5% CO 2. In i) the cells were incubated with PBS, ii) with PMA, iii) cells that were primed with C5a for 15 min followed by purified IgG from a healthy blood donor and iv) cells that were primed with C5a for 15 min followed by purified IgG from an ANCA positive patient. DNA is visualized with DAPI (blue) and nucleosomes visualized with an anti-nucleosome antibody labelled with Alexafluor 594 (red). The EETs do generally not contain many granular proteins as the granules are released intact during EETosis (Additional file 5) During the last decade, an increasing amount of data support an immune modulating role for monocyte and granulocyte produced ROS [15,[32][33][34], as ROS can affect redox sensitive pathways [35]. We have previously reported that PMN and monocytes from AAV patients had decreased capacity to phagocytose and impaired ROS production, which was associated with disease activity [14]. 
In this study we show that AAV eosinophils have decreased capacity to produce ROS compared with healthy controls. Even though eosinophils seem to produce sufficient amounts of ROS to form EETs, it may have an impact on immune regulation. Conclusions The frequency of eosinophils was decreased in peripheral blood in AAV patients and they showed altered surface marker expression and function. They also produce less ROS when stimulated with opsonized E.Coli or PMA. Moreover, eosinophils produce EETs when stimulated with TNFα or C5a and addition of ANCA further increase the number of EETotic cells, suggesting that eosinophil can contribute to the autoantibody driven inflammatory process in AAV. Additional files Additional file 1: Production of reactive oxygen species (ROS) in eosinophils. Cell aggregates were excluded based on forward scatter height and area properties, then granulocytes were gated based on their forward and side scatter. Eosinophils (in red) were defined as Siglec-8 + granulocytes. It was possible to select eosinophils also by their forward and side scatter characteristics. Intracellular production of ROS was measured as the geometric median fluorescence intensity in eosinophils (red) as a comparison typical graphs of ROS production in neutrophils are shown to the left (green). The two top histograms show unstimulated (PBS) cells and the bottom two histograms show cells activated with phorbol-12-myristate-13-acetate (PMA). (PDF 141 kb) Additional file 2: Cytospin preparations of purified neutrophils and eosinophils stained with May-Grünwald Giemsa. In the first set of experiment neutrophils and eosinophils were isolated using Histopaque 1119 (Sigma) followed by Percoll (GE Healthcare) and thereafter the eosinophils were separated from the granulocytes using MACS Eosinophil Isolation Kit (Miltenyi Biotech). The cells that were removed from during the eosinophil purification step were regarded as neutrophils (A) and the ones that remained as eosinophils (B). In the second part of the experiment eosinophils were purified using the MACSXpress® Eosinophil Isolation kit (Miltenyi Biotech) (C). (PDF 199 kb) Additional file 3: The percentage of eosinophils of polymorphonuclear leukocytes (PMN) and basophils of the leukocytes populations is shown when the patients are divided into active and inactive (A and B) or into GPA or MPA (C and D). Patients with active disease had lower levels of both eosinophils and basophils but no difference were seen comparing GPA and MPA. Kruskal-Wallis test and Dunn's multiple comparisons test was used to calculate the level of significance between the three groups. Values are reported as median ± IQR. (PDF 86 kb) Additional file 4: The level of surface expression on eosinophils of A CD16, B CD64, C CD35, D CD193, E CD62L, F CD88, G Siglec-8, H CD11b and I CD11c was measured in healthy blood donors (HBD) and compared to anti-neutrophil cytoplasmic antibodies associated vasculitides (AAV) patients, divided into GPA and MPA patients, using flow cytometry and reported as geometric mean fluorescence intensity (MFI). Kruskal-Wallis test and Dunn's multiple comparisons test was used to calculate the level of significance between the three groups. Values are reported as median ± IQR. No difference was seen between the GPA and MPA groups. (PDF 95 kb) Additional file 5: Light microscopy picture of two eosinophils that has formed EETs after incubation with PMA for 3 h at 37°C and 5%CO 2 . 
The white arrow indicates the web formed by the DNA and the black arrows indicate the intact granules that remains around the plasma membrane remnants. (PDF 263 kb) Funding This work was funded by Reumatikerfonden (Swedish Rheumatism association), Anna-Greta Crafoord's Foundation for rheumatological research, Konung Gustaf V:s 80-årsfond, The Swedish Kidney Foundation, Royal Physiographic Society of Lund and Alfred Österlunds Stiftelse. All of the funding bodies support the project financially as it was described in the project plan but have not been involved in the design of the study, data collection, analysis of data, interpretation of data or writing the manuscript. Availability of data and materials Raw data files from flow cytometry datasets used in the current study are available from the corresponding author on reasonable request. Authors' contributions This study was designed by all the authors. TH, ÅJ and ÅP performed the experiments, SO collected the clinical data, TH and ÅJ analyzed the flow cytometry data, TH, MH and ÅJ interpreted the experimental data. ÅJ, ÅP and TH were major contributors in writing the manuscript. All authors read and approved the final manuscript. Ethics approval and consent to participate The Regional Ethics Board in Lund, Sweden (EPN2008/110) approved the study and written informed consent was obtained from all participants. Consent for publication Not applicable.
2019-03-08T19:35:17.548Z
2019-03-08T00:00:00.000
{ "year": 2019, "sha1": "327ecb694efaf123f70b59111ff43f29d19a91d1", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s41927-019-0059-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "327ecb694efaf123f70b59111ff43f29d19a91d1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
220631926
pes2o/s2orc
v3-fos-license
Survival after parotid gland metastases of cutaneous squamous cell carcinoma of the head and neck Purpose Malignant tumours in the parotid gland can originate either from the gland itself or as a result of metastatic spread of other tumours, such as cutaneous squamous cell carcinomas (CSCC) of the head and neck area. The aim of this study was to analyse and compare the clinical behaviour of primary as well as CSCC metastatic parotid cancers with special emphasis on therapy and oncologic outcome. Methods Clinical and histopathological data of 342 patients with parotid gland malignomas surgically treated in a tertiary referral centre between 1987 and 2015 were retrospectively assessed. Oncologic outcomes of all cases with CSCC metastasis of the parotid gland (n = 49) were compared to those of primary parotid gland carcinomas (n = 293). Results Mean age at diagnosis was 72.3 years for CSCC patients versus 56.8 years in patients with primary parotid carcinoma. A total of 83.7% of CSCC patients were male, compared to 48.8% in the group of primary carcinomas. Forty-five out of 49 CSCC patients underwent total parotidectomy and neck dissection (91.8%). A total of 93.9% out of all CSCC patients received adjuvant radiotherapy. Five-year overall survival (OS) was 32.6% in CSCC patients versus 77.2% in primary parotid carcinoma patients. Conclusion As compared to primary parotid cancers, we could show that patients suffering from CSCC metastases to the parotid gland presented with significantly higher age and worse survival. Introduction Salivary gland carcinomas (SGC) account for less than 1% of all cancer types in Europe [1]. SGC are most frequently localised in the parotid gland, although the proportion of malignant to benign tumours in the small salivary glands is higher [2]. According to the huge diversity of tumour subtypes and the low incidence, appropriate treatment remains challenging. Twenty subtypes of SGC have been defined by the World Health Organisation yielding different histological and molecular characteristics [3]. Mucoepidermoid carcinoma is the most common subtype [4,5]. Due to possible facial nerve involvement, parotid gland carcinomas (PGC) can be challenging for head and neck surgeons. The biological aggressiveness of PGC varies considerably between the different entities. For example, the overall survival ranges between 95-100% for low-grade adenocarcinoma [6] and 23-50% in high-grade mucoepidermoid carcinoma cases [7]. Prognosis is significantly impaired by locoregional lymph node metastases [4]. Complete tumour removal (R0) is the most effective treatment for PGC. Elective treatment of the N0 neck remains a controversial issue. Radiotherapy can be used as adjuvant therapy in patients with risk factors [2]. Moritz Friedo Meyer and Philipp Wolber contributed equally to this work. Squamous cell carcinomas (SCC) of the parotid gland have a worse prognosis as compared to other malignant tumours of the parotid gland, such as adenoid cystic, mucoepidermoid, and acinic cell carcinomas [8]. Tumorigenesis of squamous cell carcinoma of the parotid gland [9] is still under discussion: While some might consider primary SCC of the salivary glands as being non-existent, the vast majority of patients report on a previous cutaneous squamous cell carcinoma (CSCC) in the head and neck area [10,11], typically 1 year after onset of disease [12]. Therefore, these parotid tumours are in fact representing CSCC-derived lymph node metastases [13]. 
Eighty percent of all CSCC are found in the head and neck region [14]. High exposure to ultraviolet (UV) and ionising radiation as found in Australia was reported to foster the formation of CSCC [11]. The objective of our study was to analyse and compare the clinical behaviour of primary PGC and CSCC metastatic parotid cancers with special emphasis on therapy and oncologic outcome. Methods All patients with histologically proven malignant tumours of the parotid gland who underwent combined surgery and radiation therapy or surgery alone at the Department of Otorhinolaryngology, Head and Neck Surgery of the University Hospital Cologne, Germany, between January 1987 and December 2015 were retrospectively assessed thus identifying all cases of metastatic parotid CSCC. Clinical data were retrieved from patients' medical records, histology reports, and radiographic imaging. TNM staging was performed according to the 8th edition of the American Joint Committee on Cancer (AJCC) [15]. Demographic data as well as oncological outcomes were compared between metastatic CSCC of the parotid gland and primary parotid gland tumours. Therapy All clinical cases had been discussed at a multidisciplinary tumour board meeting prior to treatment. Before surgery, a fine needle aspiration of the mass was performed. In case of suspected malignancy, an intraoperative frozen section procedure was performed and surgery was extended to a total or radical parotidectomy and neck dissection. Patients with clinically and radiologically negative neck nodes were treated with selective neck dissection level [16,17]. Preoperative clinical facial nerve palsy and obvious tumour infiltration of the facial nerve intraoperatively resulted in resection of the facial nerve and reconstruction in selected cases. Additional adjuvant radiation therapy was indicated in cases of high-grade carcinoma (G3 or G4), adenoid cystic carcinoma, positive resection margins, cervical lymph node metastasis, and perineural invasion. These patients received a daily fraction of 1.8-2.0 Gy five times a week by a linear accelerator (LINAC, 6 MV-photons). The ipsilateral cervical lymph node levels (levels I-V) received 50 Gy while the parotid gland region and tumour affected levels of the neck have been irradiated with 60-65 Gy. All patients underwent regular follow-up examinations every 3 months in the first year, every 6 months for the subsequent 3 years, and annually from the fourth year onward. Residents' registration offices were consulted for information regarding residential status or death. Statistical analysis The overall survival rates were assessed using the Kaplan-Meier method for incomplete observations. The log-rank test was then used to detect correlations between prognostic factors and outcome. A p value of < 0.05 was considered statistically significant. All statistical tests were performed using SPSS (IBM SPSS Statistics 25.0, IBM, New York City, NW, USA). Results A total of 342 patients suffering from malignant tumours of the parotid gland were identified. Forty-nine out of these were diagnosed with metastatic CSCC of the parotid gland. CSCC Mean age for CSCC patients (n = 49) was 72.3 years (30-93 years) (Fig. 2) with a male to female ratio of 5:1. The age of CSCC patients was thus significantly higher than the age of patients with PGC (p = 0.012). Table 1 depicts the clinical data including the type of therapy. Of note, six patients who underwent a lateral parotidectomy refused any extended tumour surgery. 
Three patients refused a further adjuvant therapy. Mean follow-up was 31 months. Five-year overall survival rate was 32.6%, i.e. yielding a significantly worse outcome as compared to PGC patients irrespective of lymph node metastasis (p < 0.001). No significant survival difference could be detected between patients with sole involvement of the parotid gland (CSCC_N-) compared to patients with additional neck lymph nodes CSCC_N+ (p = 0.109). Nevertheless, 19.9% 5-year overall survival in the group of patients with additional lymph node metastases (CSCC_N+) was even less favourable as compared to patients with only parotid gland metastasis(s) (CSCC_N-) with an overall survival of 38.1%. Even the unfavourable group of PGC with positive neck lymph nodes (PGC_N+) showed a significantly better prognosis as compared to CSCC without additional cervical lymph nodes (CSCC_N-) (p = 0.008) (Fig. 1). Discussion In contrast to other previously published studies, this study focuses on malignancies of the parotid gland and distinguishes between primary and secondary tumours with respect to clinical and therapeutic characteristics as well as 5-year overall survival. PGC were mainly classified as adenocarcinoma NOS, mucoepidermoid, adenoid cystic, and acinic cell carcinoma. A total of 77.2% 5-year overall survival rate is comparable to previously published results [18,19]. In the CSCC group, the majority of patients were male. This is consistent with already published data of PGC [20]. The age distribution of the CSCC patients with parotid involvement presented here also agrees with data from previously published patient cohorts thus confirming that older patients are particularly affected by that disease [20]. Primary CSCC were most often located in the area of the auricle, temple, and forehead. This is in accordance with previous reports [12,21]. Creighton and colleagues showed that CSCC preferentially metastasise to the forehead (85%), periauricular area (76%), and in 30% to the scalp, cheek, and infraauricular region [21]. Hirshoren et al. further demonstrated that the majority of CSCC originating from the scalp, auricle, and cheek area metastasise to the parotid gland [12]. Despite multimodal therapeutic strategies, the 5-year OS remained poor in CSCC patients (32.6%) as compared to PGC (77.2%). These results are in line with previously published data of other authors [11,20,22] and are due to a generally higher tumour stadium as a consequence of lymph node metastasis in the CSCC group. It is noteworthy that even PGC patients having loco-regional metastasis had a better 5-year OS as compared to CSCC patients irrespective of neck node metastasis (CSCC_N-and CSCC_N+). Cervical metastases were demonstrated to significantly worsen the prognosis of CSCC patients [11,20]. However, in our study, we could not find a significant difference in 5-year overall survival for CSCC patients without further neck lymph node metastases (CSCC_N-) compared to CSCC with neck lymph node metastases (CSCC_N+). It should be discussed how the overall survival in this group could be improved: On the one hand, studies indicate that an improvement in diagnosis and consistent implementation of adequate staging and timely initiation of therapy can improve overall survival. Deilhes et al. demonstrated that 37% of patients were not diagnosed until the disease was in an advanced stage, indicating a lack of CSCC identification. 
For the remaining 69 patients, 7% did not receive treatment within 3 months of the CSCC being identified, 62% had an incomplete histological report, and 37% had incomplete treatment [23]. On the other hand, an escalation of therapy in order to improve overall survival seems reasonable. However, all patients with advanced CSCC in our study had already received both radical surgery and adjuvant radiotherapy. Increasing the radicality of the surgery might lead to better survival. Coombs et al. concluded that more extensive surgery, including lateral temporal bone resection, could improve the local control rate in cases of advanced disease [24]. To improve overall survival, immunotherapy might also be added to standard therapy in an adjuvant or neoadjuvant setting in the future. Current drug therapy options have been examined in a palliative setting by several authors. Montaudie et al. reported on cetuximab as a monomodal therapeutic option in unresectable palliative CSCC patients (n = 58, mean age 83.2 years) [25]. The overall response rate (ORR) was 53% and 42% after six and 12 weeks, respectively. The authors concluded that cetuximab delays disease progression [25]. In a review by de Lima et al., the authors summarised studies on CSCC drug therapy. Again, the application of cetuximab was discussed in combination with checkpoint inhibitors [26]. Checkpoint inhibitors could serve as a therapeutic alternative in cases of recurrent CSCC yielding parotid metastases. Compared to platinum-based chemotherapy, modern immunotherapeutic strategies are considered to be better tolerated, especially in elderly patients. Recently, the PD-1-blocking antibody cemiplimab was approved by the FDA and EMA for advanced CSCC treatment. However, detailed guidelines for its indication are still missing, which might be, at least in part, due to a lack of appropriate clinical studies for patients with recurrent or metastasised CSCC [27]. Steeb et al. reviewed the previous studies and experiences using checkpoint inhibitors in advanced CSCC and concluded that cemiplimab and pembrolizumab immunotherapy could result in a response rate of 40-55% in a first-line palliative setting [27][28][29]. These promising results might be due to a high immunogenicity of CSCC [30]. However, the exact setting or composition in which immunotherapy should be applied remains a matter of debate. The retrospective character of our study and potentially associated selection bias, as well as the relatively low number of patients with CSCC, limit its clinical validity.

Conclusions
The present study retrospectively evaluated 342 patients with primary PGC (n = 293) and CSCC metastatic to the parotid gland (n = 49), demonstrating a significantly worse prognosis for metastasised CSCC despite an intense multimodal therapeutic effort (radical surgery and adjuvant radiotherapy).

Acknowledgements Open Access funding enabled and organized by Projekt DEAL.

Data availability All data are available on request from the corresponding author.

Compliance with ethical standards

Conflict of interest The authors declare that they have no conflict of interest.

Ethical approval All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Code availability (software application or custom code) Not applicable Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2020-07-18T23:20:21.442Z
2021-01-05T00:00:00.000
{ "year": 2021, "sha1": "037d50dd45a46063facb80791a93412c4be23f37", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10006-020-00934-8.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "5b97a5f2871942e5558900e04221420ece89cd59", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
73682160
pes2o/s2orc
v3-fos-license
The Life Struggle of Female Characters in the Novels of Abidah El Khalieqy ( A Feminism Study ) This research aimed to obtain an in-depth picture and understanding of the life struggle of female characters in the novels written by Abidah El Khalieqy. A qualitative descriptive research design was used with a feminist approach. The data in this research was the result of a study of the novels Perempuan Berkalung Sorban and Geni Jora by Abidah El Khalieqy. The procedure used in analyzing research data was content analysis. The data validity analysis used the triangulation technique. Based on the results of research and discussion, it was found that the struggle to fight injustice was a struggle against 1) marginalization of women; and 2) subordination of women. A literary work can be viewed as a portrait of human life. In the literary work, the author presents the model of life and social conditions of the characters which include social structure, social relations, social contradictions, kinship relations, strong group dominance of the weak and other social life sides like real life. Thereby, appreciating and understanding literary works are the same as living and understanding human and their life in all aspects which in fact can be studied by human-related disciplines. Literary works are the media used by authors in conveying their ideas. As a medium, literary work becomes a bridge that is conveyed to readers. Novels as one of the media in the ideological struggle at the cultural level can be a significant basis for understanding feminism. Novel is created with a variety of purposes about the existence of women in various cultural contexts as well as with the various perspectives of women and their world. The placement of women's positions in the lower and less powerful places exists because of patriarchy, a system that allows men to dominate women in all social relationships. In a patriarchal culture, there is no world for women outside marriage. The institution of marriage is a cultural space created with a strong myth for men's interests. Now, women need to be quite happy and relieved, because they are no longer merely seen as an object of physical beauty to be seen and enjoyed, but they have been seen as a multi-dimensional and qualified humannoble. Feminist literature is rooted in an understanding of the inferiority of women. The key concept of feminists is equality between the dignity of women and men. The rise of many Indonesian women authors in recent years, the rise of female readers and the frequent presence of female characters in Indonesian literature deserve to be observed in the context of applying feminist literary criticism. In this study, the author chose the work of Abidah El Khalieqy. Some of Abidah El Khalieqy's works are best seller. The results of her writings had gained national recognition, especially for the category of female authors. In her works, Abidah more dominantly raised feminist issues with the background of boarding school life. This research focused on the novel text which were limited to the feminism problem in the novels Perempuan Berkalung Sorban (2001) and Geni Jora (2004) written by Abidah El Khalieqy. Both novels featured attractive, complex and dynamic female characters that deserved to be investigated. 
The selection of the Perempuan Berkalung Sorban and Geni Jora novels by Abidah el Khaileqy as the object of research was based on actual and relevant issues with contemporary life that would be useful for organizing a better future, especially for women. In addition, the novel Perempuan Berkalung Sorban and Geni Jora by Abidah el Khaileqy could be used as teaching material as a model of gender-based literary learning. This study examined several issues related to women's view of life, forms of injustice against women and the struggle of women leaders. This study aimed to see the position of women in the middle of patriarchal cultural system that existed in the novel based on feminism perspective. This research was conducted to get a deeper description and understanding about the life struggle of female characters in the novels of Abidah El Khalieqy. Feminism was born in the early 20th century pioneered by Virginia Woolf (2013) that feminism is derived from the word femine (woman) meaning a woman (singular) which aims to fight for the rights of women (plural) as a social class. Furthermore, Thornham (2000: 38) considered women to be inside and outside all the symbolic structures that make up identity. Women are out of the nation because they themselves can not claim a national identity. They are outside the class because they do not have a class marker. In the material sense, women are confined in the private sphere excluded from social power, but their ideological power is much greater. Gender differences will not be a problem as long as it does not give rise to gender inequalities. However, gender differences have generated gender inequality, especially to women. Manifestations of gender inequality occur in various forms in the economic, political, social and culture (Fakih, 2003: 12). On the other hand, Sugihastuti and Suharto (2010: 18) said that feminism is a movement of equality between men and women in all fields, political, economic, educational social or organized activities that defend women's rights and interests. Feminism is an awareness of oppression and extortion of women in society both in the workplace and at home. Sugihastuti and Suharto (2010: 18) say that feminism is a movement of equality between men and women in all fields, political, economic, educational, social or organized activities that defend women's rights and interests. In the development, injustice happen to women merged in the literary work because the dominance of patriarchal culture is the production and acceptance of literary works in the hands of men. Most literary works are written and also criticized by men (Hellwig, 2003: 10-11). In a literary world full of imaginary, characters of male characters are portrayed as someone who possesses heroic traits and of course, the depictions of women fit their imaginations as well. A female hero can be a heroine if it is in accordance with the concepts that men have determined. Critics of feminist literature have an important goal of feminist literary criticism, which is to help the reader understand, describe, interpret and judge the works written by authors (Djajanegara, 2000: 27). Thus, feminist discourses have changed the approaches to all literature, and that the integration of women's voices has developed. Although it seems that up to now feminist literary criticism has not made as much change as needed yet. The subsequent development is feminist literary critics also paid attention to women as writers. 
Feminism Study To date, various juridical activities and instruments have been established to support the realization of Gender Equality and Justice (Kesetaraan dan Keadilan Gender) in Indonesia. The government's commitment through the Ministry of Women's Empowerment to bring gender equality and justice into reality is very high. The Government of Indonesia ratified by law No. 23 of 2004 on the Elimination of Domestic Violence (PKDRT) which provides protection for victims of violence and criminal sanctions for the perpetrators. The development of women's studies related to the paradigm underlying the struggle or demand for gender concerns in Indonesia is outlined below: (1) the concept of Women in Development (WID), (2) the Gender and Development Concept, (3) the Concept of Gender Women Empowerment (PUG ) Or Gender Mainstreaming (Cleves, 2007: 21-30). In Nugroho's view (2008: 40), gender inequality is a system and structure for men and women to be victims of the system. This view illustrates that gender is one of the key social structures of contemporary culture and is characterized by power struggles and injustices. Gender and inequality hierarchies are maintained among other factors, with meaning and belief systems, and this alters results through representation. Representation is built through language, image, social practice, material and symbol dimensions. Still on the similar perception, Sugihastuti (2000: 113-121) gives an overview of the role of women in everyday life, whether at home or outside, every woman has her own choice and is responsible for her own desires. The role of the female character is part of the main task that must be performed as a woman. There are various roles that she has from birth to adulthood, such as being a wife, as a mother, and as a parent. The roles that she possessed are part of her life as a woman. Women's Struggle Gender studies related to the concept of women's empowerment promote gender equality and empower women as an effective way to combat poverty, hunger and disease. This was done to meet the practical and strategic needs of women. The fulfillment of women's practical needs involves the need for women to carry out social roles to meet short-term needs such as improving living standards, improving health services, providing employment, illiteracy eradication and so forth. Marginalization of Women According to Muniarti (2004: 20), marginalization is placing or shifting women to the periphery. Women are imaged weak, lacking or irrational, lacking or daring, so inappropriate or unfit to lead. As a result, women are always secondary whenever there is a chance to lead. What is meant by marginalization is the process of impoverishing women so that women cannot act and express because the role of women is shifted to the margins or marginalization of women positions socially in the acquisition of economic and political resources both in the domestic and public sphere. 2. Subordination toward Women Sunarto (2009: 139) states that subordination is an effect arising from the dominant position of men over women acquired through physical violence, coercion, structural violence (done by social institutions and economic power), and symbolic violence. Thus, subordination is defined as a social process in the life of a society that raises a policy on women so that it does not consider that women are important which position women and their work lower than men. Gender and Stereotype Stereotypes are the labeling or marking of a particular group. 
A stereotypical view is a standard image of an individual or group that is inconsistent with existing empirical reality. Negative labeling (stereotypes) generally generates gender inequality. Gender-role stereotypes are common categories that describe views and beliefs about women and men (Santrock, 2003: 374). Thus, stereotyping is the labeling or negative labeling of a people with all forms of representation of the beliefs of society. Gender and Violence Many kinds and forms of crimes can be categorized as gender violence, including: the form of female rape, including rape in marriage; Acts of beatings and domestic violence; Forms of genital mutilation; Violence in prostitution; Violence in pornography; Violence in KB settlement; Veiled violence; Sexual violence (Fakih, 2003: 20). Violence against women is an attack carried out because of the gender assumption that makes women as victims of domestic violence, whether physical, psychic, economic neglect, and sexual violence 5. Workload Nugroho (2008: 47) argues that women's gender roles in the public's assumption is managing the households so that many women bear the domestic workload more and longer than men. In fact, for the poor families, the burden must be borne by women in addition to working outside so that they should bear a double workload. For community groups with sufficient economic levels, domestic workloads are often assigned tohousemaids METHOD This research used descriptive qualitative design. The researchers described systematically, factually, and accurately about the studied facts and causal relationships of the phenomena. Data obtained in this study were through explanation, speech figures or attitude, or the thought that Abidah El Khalieqy would like to convey through his novels. The study relied on inter subjectivity assumptions and generally created the meaning and "reality" between researchers and participants. The data in this research were the result of study of novel document of Perempuan Berkalung Sorban and Geni Jora by Abidah El Khalieqy in the form of words, sentences in the form of utterances, description of figures, and inter-character dialogue which indicated the existence of feminism that happened to the analyzed novel figures. The data analysis procedure used in this research was content analysis. Content analysis also analyzed not only the manifest nature of a text as Mayring (2000) says: "Content analys is not only the manifest content of the material-as its name suggests". When referring to Mayring, content analysis also emphasizes on the data interpretation. Data validity analysis used triangulation technique. There are two forms of triangulation to check data in theoretical triangulation. The Life Struggle of Female Characters in the Novels of Abidah El Khalieqy 1. The Life Struggle of Female Characters in the NovelPerempuan Berkalung Sorban a. The Struggle against Marginalization toward Women The struggle against the marginalization of women in the economic field as a process of marginalization of women's position socially with the aim of impoverishing women so that women cannot act and expression of the role that is shifted from where it can be achieved. It was then opposed by Annisa figure in the novel Perempuan Berkalung Sorban. In the economic field, Annisa wanted to be able to work and had her own income. By working, each at the beginning or the end of the month she would receive a salary. She did not want to depend on men. 
" Annisa held that if she went to the office, her clothes were neat and tidy unlike Lek Sumi who was all day in the kitchen, her body smelled and her clothes were oversized. If Annisa went to the office, everyone looked at her respectfully, did not cover their nose if she passes as they closed their nose near Lek Sumi because of the smell of onions and shrimp paste. After marrying Samsudin, the man of his parents' choice, Annisa in her household did not have the right to organize finances, organize shopping, and all household necessities, as well as her personal needs. Financial affairs submitted to his second wife, namely Mbak Kalsum. For this reason, Annisa then talked to Samsudin so that he split the money in a fair way just as the sunnah of polygamy. Annisa demanded her right to get her birthright as a wife. After I think about it, I talked to Samsudin as well so that he split the money in a fair way just as the sunnah of polygamy. He said he would show me justice someday. (Perempuan Berkalung Sorban,p.118) In the political sphere, women can have the same rights as men to express opinions, both for themselves, within the scope of the family and the larger area, the political rights in society and the state. In the domestic sphere, women also have the right to refuse, accept, or initiate in intimacy with their husbands. A wife also has the right and freedom to refuse and accept. (Perempuan Berkalung Sorban,p. 139) Annisa's body always held rejection, as well as her soul while in touch with Samsudin. Annisa never felt ready when invited to have sex with Samsudin. When women were not ready, for Annisa it hurt, even abused. However, Annisa has not dared to refuse. She is gathering strength. One day later, Samsudin will be shocked, also many people would be shocked to see what Annisa would do in the future. b. The struggle Against Subordination towardWomen In the novel Perempuan Berkalung Sorban there was subordination that had placed the female characters in positions not important and never taken into account. The various forms of subordination that were handed down to women made the Annisa figure as the main character in the novel to bring up the ideas of gender equality championed by it. Annisa was fighting for her right to subordination, in determining decisions in her life like men. Annisa had proved herself that she was capable of doing great things, as high as anything she could achieve, even beyond what men could achieve. Annisa wanted to be a free woman, not in the confines of anyone, not even a man. She wanted to be a smart woman, as her name implies, not to be a foolish woman that could be lowered by men. Annisa wanted to be able to answer questions asked by her brother Rizal about the differences between humans and frogs and other reasons not to catch the females in trouble Annisa had a desire to ride a horse. Her desire to learn to ride a horse had surpassed the supreme tone of his father's anger. Annisa was forbidden to ride a horse because she was female, the ability to ride a horse only deserves to be owned by a man. However, Annisa thought otherwise that she had the same right to be able. To realize that desire, she trained to ride the horse with her uncle, namely Lek Khudhori. "Yes, why, Pak? I can't? Kak Rizal also learned to ride horses. " (Perempuan Berkalung Sorban, p. 7) Annisa thought that whatever happened she must be able to learn to ride a horse. She would still learn to ride horses. 
Annisa's dream of being able to ride a horse was to equate her as Aisha or Princess Budur who could lead a war army, to be like Tjut Njak Dhien who was also great, also wanted to be great as Queen Balqis or Hindun bint Athaba. Moreover, as a leader means that other people would be subservient. In fact, the mighty men became obedient behind her. c. Struggle Against Stereotypes about Women The form of struggle against stereotypes against women was to change the negative labeling of her people. Labeling as a form of stereotypes about women who are always considered stupid even loudly thrown at her brother. She called Rizal the 'pinhead'. The pinhead means a fool. The negative stigma that women are stupid was opposed by Annisa. d. The Struggle Against Violence towardWomen In the novel Perempuan Berkalung Sorban there are various struggles against violence toward women such as the struggle against physical violence, the struggle against sexual violence and the fight against emotional violence. This can be seen in the following description. 1) The Struggle against Physical Violence Physical violence that occurred in the novel Perempuan Berkalung Sorban which happened to Annisa as the main character of women in this novel is a follow-up of sexual violence perpetrated by Samsudin, her husband. I was about to scream, but lost quickly with the palm of his hand that silenced my mouth. (Perempuan Berkalung Sorban, p. 97) "Fine! Fine! ... But marriage can not be based on a single disease. Therefore I will punch you. Do you hear now?" "Second, you have hurt me for so long and tried to transmit your disease to me. But Allah guards me from your tyranny. So to Allah I give a worthy reply to your violence and the third ... " (Perempuan Berkalung Sorban, The quotation above illustrates the physical struggle or resistance to physical violence against women. Annisa could not retaliate through physical resistance. What Annisa could do was to protest against any kind of harsh treatment of Samsudin against her. Annisa whose physical, mind, and heart was tortured felt no longer strong against Samsudin, she was about to divorce Samsudin. Annisa's anger and hatred for Samsudin reached its peak when the physical violence that happened to her was also experienced by Mbak Kalsum, her co-wife , and Fadilah, Samsudin's daughter with Mbak Kalsum. 2) The Struggle against Sexual Violence Annisa's struggle to fight sexual violence was when she always got contumely from her husband, Samsudin. Samsudin always asked for his rights without seeing the condition of his partner. Samsudin always insisted, even raped Annisa every time of having sex. "... Indeed up to now I have not dared to express the rejection verbally, because I myself have not got clarity about the law ...." (Perempuan Berkalung Sorban,p. 139) "... But if I hurt his heart, I can not stand his savagery, is that justified by religion?" (Perempuan Berkalung Sorban,p.169) Annisa attempted to counteract abusive behavior and abuse against her through attitude and behavior. However,, Annisa has not dared to express the rejection of Samsudin request verbally because she has not got clarity about the law in Islam. Annisa's body always held rejection, as well as her soul while in touch with Samsudin. Annisa never felt ready when invited to have sex with Samsudin. When women were not ready, for Annisa it hurt, even abused. 
3) The Struggle against Emotional / PsychicViolence Annisa's form of resistance in her struggle against psychic/emotional violence was by opposing all kinds of psychic violence she experienced frontally, directly to the target object. For example, at the beginning of the story Annisa got a statement that offended her, namely statement and question from Pak Joko, Nisa's Indonesian teacher who compared himself to Lek Khudhori. Annisa considered that Lek Khudhori could not be compared to anyone, with any party. She replied to a statement from Mr. Joko that she would be really angry and hoped not to try to be wiseacre about Lek Khudhorianymore. "No! Lek Khudhori can not be compared to anyone, with any party, also with the father. You must know that and do not try to be knowledgeable about him. I will be really angry! " (Perempuan Berkalung Sorban,hal. 58) Lek Khudhori became a role model and model for Annisa. He could not be compared to anyone, not even Mr. Joko, Nisa's Indonesian teacher or her own husband, Samsudin. Only Lek Khudhori can keep me at ease and happy. The statement that Annisa threw to avenge her heartache. e. Struggle against Workload forWomen Overloaded or double burden due to stereotypes and subordination caused the woman's role to increase (being a working woman or a career) then her charged domestic role must be arranged to divide it proportionally. The struggle must be manifestly realized in the household. Women who participate in struggling for the equality of workload in the household are voiced by Annisa. Since childhood, Annisa considered that in matters of domestic work it is not fair. The duties of a woman are numerous, there are washing, cooking, ironing, mopping, sweeping, breastfeeding, feeding, bathing her child, and more. Unlike men, Mom, just one, going to the office. (Perempuan Berkalung Sorban,hal. 14) According to Annisa, the duties or responsibilities that women have to do in the domestic sphere are numerous, including washing, cooking, ironing, mopping, sweeping, feeding, breastfeeding, bathing their children, and others. Unlike men, only one, ie go to the office. For Annisa, the Woman's nature is just pregnant, giving birth, and breastfeeding her child. For the business of washing, cooking and educating the child are husband's responsibility as well. Associated in terms of having sex, both husband and wife have the right to feel pleasure in the relationship of husband and wife. So, it is not the wife's duty only, but also the obligation of the husband to make the wife happy. a. The Struggle against Marginalization toward Women the Novelof Geni Jora The root of gender inequality is related to patriarchal culture. In fact, not a few women who are qualified in various fields in the public space, not just adept at playing her role as mother and wife. In the political sphere for example, women can have the same rights asmen. "As in a fairy tale, the queens of Malikah, Khatun, they appeared little by little from the soft moans of yellowing pages in ancient books ... they appeared ... abdicating the throne from mother to daughter." (Geni Jora,hal.28) In the field of politics and government, women have equal opportunity to occupy certain positions, for example being the leader of the kingdom that does not have to be dominated by men. The queen of Malikah, Khatun became ruler of the royal throne to then abdicate the throne to her daughter, not to the son who became her crown prince b. 
The struggle Against Subordination towardWomen In the Geni Jora novel there was a subordination that placed the female characters in unimportant positions and had never been taken into account. The various forms of subordination that were handed down to women made the Kejora angry against men so that the ideas of gender equality were championed by her. Kejora needed to fight for her rights and freedom in determining decisions in her life as a woman like men. As a student, as a santriwati, as a colllegian, I sat facing each one. I installed my hearing in focusing my eyesight. I absorbed knowledge with my brain and my fuad. I learned science to meet the nutritional growth of my life. So I stand now, in front of you my ustad (Geni Jora, p.48). The above quotation illustrates how the figure of the Kejora struggled the subordination done by her grandmother. Kejora did the struggle by continuing to welcome the future, that is by continuing to study diligently. It can be seen from the way she learned to read books, holy books and speeches of kyai as well as paid attention to teachers' explanations. There is nothing in vain from rebellion. And nothing lasts from injustice. It always bears rebels with different types and models. And I think eradicating injustice is with a mirror on the face of the protagonist. (Geni Jora,page268) There are still many women undergoing subordination so that their position can not be equal with men. Women are always not considered important. It is evident that men, especially those who are still in a patriarchal environment, are always considered to be superior and first class. Unlike women, the condition of women who are in a superior position has never been taken into account. However, in today's development it can not be applied to women because women today tend to rebel. Their rebellion was not merely a resistance. However, the rebellion is a form of genderstruggle. Kejora wanted to be an independent woman just like free men without any restraint. However, it was with politeness and good way. She wanted to run her like a man running his life with work and education on the same portion. Based on this, Kejora indicated that she did not want to be a victim of subordination that had curbed and discriminated women. c. Struggle Against Stereotypes about Women in the Novel of Geni Jora One form of this stereotype is sourced from a gender perspective. There are so many stereotypes that occur in society that are attached to women generally so that they cause complicating, limiting, impoverishing, and disserving women. "Big fish in a little sea," replied Firouz calmly. "Pity and shame!" (Geni Jora,p 29) Negative labeling (stereotypes) in the view of feminism is aimed at women. The above quote is the opposite; men who scoffed at women's ideas about gender equality were opposed by men in discordant voices, the voice of less confident men in women-themed environments. Women who have 'stupid' stereotypes turn things around to narrow-minded men, jealous of women. So women are the pearls of gold, and the jewels of the world, the objects of interest to be envied, assaulted, and robbed of primitive male desires. Kitchen and miniskirts are no longer the trademarks of wives. (Geni Jora,p.156) Stereotypes about women that women are tempting faith creatures with their 'mini skirts'. Mini skirts can be interpreted as cheap, paid women, easily accessible in many places by the masses. For Kejora, the trademark is no longer valid. For her, women can be more than that. 
Able to be anything, higher than what men canachieve. d. The Struggle against Violence toward Women in the Novel of Geni Jora In the novel of Geni Jora, there are various struggles against violence against women such as the struggle against physical violence, the struggle against sexual violence, and the fight against emotional violence. This can be seen in the followingdescription.. 1) The Struggle against PhysicalViolence Ayeda and Fashafasha were true female warriors who took part in fighting for Palestinian land over Israeli occupation and its allies. For Ayeda and Fashafasha, martyrdom is a dream, a desire, and an ambition. "But we're not just in between," Fasha corrected, smiling, "we're part of them. Martyrdom is a dream, a desire, and an ambition. That's all my dream, "explains Fashafasha. (Geni Jora, p37) The quotation above illustrates the physical struggle or movement of the Palestinian resistance against the Israeli arbitrariness that occupied Palestine so that their freedom was forcibly taken away. For that, all the people were constantly moving Mossad, elderly people, young people, children and even women fight for their homeland, Ayeda and Fashafasha are equally active in Harakah Al-Muqawwamah Al Islamiyah, abbreviated as 'Hammas'. Hammas had a sophisticated, highly confidential operating method, and is capable of striking and very strikingoperations. 2) The Struggle against SexualViolence Fighting for gender justice is a tough task, because gender issues are an intense issue, for everyone who is emotionally involved. There are a lot of resistance when the struggle for gender inequality is activated because it is also a matter of challenging the privileges that a person gets from genderinequality. "No!" I shrieked loudly, "what will Uncle do to me! Let go of my hands! Let go! "I yelled that made Uncle bling wanted to run away, but where? Grandmother was at the door of the room and found us alone together. (Geni Jora,p. 111) That Lala being harassed by his two uncles made Kejora disliked because that behavior made Lala looked miserable. Therefore, Kejora wanted to help her. Then she slammed the door till it was crackling to surprise his two uncle and managed to escape Lala. Sexual harassment by uncle was also experienced by Kejora. Because of the actions of her uncle, Kejora tried to fight him byshouting. 3) The Struggle against Emotional/PsychicViolence Emotional violence was also experienced by Kejora. Kejora experienced emotional violence because of Zakky's behavior. Zakky, Kejora's boyfriend, was a masher. It was shown by the behavior he did. Zakky's behavior when meeting Lala, the way he looked at her, touching Lala's fingers even he would meet her in Jogjakarta in December illustrated Zakky's masher attitude that made Kejora jealous and sad. It is deeply described asfollow: "…..Independent. Trying to adapt politely and move. If men love to hunt there is nothing wrong women like the same thing. " (Geni Jora,p. 22) Emotional violence has affected the psychological condition of the victim. In the Geni Jora novel the emotional violence raised psychological conditions of inner pressure and jealousy towards men. Such conditions were experienced by Kejora's mother and Kejora. Kejora's mother who was made second wife by his father so that less attention from his father made her soul depressed. Kejora was very opposed to the practice of polygamy. She did not want to have the same fate as her mother, as a victim of male abuse. 
It was stated when she was talking to Najwa, Zakky'ssister. "What if Zacky did polygamy, what is your reaction?" Ask Najwa "I will do polyandry in illegal ways". (Geni Jora,p.192) The above quotation was the thought of the Kejora about justice delivered when answering the Najwa question. The right of equality and justice in the opposite sex relationship specifically was having more than spouses as well as Muslim men who were allowed to practice polygamy. Kejora fought it in a similar way, but not the same. Kejora would do polyandri using legal ways, namely to divorce Zakky, then remarried with a movie star that was more handsome than Zakky. Polyandry or not, the important thing was fair. If men could do it, women could do it as well. Therefore, Kejora opposed polygamy. Her struggle for equality appeared in her thinking that women could practicepolyandry. e. The Struggle against Women'sWorkload Overloaded or double burden due to stereotypes and subordination caused the woman's role increase (being a working woman or a career) then her charged domestic role must be arranged to divide it proportionally. The struggle must be manifestly realized in the household. The female characters in the Geni Jora novel who had championed the workload in the household were voiced by theKejora. Nadia Masid, like Nishwa or Qadisha, is a today's generation who has enjoyed technological progress. Going to college and abroad for conferences. She is no longer a woman with her hands clasped with flour mixed tajin and kuskus. Nor was the woman with waving djellaba bringing a basket of dirty clothes going to the Onila River. (Geni Jora,page26) The quotation quote above shows the struggle of equality in a double workload for women. She illustrated Nadia Masid, Nishwa, and Qadisha as the present generation who had enjoyed technological progress and could go to college and abroad to attend conferences. She was not a woman with her hands wrapped in flour mixed with tajin and kuskus. Nor was the woman with waving djellaba bringing a basket of dirty clothes going to the Onila River. It showed the struggle of equality in the acquisition of educational opportunities without the limitation of space and time for women so that she could develop herself and escape from manpower with a double workload. CONCLUSION Based on the results of the research and discussion, it was found that the struggle to fight the injustice are the struggleagainst:
2018-12-26T19:03:10.856Z
2018-07-26T00:00:00.000
{ "year": 2018, "sha1": "043c14e4e198806f5179aa378c88284daec20fe1", "oa_license": null, "oa_url": "https://knepublishing.com/index.php/KnE-Social/article/download/2730/5885", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "f2846682c3fbf6fb5972ca640f3752c06393184d", "s2fieldsofstudy": [ "Art" ], "extfieldsofstudy": [ "Art" ] }
89684275
pes2o/s2orc
v3-fos-license
PHYTOPHARMACOLOGICAL PROPERTIES OF MELOTHRIA MADERASPATANA: A REVIEW

Medicinal plants play a vital role in traditional systems, and it is necessary to study the pharmacological activity of individual plants used for treating diseases. Melothria maderaspatana Linn. belongs to the family Cucurbitaceae, is mostly found in South India, and shows biological activities such as antibacterial, antioxidant, larvicidal, antiulcerogenic, antidiabetic, hypolipidemic, antihypertensive, immunomodulatory, and antihepatotoxic activity relevant to the treatment of various diseases, which are discussed in this review paper.

INTRODUCTION
Herbal plants are used in traditional medicine systems because of their medicinal value. These plants are a source of raw material for pharmaceutical industries. Melothria maderaspatana Linn. Cogn. is one among them. It belongs to the family Cucurbitaceae. It is an annual monoecious herb found in the hilly regions of India. Traditional medicine holds that it is a good diuretic, stomachic, mild anti-inflammatory, antipyretic, sudorific, and antiflatulent agent, besides its use in biliousness and vertigo. A preliminary study was conducted to characterize the phytochemicals present in M. maderaspatana, a plant drug used in traditional medicines [1]. It is called Musumusukkai in Tamil [2]. It is used in Siddha medicine against various diseases [3]. An ethnobotanical study of medicinal plants used in the Villupuram region of Tamil Nadu has also been conducted [4]. This review paper deals with the pharmacological studies that have been explored.

ANTIBACTERIAL ACTIVITY
Harshiny et al. synthesized silver nanoparticles using a leaf extract of M. maderaspatana and conjugated them with ceftriaxone. The results showed that ceftriaxone conjugated with silver nanoparticles had better antioxidant and antimicrobial effects than unconjugated nanoparticles [5]. Riyazullah et al. conducted a study showing that soil and environment are major factors with a tendency to affect the activity of medicinal plants. They collected M. maderaspatana from India and Sri Lanka and tested its antibacterial and antifungal activity using different organic extracts, with ciprofloxacin used as the standard for antibacterial activity and clotrimazole as the standard for antifungal activity [6]. Hemamalini and Varma demonstrated the antimicrobial activity of methanolic and petroleum ether leaf extracts, and the results showed that the methanolic extract was more effective [7].

ANTIOXIDANT ACTIVITY
Harshiny et al. confirmed the antioxidant activity of M. maderaspatana by the 2,2-diphenyl-1-picrylhydrazyl (DPPH) assay [5]. The antioxidant activity of Melothria was also studied in sham-operated and uninephrectomized DOCA-salt-induced hypertensive rats, and it was concluded that M. maderaspatana showed antioxidant activity [8]. Another study evaluated the in vitro antioxidant activity of an aqueous extract of M. maderaspatana by radical scavenging assays against DPPH, hydrogen peroxide, hydroxyl radical, and ABTS, and the results showed that the Melothria extracts effectively scavenge all these radicals [9]. The antioxidant activity of a methanolic leaf extract was examined by the DPPH assay, and the results showed an EC50 value of <10 µg/ml [10]. Different fractions of Melothria were also evaluated, and the ethyl acetate fraction showed the best activity; this was confirmed by measuring the flavonoid and total phenolic content and by the DPPH assay [11]. Antioxidant activities of the roots, stems, leaves, and fruits of M.
maderaspatana were studied using acetone and methanol extracts, and the results showed that the methanolic extracts gave a higher yield than the acetone extracts [12]. The free radical scavenging activity of Melothria was also studied, and the leaves showed the maximum dose-dependent activity [13].

LARVICIDAL ACTIVITY
Chitra et al. tested the larvicidal activity of silver nanoparticles synthesized using an aqueous leaf extract against Culex quinquefasciatus and Aedes aegypti. The results showed that the synthesized silver nanoparticles have predominant larvicidal activity [14].

ANTIULCEROGENIC ACTIVITY
Gomathy et al. investigated the protective effect of an ethanolic extract of M. maderaspatana against indomethacin-induced gastric ulcer in rats. The results proved that the ethanolic extract of Melothria has the ability to decrease acidity and increase mucosal defence in the gastric area [15].

ANTIDIABETIC ACTIVITY
Srilatha and Ananda investigated the in vitro antidiabetic activity of the extract and of phenolics such as phloroglucinol and quercetin, and the results indicated that it can be used as an antidiabetic nutraceutical [16]. Balaraman et al. evaluated the antihyperglycemic effect of M. maderaspatana in streptozotocin (STZ)-induced diabetic rats and compared its activity with that of Coccinia indica [17]. Petrus also tested the antidiabetic activity of M. maderaspatana [18].

HYPOLIPIDEMIC ACTIVITY
The hypolipidemic effect of an aqueous extract of M. maderaspatana was studied in high-fat diet-induced rats, and the results showed a significant hypolipidemic effect [20].

ANTIHYPERTENSIVE EFFECT
Veeramani et al. investigated the antihypertensive effect of M. maderaspatana and identified phytochemicals such as caffeic, vanillic, ferulic, p-coumaric, coumarin, and gallic acid in the active fraction by gas chromatography-mass spectrometry [21]. The antihypertensive activity of an ethanolic extract of M. maderaspatana was studied in sham-operated and uninephrectomized DOCA-salt hypertensive rats, and the study concluded that the extract showed an antihypertensive effect [22].

IMMUNOMODULATORY ACTIVITY
Thabrew et al. studied the effect of an aqueous extract of M. maderaspatana on the human complement system, and the results indicated that the effects were dose dependent [23].

ANTIHEPATOTOXIC ACTIVITY
Jayatilaka et al. studied the potency of aqueous extracts of M. maderaspatana and Osbeckia octandra. They found that M. maderaspatana works more effectively in protecting the liver against CCl4-induced dysfunction [24]. Veeramani et al. tested the renal protective effect of the ethyl acetate (C4H8O2) fraction of M. maderaspatana leaf in uninephrectomized DOCA-salt hypertensive rats. They found that it controls renal damage and also plays a role in controlling blood pressure [25]. Hepatocyte damage was induced by galactosamine and tert-butyl hydroperoxide, and the protective effect of an aqueous extract of M. maderaspatana against the damage was tested. A decrease in the protective activity was found during post-treatment with increasing time of exposure to the toxin [26].

OTHER PROPERTIES
Iman et al. tested M. maderaspatana for antiplatelet activity. Various solvents (methanol, chloroform, ethyl acetate, and hexane) and the aerial parts of the plant were used to prepare the extracts. The results showed antiplatelet activity for all solvent extracts except chloroform; only about 50% activity was observed in comparison to aspirin [2]. Jayatilaka et al. tested the efficacy of M. maderaspatana on CCl4-induced changes in drug-metabolizing enzyme activity.
They concluded that the aqueous extracts of the plants lessened the CCl4-mediated reductions in aniline hydroxylase and p-aminopyrine N-demethylase activities [27]. Researchers also studied the effect of the ethyl acetate fraction of M. maderaspatana (EAFM) on membrane-bound ATPases in DOCA-salt-induced hypertensive rats. The results showed that administration of EAFM provided good blood pressure control and protected against deranged activity of membrane-bound ATPases in DOCA-salt-induced hypertensive rats [28]. Raja et al. studied the effect of M. maderaspatana leaf-tea consumption on blood pressure, lipid profile, anthropometry, fibrinogen, bilirubin, and albumin levels in patients with hypertension. They concluded that there was a gradual decrease in blood pressure as well as beneficial effects on the other parameters [29]. Subramani et al. synthesized silver nanoparticles (AgNPs) using M. maderaspatana and evaluated their antibacterial activity. The silver nanoparticles thus obtained showed highly potent antibacterial activity toward Gram-positive (Bacillus cereus) and Gram-negative (Escherichia coli, Pseudomonas aeruginosa, and Klebsiella sp.) microorganisms [30].

CONCLUSION
Plants are the most important source for exploring potentially useful structural compounds for developing new therapeutic drugs [31]. In recent years, the use of natural herbal products has gained worldwide attention. Many herbal products are claimed to assist in a healthy lifestyle [32]. M. maderaspatana, which is widely available in South India, has been used to treat various diseases. The present review reports the various pharmacological potentials that have been explored by different researchers. The active exploration of natural sources has provided new developments based on the understanding of complex mechanisms. Such exploration will lead to safe and effective pharmacological treatments.
2019-04-02T13:08:24.419Z
2017-08-01T00:00:00.000
{ "year": 2017, "sha1": "bffd027e31c98020cc85e5c3c16a7be710a47c06", "oa_license": "CCBYNC", "oa_url": "https://innovareacademics.in/journals/index.php/ajpcr/article/download/18964/11996", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "7bc7ccbd93bd7cd36e73af831caceb4b787d856e", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
15601362
pes2o/s2orc
v3-fos-license
Baryon Resonance Phenomenology

The Japan Hadron Facility will provide an unprecedented opportunity for the study of baryon resonance properties. This talk will focus on the chiral nonanalytic behaviour of magnetic moments exclusive to baryons with open decay channels. To illustrate the novel features associated with an open decay channel, we consider the ``Access'' quark model, where an analytic continuation of chiral perturbation theory is employed to connect results obtained using the constituent quark model in the limit of SU(3)-flavour symmetry to empirical determinations.

Introduction
The Japan Hadron Facility will present new opportunities for the investigation of baryon resonance properties. In particular, access to the hyperons of the baryon decuplet will be unprecedented. This talk serves to highlight the novel and important aspects of QCD that can be explored through an experimental program focusing on decuplet-baryon resonance phenomenology. To highlight the new opportunities, it is sufficient to address the magnetic moments of the charged ∆ baryons of the decuplet. The magnetic moments of these baryons have already caught the attention of experimentalists and hold the promise of being accurately measured in the foreseeable future. Experimental estimates exist for the ∆ ++ magnetic moment, based on the reaction π + p → π + γ ′ p. The Particle Data Group 1 provides the range 3.7-7.5 µ N for the ∆ ++ magnetic moment, with the two most recent experimental results of 4.52 ± 0.50 ± 0.45 µ N 2 and 6.14 ± 0.51 µ N 3 . In principle, the ∆ + magnetic moment can be obtained from the reaction γ p → π o γ ′ p, as demonstrated at the Mainz microtron. 4 An experimental value for the ∆ + magnetic moment appears imminent. Recent extrapolations of octet baryon magnetic moments 5,6,7 have utilized an analytic continuation of the leading nonanalytic (LNA) structure of Chiral Perturbation Theory (χPT) as the extrapolation function. The unique feature of this extrapolation function is that it contains the correct chiral behaviour as m q → 0 while also possessing the Dirac moment mass dependence in the heavy quark mass regime. The extrapolation function utilized here has these same features; however, we move beyond the previous approach by incorporating not only the LNA but also the next-to-leading nonanalytic (NLNA) structure of χPT in the extrapolation function. Incorporating the NLNA terms contributes little to the octet baryon magnetic moments, but it proves vital for decuplet baryons. The NLNA terms contain information regarding the branch point at m π = M ∆ − M N associated with the ∆ → N π decay channel and play a significant role in decuplet-baryon magnetic moments.

Leading and Next-to-Leading Nonanalytic Behavior
We begin with the chiral expansion for decuplet baryon magnetic moments. 8 The LNA and NLNA behaviour is expressed in terms of the charge q i and isospin I i 3 of each baryon and the functions G j (j = π, K), in which H describes the meson coupling to decuplet baryons and C describes octet-decuplet transitions. We take H = −2.2 and C = −1.2. Here we omit the Roper 8 as this transition requires significant excitation energy and is strongly suppressed by the finite size of the meson source. The octet-decuplet mass splitting δ N is assigned its average value, and we take f π = 93 MeV and f K = 112 MeV. The function F (δ, m, µ) takes different forms for m > |δ| and m < |δ|. Hence the LNA and NLNA behaviour of decuplet magnetic moments is given by a sum of these contributions weighted by chiral coefficients. The values of the above chiral coefficients, Eqs.
(5), describing the strength of various meson dressings of the ∆ baryons, are summarized in Table 1 for the four ∆ baryons of the decuplet.

Analytic Continuation of χPT
It is now recognized that in any extrapolation from the heavy quark regime (where constituent quark properties are manifest) to the physical world, it is imperative to incorporate the quark-mass dependence of observables predicted by χPT in the chiral limit. However, as results are often obtained using methods ideally suited to heavy quark masses, it is equally imperative for the extrapolation function to correctly reflect the behaviour of the physical observable in the heavy quark mass regime. An extrapolation function for the ∆-baryon magnetic moments satisfying these criteria is given in Eq. (6), where µ 0 and β are parameters optimized to fit results obtained near the strange quark mass and Γ(m π ) is taken from the chiral expansion for decuplet baryons, Eq. (7), in which m (0) K , F π and F K are constants defined to ensure that each of the terms A, B, C and D vanishes in the chiral limit. Utilizing relations provided by χPT, these constants can be chosen such that the four terms of Eq. (7) vanish in the chiral limit. Figure 1 presents a plot of each of the four terms of Eq. (7) (without the chiral coefficient pre-factors) as a function of m 2 π . Figure 2 presents a plot of the four terms summed with the appropriate weightings of Table 1 for each of the four charge states of the ∆. The extrapolation function of Eq. (6) is designed to reproduce the leading and next-to-leading nonanalytic structure expressed in Eq. (7) for expansions about m π = 0. Eq. (6) may be regarded as an analytic continuation of Eq. (7), preserving the constraints imposed by chiral symmetry and introducing the heavy quark mass regime behaviour to the extrapolation function. The LNA behaviour of Eq. (7) is complemented by terms analytic in the quark mass, with fit parameters µ 0 and β adjusted to fit additional constraints on the observable under investigation. Hence the extrapolation function guarantees the correct nonanalytic behaviour in the chiral limit. Further, as m π becomes large, Eq. (6) is proportional to 1/m 2 π . As m 2 π ∝ m q over the applicable mass range, the magnetic moment extrapolation function decreases as 1/m q for increasing quark mass, precisely as the Dirac moment requires. This extrapolation function therefore provides a functional form bridging the heavy quark mass regime and the chiral limit.

Results
The method employed to obtain our theoretical predictions is analogous to that presented in our previous analysis of octet baryon magnetic moments. 5 We take the established input parameters, the strange-constituent and strange-current quark masses (M s and c m phys s , respectively), together with the constituent quark model (CQM) formulas for the magnetic moments. These formulas are used to obtain two magnetic moment data points near the SU(3)-flavour limit, where the u and d quarks take values near the s-quark mass. To fit Eq. (6), which is a function of m π , to the magnetic moments given by the CQM in Eq. (11) with constituent-quark masses M u = M d = M i (i = 1, 2), we relate the pion mass to the constituent quark mass via the current quark mass. 5 Chiral symmetry provides m 2 π ∝ m q , i.e. m q = m phys q (m π /m phys π ) 2 , where m phys q is the quark mass associated with the physical pion mass, m phys π . From lattice studies, we know that this relation holds well over a remarkably large regime of pion masses, up to m π ∼ 1 GeV. The link between constituent and current quark masses is provided by M = M χ + c m q , where M χ is the constituent quark mass in the chiral limit and c is of order 1. Using Eq.
(13), this leads to a corresponding relation between the constituent quark mass and the pion mass. The link between the constituent quark masses M i and m π is thus provided by inverting these relations, where M s − c m phys s = M χ encapsulates information on the constituent quark mass in the chiral limit, and c m phys s provides information on the strange current quark mass. We use the established ratio c m phys s /M s . (The name of the Access quark model indicates its mathematical origins: Analytic Continuation of the Chiral Expansion for the SU(6) Simple Quark Model.)

Table 2: Theoretical predictions for the charged ∆ baryon magnetic moments; the fit parameters µ 0 and β are given for each scenario.

The only known experimental value for the ∆ baryon magnetic moments is the ∆ ++ moment; recent measurements 2,3 provide µ ∆ ++ = 4.52 ± 0.50 ± 0.45 µ N and µ ∆ ++ = 6.14 ± 0.51 µ N . Our fits are shown in Fig. 3 and the resulting predictions are summarized in Table 2. The interesting feature of these plots is the cusp at m 2 π = δ 2 N , which indicates the opening of the octet decay channel, ∆ → N π. The physics behind the cusp is intuitively revealed by the relation between the derivative of the magnetic moment with respect to m 2 π and the derivative with respect to the momentum transfer q 2 , provided by the pion propagator 1/(q 2 + m 2 π ) in the heavy baryon limit. Derivatives with respect to q 2 are proportional to the magnetic charge radius in the limit q 2 → 0. If we consider, for example, ∆ ++ → p π + with |j, m j ⟩ = |3/2, 3/2⟩, the lowest-lying state conserving parity and angular momentum will have a relative P-wave orbital angular momentum with |l, m l ⟩ = |1, 1⟩. Thus the positively charged pion makes a positive contribution to the magnetic moment. As the opening of the p π + decay channel is approached from the heavy quark-mass regime, the range of the pion cloud increases in accord with the Heisenberg uncertainty principle, ∆E ∆t ∼ ℏ. Just above threshold the pion cloud extends towards infinity as ∆E → 0 and the magnetic charge radius diverges. Similarly, (∂/∂m 2 π )G M → −∞. Below threshold, G M becomes complex and the magnetic moment of the ∆ is identified with the real part. The imaginary part describes the physics associated with photon-pion coupling in which the pion is subsequently observed as a decay product. It is the NLNA terms of the chiral expansion for decuplet baryons that contain the information regarding the decuplet-to-octet transitions. These transitions are energetically favourable, making them of paramount importance in determining the physical properties of ∆ baryons. The NLNA terms serve to enhance the magnitude of the magnetic moment above the opening of the decay channel. However, as the decay channel opens and an imaginary part develops, the magnitude of the real part of the magnetic moment is suppressed. The strength of the LNA terms, which enhance the magnetic moment magnitude as the chiral limit is approached, overwhelms the NLNA contributions, such that the magnitude of the moments continues to rise towards the chiral limit.

Figure 3: The extrapolation function fit for the ∆ ++ and ∆ + magnetic moments. The magnetic moments given by the CQM on either side of the SU(3)-flavour limit are indicated by dots (•) and the theoretical prediction for each baryon is indicated at the physical pion mass by a star (⋆). The only available experimental datum is for the ∆ ++ and is indicated by an asterisk ( * ). The proton extrapolation 5 (dashed line) is included to illustrate the effect of the open decay channel, ∆ → N π, in the ∆ + extrapolation. The presence of this decay channel gives rise to a ∆ + moment smaller than the proton moment.
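To make the constituent-quark-mass to pion-mass mapping used above concrete, the following is a minimal sketch based only on the stated relations m 2 π ∝ m q and M = M χ + c m q . All numerical values (M χ , c, the physical current quark mass, and the two constituent-quark masses) are illustrative assumptions, not the values used in the paper.

```python
# Sketch of the mapping used to place the two heavy-quark CQM magnetic-moment
# points on the extrapolation curve mu(m_pi).  The paper gives only the
# structure of the relations; the numbers below are illustrative assumptions.
import math

M_PI_PHYS = 0.140   # GeV, physical pion mass
M_CHI = 0.300       # GeV, assumed constituent quark mass in the chiral limit
C = 1.0             # dimensionless, "of order 1"
M_Q_PHYS = 0.006    # GeV, assumed light current quark mass at the physical point

def pion_mass_from_constituent_mass(M_i):
    """Invert M_i = M_chi + c*m_q together with m_q = m_q_phys*(m_pi/m_pi_phys)**2."""
    m_q = (M_i - M_CHI) / C
    return M_PI_PHYS * math.sqrt(m_q / M_Q_PHYS)

# Two constituent-quark masses near the strange quark mass (assumed values),
# giving the two data points to which the fit parameters mu_0 and beta are tuned.
for M_i in (0.450, 0.550):
    print(M_i, round(pion_mass_from_constituent_mass(M_i), 3))
```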
The inclusion of the NLNA structure into octet baryon magnetic moment extrapolations is less important for two reasons. The curvature associated with the NLNA terms is negligible for the N and Σ baryons and small for the Λ and Ξ baryons. More importantly one can infer the effects of the higher order terms of χPT, usually dropped in truncating the chiral expansion, through the consideration of phenomenological models. If one incorporates form factors at the meson-baryon vertices, reflecting the finite size of the meson source, one finds that transitions from ground state octet baryons to excited state baryons are suppressed relative to that of χPT to finite order, where point-like couplings are taken. In χPT it is argued that the suppression of excited state transitions comes about through higher order terms in the chiral expansion. As such, the inclusion of NLNA terms alone will result in an overestimate of the transition contributions, unless one works very near the chiral limit where higher order terms are indeed small. For this reason octet to decuplet or higher excited state transitions have been omitted in previous studies. 5,6,7 In the simplest CQM with m u = m d the ∆ + and proton moments are degenerate. However, spin-dependent interactions between constituent quarks will enhance the ∆ + relative to the proton at large quark masses, and this is supported by lattice QCD simulation results. 10 As a result, early lattice QCD predictions based on linear extrapolations 10 report the ∆ + moment to be greater than the proton moment. However with the extrapolations presented here which preserve the LNA behavior of χPT, the opposite conclusion is reached. We predict ∆ + and proton magnetic moments of 2.58 µ N and 2.77 µ N respectively. The proton magnetic moment extrapolation 5 is included in Fig. 3 as an illustration of the importance of incorporating the correct nonanalytic behaviour predicted by χPT in any extrapolation to the physical world. An experimentally measured value for the ∆ + magnetic moment would offer important insights into the role of spin-dependent forces and chiral nonanalytic behaviour in the quark structure of baryon resonances. Conclusion An extrapolation function for the decuplet baryon magnetic moments has been presented. This function preserves the leading and next-to-leading nonanalytic behaviour of chiral perturbation theory while incorporating the Dirac-moment dependence for moderately heavy quarks. Interesting nonanalytic behavior in the magnetic moments associated with the opening of the π N decay channel has been highlighted. It will be interesting to apply these techniques to existing and forthcoming lattice QCD results, and research in this direction is currently in progress. Experimental value exists only for the ∆ ++ magnetic moment where the two most recent results are µ ∆ ++ = 4.52 ± 0.50 ± 0.45 µ N and µ ∆ ++ = 6.14 ± 0.51 µ N . These values are in good agreement with the prediction of 5.39 µ N given by our AccessQM as described above. Arrival of experimental values for the ∆ + and ∆ − magnetic moments are eagerly anticipated and should be forthcoming in the next few years. More importantly, these techniques may be applied to the decuplet hyperon resonances where the role of the kaon cloud becomes important. We look forward to new JHF results in this area in the future.
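The degeneracy of the ∆+ and proton moments in the simplest CQM with m_u = m_d, referred to above, follows from the textbook SU(6) spin–flavour relations µ_p = (4µ_u − µ_d)/3 and µ_{∆+} = 2µ_u + µ_d (decuplet moments are simple sums of quark moments). The check below uses only these standard relations with Dirac-type quark moments and an illustrative constituent mass of 330 MeV; it is not the AccessQM calculation itself.

```python
def quark_moment(charge, mass_gev):
    """Dirac moment of a constituent quark in nuclear magnetons."""
    m_N = 0.9389  # nucleon mass, GeV
    return charge * m_N / mass_gev

# Illustrative constituent mass only; in the simplest CQM m_u = m_d.
mu_u = quark_moment(+2.0 / 3.0, 0.33)
mu_d = quark_moment(-1.0 / 3.0, 0.33)

mu_p        = (4.0 * mu_u - mu_d) / 3.0   # SU(6) octet relation
mu_delta_p  = 2.0 * mu_u + mu_d           # decuplet: sum of quark moments
mu_delta_pp = 3.0 * mu_u

print(mu_p, mu_delta_p)   # equal when m_u = m_d: both 3/2 * mu_u ≈ 2.8 mu_N
print(mu_delta_pp)        # ≈ 5.7 mu_N, to be compared with the Delta++ data
```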
Multifield dynamics of supersymmetric Higgs inflation in SU(5) GUT We study the Higgs inflation model realized in the supersymmetric SU(5) grand unified theory (GUT), focusing on its multifield dynamics and prediction of cosmological observables. The requirement for GUT symmetry breaking during inflation imposes tight constraints on the model parameters. We find, nevertheless, with an appropriately chosen noncanonical Kahler potential the model is in excellent agreement with the present cosmological observation. The effects from multifield dynamics is found to be minor and thus, unlike other similar supersymmetric implementation of nonminimally coupled Higgs inflation, the prediction of this model is robust against multifield ambiguities. I. INTRODUCTION The quest for a concrete particle theory realization of cosmological inflation continues to be a major theoretical challenge. Current experiments put stringent bounds on the amplitude of the tensor mode primordial fluctuations. To quote the results of the BICEP2/Keck Array and Planck collaborations [1], the primordial tensor-toscalar fluctuation ratio is constrained to be r 0.002 < 0.09 (1) (Planck TT+lowP+lensing+ext+BKP) at 95% confidence. The simple chaotic inflation models with a quadratic or quartic inflaton potential are disfavored by the observation. Instead, the R 2 -inflation model [2,3] and the nonminimally coupled Higgs inflation model [4,5], among others, have (re)surfaced as viable accounts of the early Universe. In particular, the Bezrukov-Shaposhnikov scenario [5] of nonminimally coupled Higgs inflation is an attractive proposal. This scenario is economical as there is no need to introduce a new field of unknown origin; the Higgs field that already exists in the Standard Model (SM) is responsible for inflation. The model also has strong predictive power as the physics at the inflationary scale is related to that of the collider scale through the renormalization group flow [6]. There is a controversy on the unitarity problem associated with the large nonminimal coupling required in this type of scenario [7][8][9]. This danger may be avoided, e.g. by considering the cutoff scale as field-dependent [10]. The Bezrukov-Shaposhnikov scenario of Higgs inflation assumes that the SM is valid all the way up to the energy scale of inflation. However, it is widely believed that the grand unification takes place at the energy scale of M GUT ∼ 10 16 GeV and there the physics is supposed to be described by grand unified theory (GUT). The tensor- * kawai@skku.edu † kimjinsu@skku.edu to-scalar ratio of r ≈ 0.05, for example 1 , implies that the Hubble parameter during inflation can be as large as H ∼ 10 14 GeV; this is closer to the GUT scale than to the electroweak scale, and thus inflation may be more appropriately discussed in the framework of GUT than in the SM. In view of the elegant gauge coupling unification in the presence of supersymmetry, a natural beyond-the-SM extension of the Bezrukov-Shaposhnikov scenario would be in supersymmetric GUT. Implementation of the nonminimally coupled inflationary scenario in supersymmetric GUT has been discussed e.g. in the supersymmetric SU (5) model 2 [13] and the supersymmetric Pati-Salam model [14]. By construction, these nonminimally coupled models employ a noncanonical Kähler potential to circumvent the supergravity η problem and at the same time to suppress the tensor mode fluctuations to be compatible with the observation (1). 
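The estimate quoted above — that r ≈ 0.05 implies a Hubble scale as large as H ∼ 10^14 GeV — follows from the standard single-field relations P_T = 2H²/(π² M_P²) and r = P_T/P_R together with the observed scalar amplitude. The snippet below simply makes that arithmetic explicit; A_s ≈ 2.2 × 10⁻⁹ is used as a representative value.

```python
import math

M_P = 2.44e18          # reduced Planck mass in GeV
A_s = 2.2e-9           # observed scalar amplitude (representative value)

def hubble_from_r(r):
    # P_T = 2 H^2 / (pi^2 M_P^2) and r = P_T / A_s  =>  H = pi * M_P * sqrt(A_s * r / 2)
    return math.pi * M_P * math.sqrt(A_s * r / 2.0)

print(f"H(r=0.05) ≈ {hubble_from_r(0.05):.2e} GeV")   # ~6e13 GeV, i.e. of order 1e14
print(f"H(r=0.09) ≈ {hubble_from_r(0.09):.2e} GeV")   # at the upper bound of Eq. (1)
```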
This type of model involves multiple scalars and in principle, the prediction for cosmological observables depends on the trajectory of the inflaton in the multidimensional field space. As pointed out in [15][16][17], there is a danger of tachyonic instabilities in undesired directions of the field space, but the instabilities may be removed and the inflaton trajectory can be controlled by further noncanonical terms in the Kähler potential [18] (see also [19][20][21][22]). Thus the trajectory of the inflaton is in general sensitive to the noncanonical terms, and naturally the prediction for the cosmological observables depends on the Kähler potential. Conversely, the current observational constraints may be used to restrict the parameter space of the Kähler potential [23]. In this paper we study the multifield dynamics of supersymmetric GUT-embedded nonminimally coupled Higgs inflation. Our main focus is on the model based on the minimal supersymmetric SU (5) GUT; this is the simplest Higgs inflation model in supersymmetric GUT 1 The BICEP2/Keck Array/Planck joint analysis [11] gives r = 0.048 +0.035 −0.032 at 68% confidence. 2 Nonminimally coupled SU (5) GUT Higgs inflation without supersymmetry is discussed in [12]. that involves symmetry breaking of a GUT group, and hence it serves as a prototype of GUT-based Higgs inflation models. The scenario was discussed in [13] using a crude single-field approximation. The purpose of the present paper is to analyze it appropriately as a multifield inflationary model and reinvestigate its predictions. This model differs from the similar supersymmetric Higgs inflation models implemented in the next-to-minimal supersymmetric Standard Model (NMSSM) [16] or the supersymmetric seesaw model [19][20][21], by the form of the Kähler metric, which is essential for the phenomenological consistency of the GUT model as the GUT symmetry needs to be broken. We investigate cosmological consequences of this feature. In the next section we start by reviewing the supersymmetric Higgs inflation model based on the SU (5) GUT [13]. For the sake of concreteness we focus on the two-field case arising from the supersymmetric minimal SU (5) GUT, and describe its construction in detail. We analyze the model in Sec. III and present the numerical results. In Sec. IV we conclude, with brief discussions on our results. The technicalities of the multifield inflationary dynamics are summarized in Appendix. A. II. HIGGS INFLATION IN SU (5) GUT We recall that the Georgi-Glashow SU (5) GUT [24] consists of the gauge field in 24, the GUT Higgs field in 24, the SM Higgs field in 5, N F (the flavor multiplicity) fermion fields in 10 and N F fermion fields in 5, in the representations of SU (5) GUT . The gauge and the SM Higgs fields decompose into the representations of the SM gauge group SU (3) c × SU (2) L × U (1) Y as Here, µ = 0, 1, 2, 3 is the spacetime index, a = 1, 2, 3 is the SU (2) L index, α = 1, 2, 3 is the color index, Φ T is the colored (triplet) Higgs and Φ D is the SM (doublet) Higgs. Since the color symmetry is unbroken in the SM vacuum, Φ T = 0. The SM Higgs vacuum expectation value (VEV) is Φ D = 246 GeV. The GUT Higgs field breaks the GUT symmetry down to the SM symmetry by giving GUT scale masses to the X, Y , X, Y fields. In the representations of SU (3) c × SU (2) L , the fermion fields are In the supersymmetric SU (5) GUT, there are two Higgs doublets H 1 ≡ H d and H 2 ≡ H u ⊃ Φ D . 
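The explicit decompositions referred to above did not survive extraction; for reference they are the standard ones. With the hypercharge normalised so that Q = T₃ + Y, the branchings read

```latex
\begin{align}
\mathbf{24} &= (\mathbf{8},\mathbf{1})_{0} \oplus (\mathbf{1},\mathbf{3})_{0}
             \oplus (\mathbf{1},\mathbf{1})_{0}
             \oplus (\mathbf{3},\mathbf{2})_{-5/6}
             \oplus (\bar{\mathbf{3}},\mathbf{2})_{+5/6}, \\
\mathbf{5}  &= (\mathbf{3},\mathbf{1})_{-1/3} \oplus (\mathbf{1},\mathbf{2})_{+1/2},
\qquad
\bar{\mathbf{5}} = (\bar{\mathbf{3}},\mathbf{1})_{+1/3} \oplus (\mathbf{1},\mathbf{2})_{-1/2}, \\
\mathbf{10} &= (\mathbf{3},\mathbf{2})_{+1/6}
             \oplus (\bar{\mathbf{3}},\mathbf{1})_{-2/3}
             \oplus (\mathbf{1},\mathbf{1})_{+1}.
\end{align}
```

In this language the colour-triplet Higgs Φ_T sits in (3,1)_{−1/3}, the doublet Φ_D in (1,2)_{+1/2}, and the X, Y (and X̄, Ȳ) gauge bosons fill the (3,2)_{−5/6} ⊕ (3̄,2)_{+5/6} components of the 24.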
The field contents of the minimal SU (5) model are one vector supermultiplet in 24 and 5 kinds of chiral supermultiplets: • Σ in 24 (GUT Higgs), • H in 5 (including H u ), • N F families of χ ij in 10 (including Q, u c and e c ), • N F families of η i in 5 (including L and d c ). The inflationary model we discuss involves only Σ, H, and H; we will neglect the vector multiplet 24 and the chiral multiplets χ ij (10) and η i (5) below. A. Superpotential for SU (5) GUT Higgs inflation We consider the GUT superpotential given by where µ, ρ, m, λ are real constant parameters. The scalar component of Σ is a traceless 5×5 matrix Σ i j . For the (almost) canonical Kähler metric the potential constructed from the second and the third terms of (4) has three distinct vacua: The first vacuum corresponds to the unbroken SU (5), the second corresponds to the spontaneous symmetry breaking SU (5) → SU (4) × U (1), and the third one to SU (5) → SU (3) × SU (2) × U (1). Obviously, for the SM particle physics to be realized after inflation we need the last configuration of the Σ field. We use a singlet chiral superfield S to write It can be easily verified that Tr( in which the bosonic parts of H c and H u are Φ T and Φ D in (2), the superpotential becomes In the GUT scale the SM Higgs field is almost massless, requiring The color Higgs fields must have vanishing VEV, H c = H c = 0 since the color symmetry is unbroken throughout the history. They are expected to have GUT scale masses and hence The conditions (9) Since S = 0, we must have Denoting v ≡ S , the conditions (9) and (12) give In the Higgs doublets the charged components can be consistently set to zero, Recalling that the contraction of the SU (2) doublets uses the SU (2) invariant iσ 2 = 0 1 This is the superpotential we shall use for the inflationary model. B. The cubic Kähler model For successful cosmological inflation the inflaton potential needs to satisfy at least the following three conditions: (i) sufficiently flat so that slow roll takes place; (ii) exhibits no tachyonic instabilities in the directions orthogonal to the desired trajectory; (iii) the inflaton trajectory settles at the SM vacuum after the slow roll. The difficulty to achieve (i) within supergravity is known as the η problem. One way to circumvent the η problem is to use a noncanonical Kähler potential 3 , and here we use, following [13], the Kähler potential K = −3 ln Φ, where The reduced Planck mass M P = 2.44 × 10 18 GeV is set to unity. As we shall see, the conditions (ii) and (iii) above will also be fulfilled if the real parameters ω and ζ are chosen appropriately. The ellipsis in the first line of (16) represents the canonical terms for the superfields other than Σ = 24, H = 5, H = 5. Canonical here is in the sense of the superconformal framework [26][27][28][29][30][31][32][33], in which the Kähler metric constructed from the superconformal Kähler potential K ≡ −3Φ becomes trivial. The terms in the second and the third lines are noncanonical. The term proportional to γ renders the potential to be flat, in a manner analogous to the nonminimal coupling in the SM Higgs inflation model. The quartic term (proportional to ζ) controls the tachyonic instability, and the cubic terms (proportional to ω) control the symmetry of the potential so that the SM vacuum can be reached after the slow roll. 
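The three symmetry-breaking patterns of the adjoint quoted earlier in this section can be checked directly from the F-flatness condition of the adjoint superpotential. Since Eq. (4) itself is not recoverable from the text above, the sketch below assumes the conventional normalisation W_Σ = (m/2) Tr Σ² + (ρ/3) Tr Σ³ (the identification of m, ρ with the parameters of Eq. (4) is only schematic, and a different normalisation merely rescales the VEVs); it verifies that Σ ∝ diag(1,1,1,1,−4) and Σ ∝ diag(2,2,2,−3,−3) solve the traceless stationarity condition, the latter being the SU(3)×SU(2)×U(1)-preserving vacuum needed after inflation.

```python
import sympy as sp

m, rho, v = sp.symbols('m rho v')

def f_term(sigma):
    """Traceless projection of dW/dSigma for the assumed form
    W_Sigma = (m/2) Tr Sigma^2 + (rho/3) Tr Sigma^3."""
    grad = m * sigma + rho * sigma**2
    return grad - sp.Rational(1, 5) * grad.trace() * sp.eye(5)

for pattern in ([1, 1, 1, 1, -4], [2, 2, 2, -3, -3]):
    sigma = v * sp.diag(*pattern)
    F = f_term(sigma).applyfunc(sp.expand)
    vev = [s for s in sp.solve(F[0, 0], v) if s != 0][0]      # discard Sigma = 0
    assert F.subs(v, vev).applyfunc(sp.simplify) == sp.zeros(5)
    print(pattern, "->  v =", vev)
# v = m/(3*rho) for the SU(4)xU(1) pattern, v = m/rho for SU(3)xSU(2)xU(1);
# Sigma = 0 (unbroken SU(5)) is the trivial third vacuum.
```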
The function Φ can be written using the component fields as Jordan frame To proceed, we consider the D-flat direction along H u -H d and parametrize it by a superfield ϕ as Further, it is convenient to rescale the scalar components of S and ϕ as 4 3 The η problem states that assuming the canonical Kähler potential, a generic superpotential and F-term supersymmetry breaking, the slow roll parameter η can never be O(1). Therefore it may be avoided also by considering D-term supersymmetry breaking or using a specially engineered form of the superpotential. See [25] for a review. 4 The normalization of s, ω, v differs from [13] by a factor of √ 2: The superconformal Kähler potential K is written in [13] as K. Note that K = −3 ln(−K/3) in this paper. With this normalization, the scalar-gravity part of the Lagrangian density takes the following form where g µν J is the inverse of the Jordan frame spacetime metric g J µν , R J is the scalar curvature in the Jordan frame and Φ =1 − 1 6 s 2 + 1 6 ωs 3 + ζ 12 The scalar potential V J in the Jordan frame is the F-term potential where the subscripts i and  denote differentiation with respect to the chiral and anitchiral superfields. In terms of the component fields its explicit form is where we have introduced v ≡ √ 2 v. Einstein frame To discuss cosmology it is convenient to bring the Lagrangian (20) to the Einstein frame in which the fields are minimally coupled to the gravity. By Weyl rescaling the metric g E µν = Φg J µν the Lagrangian in the Einstein frame reads where φ a = (s, h) and a = 1, 2. The scalar potential is In the Einstein frame the kinetic term for the scalar fields involves nontrivial field space metric The Christoffel symbol for the field space is computed from the metric G ab as where and The scalar curvature of the field space is In two dimensions the Riemann and the Ricci curvature tensors are written using the scalar curvature as C. The sextic Kähler model In the above setup we included the noncanonical term proportional to ω in the Kähler potential (16). The effect of this term is to enlarge the parameter space so that the inflaton trajectory is allowed to terminate at the SM vacuum (s, h) = (v, 0) [13]. For the same purpose we may alternatively consider the following Kähler potential: The term proportional to ω gives a sextic term in S, We take this as our second option of the Kähler potential that will be used in the supergravity embedding of the SU (5) GUT model. Jordan frame Using the same parametrization of the D-flat direction along H u -H d and the same normalization of the S-field, we find that the Lagrangian in the Jordan frame takes the same form (20), but now with where we have used The scalar potential in the Jordan frame reads Einstein frame After the Weyl transformation of the spacetime metric g E µν = Φg J µν the Lagrangian in the Einstein frame is written in the form of (26), with the scalar potential using (37) and the metric of the field space The Christoffel symbol of the field space is FIG. 1. The shape of the potential for ζ = 0 (left), ζ = 10 3 (center), and ζ = 10 4 (right) when the ω parameter is fixed to zero. The black square and the red circle are respectively the GUT vacuum and the SM vacuum. The ξ parameter is chosen to be ξ = 5285, which yields the Planck-normalized scalar power spectrum when ω = −116 and ζ = 10 4 . 
where The scalar curvature of the field space is We have seen above that assuming supergravity embedding with the Kähler potential either in the form of (16) or (32), the SU (5) GUT model with the superpotential (4) leads to a system of two scalar fields described by the Lagrangian (26). Note that in the limit of trivial s-field dynamics, that is, if we set s = v = 0, the Jordan frame Lagrangian (20) becomes with Φ = 1 + ξh 2 . This is the Lagrangian of the nonminimally coupled single field inflation model with a quartic self coupling, which has attracted much attention recently. This model predicts a small tensor-to-scalar ratio compatible with the Planck and the WMAP observations; see e.g. [34]. Since the field h above is identified as (the D-flat component of) the SM Higgs field, this model is considered as a realization of the Bezrukov-Shaposhnikov scenario of SM Higgs inflation [4,5] within supersymmetric SU (5) grand unification. The s field is a component of the GUT Higgs field and, since the GUT symmetry is broken in our world, phenomenological consistency does not allow the single-field limit s = v = 0. Hence an honest multifield analysis is mandatory if we are to make prediction based on this model. In the next section we present the results of numerical study of the multifield inflationary dynamics. The technicalities of the formalism we use are summarized in Appendix. A. III. NUMERICAL RESULTS In this section we discuss the multifield dynamics of the inflationary model introduced in the previous section. We first comment on the shape of the inflaton potential when ω = ω = 0 in (16) or (32) (in the ω = 0 limit these two models are identical). The potential in this case is not phenomenologically viable as the SM vacuum cannot be reached after inflation. We then investigate the cases for nonzero ω, first in the presence of the cubic term (16) and then the sextic term (32) of S in the Kähler potential; we will see that phenomenologically viable inflaton trajectories are allowed in both cases. Next, cosmological parameters, including the scalar spectral index, the tensor-to-scalar ratio, the isocurvature fraction and non-Gaussianity are computed. The formalism we use in our numerical code 5 is summarized in Appendix. A. For computation of the non-Gaussianities we use another set of numerical code developed in [23,37]. A. Inflaton potential of the SU (5) GUT model The inflationary model we study here includes 5 tunable parameters: ρ, λ, ξ, ω, and ζ. The first two of them concern the physics of grand unification and are O(1). For the sake of concreteness we shall set them to ρ = λ = 0.5 in the following analysis. The inflationary dynamics is not very sensitive to the values of ρ and λ [13]. The parameter ξ corresponds to the nonminimal coupling in the case of the SM Higgs inflation model and is crucial for the slow-roll dynamics. We fix this parameter by the Planck normalization of the fluctuation amplitude [1,38] A s = 2.207 × 10 −9 (TT, TE, EE + low P). The number of e-folds between the horizon crossing of the cosmic microwave background (CMB) scale and the end of inflation is chosen to be N e = 60. For a given inflaton trajectory, the end of inflation is characterized by the condition = 1, with the slow roll paremeter defined in (A27). Integrating the Hubble parameter backward in time from there along the inflaton trajectory for N e = 60 we locate the horizon crossing of the CMB scale. 
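Before turning to the full two-field numerics, it is useful to recall what the single-field limit quoted above (s = v = 0, Φ = 1 + ξh², quartic self-coupling) predicts. At large ξ this is the usual nonminimal-quartic attractor, whose Einstein-frame potential for the canonically normalised field is V(χ) ∝ (1 − e^{−2χ/√6})² in M_P = 1 units. The sketch below evaluates the standard slow-roll predictions of that limiting potential, mirroring the procedure just described (find the end of slow roll at ε = 1, then step back N_e = 60 e-folds); it reproduces the familiar n_s ≈ 0.96–0.97 and r ≈ 3 × 10⁻³ ballpark and is not a substitute for the two-field computation of Sec. III.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Large-xi limit of nonminimal quartic inflation: Einstein-frame potential
# V(chi) ∝ (1 - exp(-2*chi/sqrt(6)))**2 for the canonical field chi (M_P = 1).
a = 2.0 / np.sqrt(6.0)

def eps(chi):               # first slow-roll parameter, (V'/V)**2 / 2
    x = np.exp(-a * chi)
    return 0.5 * (2.0 * a * x / (1.0 - x)) ** 2

def eta(chi):               # second slow-roll parameter, V''/V
    x = np.exp(-a * chi)
    return 2.0 * a**2 * x * (2.0 * x - 1.0) / (1.0 - x) ** 2

def efolds(chi, chi_end):   # N = ∫ dchi / sqrt(2*eps)
    return quad(lambda c: 1.0 / np.sqrt(2.0 * eps(c)), chi_end, chi)[0]

chi_end = brentq(lambda c: eps(c) - 1.0, 0.1, 10.0)                # end of slow roll
chi_star = brentq(lambda c: efolds(c, chi_end) - 60.0, chi_end, 20.0)

n_s = 1.0 - 6.0 * eps(chi_star) + 2.0 * eta(chi_star)
r = 16.0 * eps(chi_star)
print(f"n_s ≈ {n_s:.3f},  r ≈ {r:.4f}")    # roughly 0.967 and 0.003 for N_e = 60
```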
Solving the evolution of the fluctuations forward in time from there we fix the ξ parameter by the condition that the adiabatic mode at the end of inflation is normalized by the Planck observation (44). Note that in multifield inflation the amplitude of the adiabatic mode at the end of inflation may differ from the value at the horizon crossing, due to the isocurvature effects. To see the effects of the remaining two parameters ζ and ω, let us look at the shape of the potential (Fig. 1, the left panel) when ζ = ω = 0. As our focus is on Higgs inflation realized in supersymmetric GUT, we are interested in the inflaton trajectory that lies along the direction of h. However, the potential is seen to exhibit tachyonic instability in the direction of s field and hence slow roll in the h direction will not take place. The instability is removed by including a quartic term ζ|S| 4 in the Kähler potential (Fig. 1, the center and the right panel). In general, such higher terms can exist in supergravity. While a larger value of ζ renders the inflaton potential more stable, there exists an upper bound of ζ in our con-text, as the zeros of the Kähler metric κ = 1 − 2ζs 2 introduce singularities of the potential at s = ±1/ √ 2ζ, beyond which the supergravity Lagrangian is unreliable. As the SM vacuum lies at s = v ∼ M GUT in our model, a scenario of inflation that ends up in the SM vacuum requires ζ < 1/2v 2 ( 7 × 10 3 , assuming v 2 × 10 16 GeV). Within this range of ζ, no inflaton trajectories terminating in the SM vacuum can be found. This problem may be solved by modifying the Kähler potential further, with the terms parametrized by ω [13]. In the following we study the two cases explained in Sec. II, namely the model (16) with an additional cubic S term, and the one (32) with an additional sextic S term. B. The cubic Kähler model In this case the nontrivial component of the Kähler metric κ is modified as (22). Its zeros are shifted to s = s ± ≡ (−ω ± 2ζ + ω 2 )/2ζ. These are the location of the singularity walls that are used to tame the tachyonic instabilities. It is easy to see that s ± are real when ζ > −ω 2 /2. As the GUT vacuum and the SM vacuum must be both between the walls, we impose s − < 0 < v < s + , leading to ζ > 0 and ω < 1 2v − ζv. The purpose of introducing the ω parameter has been to shift the walls so that the SM vacuum is favored over the GUT vacuum; in the cubic Kähler model as the parameters ζ and ω are varied. The initial value of the h field is chosen to be hinit = 0.12 and the parameter ξ is fixed to 5285 using the Planck normalization of the scalar power spectrum when (ζ, ω) = (10000, −116) and e-foldings 60. The Ne in the table is the e-folding number between hinit and h end . this implies ω < 0. Finally, s + − s − 1 to cure the tachyonic instabilities, which gives to ζ 1. Background solutions As our interest is in the Higgs inflation model realized in GUT, we focus on inflaton trajectories which are sufficiently straight along the SM Higgs h direction in the large field region. For this a large value of ζ is needed, and as an example of large enough ζ we choose ζ = 10000. As mentioned earlier, we fix the value of ξ using the Planck normalization of the density fluctuations. In doing so we solve the background equation fully numerically without using slow-roll approximation. See Appendix A 1 and A 3 for the details of the procedure. 
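The parameter constraints just derived are easy to verify numerically at the reference point of the cubic model. The snippet below uses the wall positions s_± = (−ω ± √(2ζ + ω²))/(2ζ) quoted above and the rough value v ≃ 2 × 10^16 GeV in reduced-Planck units; it confirms s_− < 0 < v < s_+ and ω < 1/(2v) − ζv for (ζ, ω) = (10⁴, −116).

```python
import numpy as np

M_P = 2.44e18                 # reduced Planck mass, GeV
v = 2.0e16 / M_P              # rough GUT-scale VEV in Planck units, as assumed in the text

def walls(zeta, omega):
    """Singularity-wall positions s_± of the cubic Kähler model (zeros of kappa)."""
    root = np.sqrt(2.0 * zeta + omega**2)
    return (-omega - root) / (2.0 * zeta), (-omega + root) / (2.0 * zeta)

zeta, omega = 1.0e4, -116.0   # reference point of the cubic model
s_minus, s_plus = walls(zeta, omega)

print(f"s_- = {s_minus:.4f},  v = {v:.4f},  s_+ = {s_plus:.4f}")
print("s_- < 0 < v < s_+ :", s_minus < 0.0 < v < s_plus)
print("omega < 1/(2v) - zeta*v :", omega < 1.0 / (2.0 * v) - zeta * v)
```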
For ζ = 10000 and ω = −116 (the reason for choosing this value of ω is explained below), we find that the Planck normalization (44) gives ξ = 5285. We use this trajectory as a reference case for the cubic Kähler model. To study the behavior of background solutions in the vicinity of this reference trajectory, we first vary the value of ω, keeping the values of ζ and ξ fixed. We find four types of numerical solutions for the inflaton trajectories, as shown in Fig. 2. In the figure, the initial value of h is chosen to be h = h init = 0.12 (which yields more than 60 e-folds in the reference case). The first type of trajectory, which we call the LW-type solutions, makes a turn after the slow roll and escapes through a hole in the s = s − wall, as depicted in Fig. 2a. The second type (the GUT-type) reaches the GUT vacuum after the slow roll, as shown in Fig. 2b. The third type, which we call the SM-type, is the phenomenologically viable one that reaches the SM vacuum s = v after the slow roll, as shown in Fig. 2c. The last one, the RW-type, is similar to the first but escapes through the wall at s = s + , as shown in Fig. 2d. We next change the value of the ζ parameter. The left panel of Fig. 3 shows how the four types of numerical solutions above are distributed when both ζ and ω are varied. For the ξ parameter we use the reference value ξ = 5285 throughout. The initial value for h is the same as above, h init = 0.12, and the initial value of s is chosen at a local minimum of the potential along the h = h init line. The solutions of the LW-type and the GUT-type, as well as the solutions of the SM-type and the RWtype, are seen to be mixed. In contrast, there is a clear line separating the solutions LW+GUT and the solutions SM+RW, which is found to be ω ≈ −0.0115 ζ − 1.00. The right panel of Fig. 3 shows the number of e-folds between h = h init = 0.12 and h = h end at the end of the slow roll, characterized by = 1. Away from the separatrix (45), the number of e-folds is seen to decrease; for such a trajectory, a larger value of h init is required to solve the horizon problem. The escape solutions LW and RW are numerical artifacts and should not be considered as (classical) physical solutions. At the holes in the walls, FIG. 5. The shape of the sextic Kähler potential for ω = 0 (left), ω = 10 6 (center), and ω = 10 7 (right) when the ζ parameter is fixed to zero. The black square and the red circle are respectively the GUT vacuum and the SM vacuum. The ξ parameter is chosen to be ξ = 6450, which yields the Planck-normalized scalar power spectrum when ω = 10 7 and ζ = −3000. in the (Jordan frame) potential V J (25) becomes indefinite. These holes are thus located at the points (s, h) = (s ± , 2λs ± (s ± − v)/3ρ). The walls are infinitely high except exactly at these points, and as these points are measure zero, the LW and RW solutions should respectively become the GUT and the SM solutions if the step size is sent to infinitesimal. Besides, the supergravity effective Lagrangian is unreliable near the singularity walls. As emphasized, phenomenologically viable inflaton trajectories are those reaching the SM vacuum after the slow roll. Since the trajectories that approach very close to the singularity walls are unreliable, we take solutions in the vicinity of the line (45) as benchmark cases of the cubic Kähler model. The left panel of Fig. 
3 shows that the GUT and SM solutions become sparse as ζ becomes small, indicating that obtaining a reliable numerical solution becomes increasingly difficult in this parameter region. The parameter values for the reference trajectory ξ = 5285 and (ζ, ω) = (10000, −116) are chosen so that it is a SM solution near the separatrix (45) that does not approach too close to the singularity walls. Note that away from this reference point, the ξ parameter for the solutions in Fig. 3 is not strictly Planck-normalized. Cosmological observables Let us now discuss the inflationary predictions for this model. For the reasons mentioned above, we focus on the benchmark inflaton trajectories that end up in the SM vacuum and lie close to the separatrix line (45) in the parameter space (ζ, ω). Concretely, we choose ω = −0.0115 ζ − 2.00, a line slightly below the separatrix line (45) 6 , and 2000 ≤ ζ ≤ 10000. Table I shows the field value h = h * at the horizon crossing for the e-folding N e = 60, h = h end at the end of slow roll, the e-folding number N e between h = h init = 0.12 and h = h end , the scalar and the tensor power spectra P R and P T , the scalar spectral index n s , the tensor-to-scalar ratio r and the local-type nonlinearity parameter f local NL for ζ = 2000, 4000, 6000, 8000, 10000. Fig. 4 shows the scalar power spectrum P R , the scalar spectral index n s , and the tensor-to-scalar ratio r plotted against the parameter ζ. To obtain these results, we have once again solved the background equation forward in time from the initial values, identified the end point of inflation at which = 1, solved the background equation backward in time from the end point of inflation to find the horizon crossing with the N e = 60 condition, and then computed the cosmological observables. See Appendix A 2 and A 3 for the technicalities and relevant formulas. We have also computed the power spectrum for the isocurvature mode P S , which is not shown in the table as it is found to be exponentially suppressed. This suppression is due to the relatively large mass for the s field introduced by the Kähler metric; it is found impossible to obtain a sensible inflaton trajectory without introducing a large mass for s in this model. The isocurvature fraction β iso is thus essentially zero for these parameter values, which is consistent with observations [1]. As the parameter ζ is varied, the scalar power spectrum P R changes somewhat; in view of the Planck 2015 (TT+TE+EE+lowP) results [1,38] 2.133 × 10 −9 ≤ P R ≤ 2.283 × 10 −9 (68% C.L.), (47) only the range ζ 8356.9 is observationally consistent but this certainly does not mean that lower values of ζ are not allowed in this model, as we have fixed the ξ parameter using the Planck normalization of P R at (ζ, ω) = (10000, −116). The spectral index n s and the tensor-to-scalar ratio r are, in contrast, found to be extremely insensitive to the change of ζ. These values of n s and r are well inside the Planck constraints [1,38], as well as the BICEP2/Keck Array/Planck results (1). The nonlinearity parameter f local NL (the local-type non- Gaussianity; see [23] for the detail of our numerical code based on the backward δN formalism) is found to be f local NL ∼ O(1), in the parameter region of interest. For ζ 6.3 × 10 3 , the nonlinearity parameter is outside the Planck constraints [39] but again this does not imply that the lower values of ζ are excluded as the ξ parameter may be readjusted. C. 
The sextic Kähler model Let us next consider the other case in which the Kähler potential includes the noncanonical sextic term. The nontrivial component of the Kähler metric is (35), which vanishes when s 2 = (−ζ ± ζ 2 + 9ω)/9ω. Thus κ = 0 at four values of s's which are (i) neither real nor pure imaginary when ω < −ζ 2 /9; (ii.a) all pure imaginary when −ζ 2 /9 < ω < 0 and ζ < 0; (ii.b) all real when −ζ 2 /9 < ω < 0 and ζ > 0; (iii) 2 real and 2 pure imaginary when ω > 0. To remove the tachyonic instabilities in the s direction, we must have κ = 0 at at least two real s's (these are the locations of the singularity walls). 7 These constraints are from the temperature data alone. Thus the parameter regions that are of our interest are (ii.b) −ζ 2 /9 < ω < 0, ζ > 0 and (iii) ω > 0. We focus on the (iii) case below. Fig. 5 shows the behavior of the potential as the parameter ω is varied, when ζ is fixed to zero. Similarly to the cubic Kähler case, we choose the reference point ζ = −3000 and ω = 10 7 in the parameter space, for which the Planck normalization of the scalar power spectrum gives ξ = 6450. We use this value of ξ throughout the study of the sextic Kähler potential model. ω = 10 7 is large enough to remove the tachyonic instabilities of the potential in the s field direction. For the trajectories shown in Fig. 6, the initial value of h is chosen to be the same as in the cubic case, h = h init = 0.12. Background solutions To study the behavior of the background solutions near the (ζ, ω) = (−3000, 10 7 ) solution, we solved, like in the cubic Kähler potential case, the background equations of motion fully numerically without slow-roll approximation. We found the four types of inflaton trajectories LW, GUT, SM and RW similar to the cubic case. Examples of these are shown in Fig. 6. The distribution of these four types numerical solutions in the parameter range of −3200 ≤ ζ ≤ −400 and 10 5 ≤ ω ≤ 10 7 are shown in the left panel of Fig. 7 TABLE II. The values of the field h at the horizon crossing h * and at the end of slow roll h end , the e-folding number Ne, the scalar and tensor power spectra PR and PT , the scalar spectral index ns, the tensor-to-scalar ratio r and the local-type nonlinearity parameter f local NL in the sextic Kähler model as the parameters ζ and ω are varied. The initial value of the h field is chosen to be hinit = 0.12 and the parameter ξ is fixed to 6450 using the Planck normalization of the scalar power spectrum when (ω, ζ) = (10 7 , −3000) and e-foldings 60. The Ne in the table is the e-folding number between hinit and h end . The right panel of Fig. 7 shows the the number of efolds between h init = 0.12 and h end at which the slow roll parameter becomes unity. Phenomenologically viable trajectories are those reaching the SM vacuum after the slow roll; they are below the separatrix (50). As emphasized in III B 1, the escaping behavior of the LW and RW solutions are numerical artifacts and they should be considered as the GUT-type and the SM-type solutions, respectively. Cosmological observables To compute the cosmological observables in this model, we adopt the same methodology as explained in III B 2. We focus on the parameter region near the separatrix (50) and change the value of ω. Concretely, we choose ζ = −2.96×10 −4 ω −112.8 and 4.3×10 5 ≤ ω ≤ 1.0×10 7 . 
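The same check can be made for the sextic model at its reference point (ζ, ω) = (−3000, 10⁷), using the condition quoted above that κ vanishes where s² = (−ζ ± √(ζ² + 9ω))/(9ω). One root for s² is positive, giving the pair of real singularity walls at s ≈ ±0.012 that bracket the SM vacuum v ≈ 0.008; the other root is negative, corresponding to pure-imaginary zeros, i.e. case (iii) above. The value of v is again the rough 2 × 10^16 GeV estimate in Planck units.

```python
import numpy as np

M_P = 2.44e18
v = 2.0e16 / M_P                        # rough GUT-scale VEV in Planck units

def kappa_zeros_s2(zeta, omega):
    """The two roots for s^2 at which the sextic-model Kähler metric vanishes,
    s^2 = (-zeta ± sqrt(zeta^2 + 9*omega)) / (9*omega), as quoted in the text."""
    disc = np.sqrt(zeta**2 + 9.0 * omega)      # real in the case omega > 0 considered here
    return (-zeta - disc) / (9.0 * omega), (-zeta + disc) / (9.0 * omega)

zeta, omega = -3000.0, 1.0e7            # reference point of the sextic model
for s2 in kappa_zeros_s2(zeta, omega):
    if s2 > 0:
        print(f"s^2 = {s2:.3e}  ->  real walls at s = ±{np.sqrt(s2):.4f}")
    else:
        print(f"s^2 = {s2:.3e}  ->  pure-imaginary zeros (no wall on the real s axis)")
print(f"SM vacuum at v ≈ {v:.4f} lies between the real walls.")
```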
This line is slightly below the separatrix line (50), as the parameters exactly on the separatrix do not always give the SM-type solutions in the grid search; we are only interested in the SM-type trajectories that are phenomenologically viable. The lower value of ω = 4.3×10 5 is chosen to yield at least 60 e-foldings with our initial conditions h init = 0.12 (see Fig. 7). In Fig. 8 the scalar power spectrum P R , its spectral index n s and the tensor-to-scalar ratio r are plotted for different values of ω (ζ is chosen to be on the line above and ξ is fixed). We again found that the isocurvature fraction β iso is negligible, for the same reason as in the cubic Kähler potential case. Table 8 shows the field value of h at the horizon crossing, the value of h at the end of inflation, the e-folding number N e between h = h init and at the end of the slow roll, the power spectra P R and P T , the scalar spectral index n s , the tensor-to-scalar ratio r and the local-type nonlinearity parameter f local NL for several sample values of (ζ, ω). The values of n s and r in the table are well within the constraints of the latest Planck experiment results [38] we well as the BICEP2/Keck Array/Planck joint results [1]. For lower values of ω the scalar power spectrum is seen to increase and goes outside the Planck constraints (47) for ω 52.4 × 10 5 , but this is not meant to be the lower bound of this parameter as the ξ parameter may be readjusted (recall that we have fixed ξ = 6450 at (ζ, ω) = (−3000, 10 7 ) using the Planck normalization). The local-type nonlinearity parameter is f local NL ∼ O(1) throughout the parameter region of interest and is marginally within the present observatoinal constraints (49). D. Summary We have seen in this section the behavior of the background inflaton trajectories and the prediction for the cosmological observables in the inflationary scenarios introduced in the previous section. For systematic parameter scan, we fixed the ξ parameter by the Planck normalization of the scalar power spectrum at special points in the parameter spaces: (ζ, ω) = (10000, −116) for the cubic Kähler case and (ζ, ω) = (−3000, 10 7 ) in the sextic Kähler case. We then varied the parameters ζ and ω, in the range of 2000 ≤ ζ ≤ 10000, −200 ≤ ω ≤ 0 for the cubic Kähler model, and −3200 ≤ ζ ≤ −400, 10 5 ≤ ω ≤ 10 7 for the sextic Kähler model. In both cases we obtained four types of numerical solutions: two types of runaway solutions LW and RW, and the one that ends up in the GUT vacuum and the other that ends up in the SM vacuum. The runaway solutions are due to the (unavoidable) pathological behavior of the numerical integration near the Kähler metric singularities; these walls, while infinitely high, become infinitesimally thin near the measure-zero pin holes. Since a classical trajectory cannot penetrate such a wall, these runaway solutions should be regarded as numerical artifacts. The observable parameters predicted in the cubic and the sextic Kähler models are quite similar. We have selected the phenomenologically viable and numerically well-behaved sets of background inflaton trajectories on the SM side of the separatrix between the GUT-and SMtype solutions in the ζ-ω plane. We then computed the scalar power spectra in the adiabatic and the isocurvature modes, the tensor power spectrum, the scalar spectral index, the tensor-to-scalar ratio, and the local-type nonlinearity parameter for these inflaton trajectories. 
A wellknown attractive feature of the Bezrukov-Shaposhnikov scenario of Higgs inflaiton is that the prediction for the scalar spectral index and the tensor-to-scalar ratio agrees remarkably well with the present observations [1,38], once the nonminimal coupling parameter ξ is fixed by the scalar power spectrum. This feature is found to persist in our supersymmetric GUT embedding, in both cubic and sextic Kähler potential cases. Supersymmetric GUT embedding necessarily involves multiple scalars and in principle the multifield effects may change the cosmological observables; we have found that such effects, in particular the isocurvature mode of fluctuations, are negligible in our model. The absence of the isocurvature mode is due to the large effective mass along the s field direction. We also found that the nonlinearity parameter is O(1). As we vary the parameters ζ and ω along the vicinity of the separatrix, the scalar power spectrum P R is found to deviate from the Planck-normalized value, while the spectral index n s and the tensor-to-scalar ratio r are insensitive to the change of these parameters. The isocurvature fraction β iso stays negligible and f local NL stays O(1), as long as the scalar power spectrum stays close to the Planck-normalized value. IV. DISCUSSION As a well-motivated and technically natural beyondthe-SM implementation of the Bezrukov-Shaposhnikov scenario, we have discussed, extending the work of [13], Higgs inflation in supersymmetric GUT in this paper. The supergravity η problem is avoided by using the noncanonical Kähler potential in the superconformal framework; the noncanonical term gives rise to the nonminimal coupling of the Higgs field as in the Bezrukov-Shaposhnikov scenario. We have considered the minimal SU (5) GUT model as a prototype of GUT and analyzed the model including multifield effects. The prediction for the scalar spectral index n s and the tensor-to-scalar ratio r is very similar to the Bezrukov-Shaposhnikov scenario of SM Higgs inflation and thus agrees very well with the present observations. This feature is found to be insensitive to the change of the model parameters ζ and ω, which may indicate some attractor mechanism, similar to the one studied recently in [40][41][42]. The prediction of cosmological parameters in this model is also robust against multifield ambiguities, as the isocurvature mode of the fluctuations is found to be negligible. In the supersymmetric SU (5) Higgs inflation model that we have studied, the non-Gaussianity (the local-type nonlinearity parameter) stays relatively small, reflecting the fact that the multifield effects are overall insignificant. In similar embedding of Higgs inflation in the NMSSM or in the supersymmetric seesaw, in contrast, the non-Gaussianity can be important [23]. Why are the effects less important in the SU (5) case? A salient feature of inflation models realized in grand unification is that the GUT symmetry is broken during inflation. In the scenario we have studied, this is related to the asymmetry of the inflaton potential in the singlet (the s field) direction; the requirement that the trajectory must reach the SM vacuum disfavors trajectories that typically produce large isocurvature modes and large non-Gaussianity, that is, trajectories that stay on a ridge of the potential for a while and then make a turn [43,44]. 
While symmetries of an inflaton potential is commonly imposed in simple toy examples of multifield inflationary models, one cannot expect high symmetries in generic inflationary models, such as in GUT scenarios or the stringy landscape scenario (see however [45] for a discussion in favor of symmetries in generic models). We have provided a concrete case study of a GUT scenario in this paper and found that multifield effects are not important. The results seem to indicate that discussions based on inflationary toy models tend to overestimate the multifield effects. Let us conclude by commenting on possible directions of further research. One direction is to investigate less trivial examples of GUT embedding. While the SU (5) GUT is widely recognized as a prototype of grand unification, it is certainly not an entirely satisfactory example as it suffers e.g. from the proton decay problem. While many of the features found here are expected also in other GUT models, quantitative consistency check on various phenomenological and cosmological bounds in concrete realistic scenarios is certainly desirable. Another important topic that we have not touched upon in this paper is (p)reheating after inflation 8 . Recent studies of nonminimally coupled multifield reheating based on simple inflationary toy models indicate that energy transfer due to parametric resonance is efficient [47], since the strong single-field attractor behavior persists during reheating and multifield de-phasing effects can be avoided. In our phenomenological example based on supersymmetric GUT, in contrast, the inflaton dynamics after the slow roll may exhibit irregular and chaotic behavior, due to the irregular shape of the scalar potential near the GUT and the SM vacua. Such irregular motion may lead to suppression of the resonance effects as in [48]. DoF) of SO(3). These include 2 gauge DoF in the vector and 2 gauge DoF in the scalar mode. The 2 DoF in the tensor mode represent the two helicity states of gravitational waves. After the horizon exit, the tensor mode fluctuations undergo no nontrivial evolution as they decouple from the scalar mode. In the absence of a vector source, the evolution of the vector mode fluctuations is trivial decay [56] and hence of no interest; we will not discuss them any further. The metric with scalar mode fluctuations A, B, E, ψ and tensor mode fluctuations h ij may be written as where a = a(t) is the scale factor and * |i ≡ ∂ * ∂x i is the spatial derivative with respect to the comoving coordinates. In multifield inflation with n (= 2 in our case) inflaton components, there are n+4 scalar DoF, 2 of which are the gauge DoF. Since there are two constraint equations, the physical scalar DoF is n. This indicates that the dynamics of scalar mode fluctuations may be studied by analyzing essentially the perturbed Klein-Gordon equations for the inflaton fields alone. The relevant equations of motion for the scalar perturbations are neatly expressed by using the covariant formalism [43, 49-52, 54, 55] as defined by The curvature and isocurvature perturbations at time t after the horizon exit are then given by For the appropriateness of the approximations used see e.g. [43,51,52,54,55]. Cosmological observables The power spectra of curvature and isocurvature perturbations after the horizon exit are given in terms of the transfer functions as The curvature and isocurvature power spectra at horizoncrossing are P R (k * ) = P S (k * ) = H * 2π where H * and * are evaluated at t * . 
The isocurvature fraction β iso then becomes The tensor power spectrum is (A35) It will not evolve in the superhorizon scales. We use the standard definition of the scalar spectral index, It is evaluated as n s = n s, * − α * + β * T RS sin(2∆) , where n s, * = 1 − 6 + 2η (A38) is the spectral index at the horizon crossing and the angle ∆ is defined by Finally the tensor-to-scalar ratio r ≡ P T P R (A40) is evaluated as (A41)
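The expressions referred to above as (A33)–(A41) did not survive extraction. For orientation, the standard transfer-function formulas that this appendix follows are summarised below in common conventions (M_P = 1); the paper's own normalisations may differ in detail.

```latex
% Standard horizon-crossing and transfer-function expressions (M_P = 1):
\begin{align}
  \mathcal{P}_{\mathcal{R}}(k_*) &= \mathcal{P}_{\mathcal{S}}(k_*)
      = \frac{1}{2\epsilon_*}\left(\frac{H_*}{2\pi}\right)^{2},
  &\mathcal{P}_{T} &= 8\left(\frac{H_*}{2\pi}\right)^{2},\\
  \mathcal{P}_{\mathcal{R}} &= \mathcal{P}_{\mathcal{R}}(k_*)\left(1+T_{\mathcal{RS}}^{2}\right),
  &\mathcal{P}_{\mathcal{S}} &= \mathcal{P}_{\mathcal{R}}(k_*)\,T_{\mathcal{SS}}^{2},\\
  \beta_{\rm iso} &= \frac{\mathcal{P}_{\mathcal{S}}}{\mathcal{P}_{\mathcal{R}}+\mathcal{P}_{\mathcal{S}}},
  &r &= \frac{\mathcal{P}_{T}}{\mathcal{P}_{\mathcal{R}}}
       = \frac{16\,\epsilon_*}{1+T_{\mathcal{RS}}^{2}},
\end{align}
% with n_{s,*} = 1 - 6\epsilon_* + 2\eta_* evaluated at horizon crossing.
```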
Role of Hormone-sensitive Lipase in Leptin-Promoted Fat Loss and Glucose Lowering Aim: Myriad biological effects of leptin may lead to broad therapeutic applications for various metabolic diseases, including diabetes and its complications; however, in contrast to its anorexic effect, the molecular mechanisms underlying adipopenic and glucose-lowering effects of leptin have not been fully understood. Here we aim to clarify the role of hormone-sensitive lipase (HSL) in leptin's action. Methods: Wild-type (WT) and HSL-deficient (HSLKO) mice were made hyperleptinemic by two commonly-used methods: adenovirus-mediated overexpression of leptin and continuous subcutaneous infusion of leptin by osmotic pumps. The amount of food intake, body weights, organ weights, and parameters of glucose and lipid metabolism were measured. Results: Hyperleptinemia equally suppressed the food intake in WT and HSLKO mice. On the other hand, leptin-mediated fat loss and glucose-lowering were significantly blunted in the absence of HSL when leptin was overexpressed by recombinant adenovirus carrying leptin. By osmotic pumps, the fat-losing and glucose-lowering effects of leptin were milder due to lower levels of hyperleptinemia; although the difference between WT and HSLKO mice did not reach statistical significance, HSLKO mice had a tendency to retain more fat than WT mice in the face of hyperleptinemia. Conclusions: We clarify for the first time the role of HSL in leptin's effect using a genetic model: leptin-promoted fat loss and glucose-lowering are at least in part mediated via HSL-mediated lipolysis. Further studies to define the pathophysiological role of adipocyte lipases in leptin action may lead to a new therapeutic approach to circumvent leptin resistance. HSL in the anorexic, adipopenic, and glucose-lowering effects of leptin using gene-targeted mice deficient in HSL (HSLKO). We followed the commonly-used methods of hyperleptinemia, i.e., injection of recombinant adenovirus carrying leptin (Ad-Leptin) 24) and continuous subcutaneous infusion of recombinant leptin by osmotic pumps 23) , which produces supraphysiological and near-physiological levels of hyperleptinemia, respectively. We clarify, for the first time, the role of HSL in leptin's effects on fat loss and glucose lowering. Animals HSL-deficient (HSLKO) mice 38) , which were backcrossed at least five times into the C57BL/6J background, were used in this study. Genotyping was performed as described previously 38) . Mice were housed in a temperature-controlled environment with a 12-h light/dark cycle and were allowed free access to water and a standard chow diet (Oriental MF, Oriental Yeast, Tokyo, Japan; CLEA Rodent Diet CE-2, CLEA Japan, Tokyo, Japan). Mice were maintained, cared for, and used in experiments in accordance with the regulations of the Animal Care Committees of the University of Tokyo. Construction of Recombinant Adenoviruses Recombinant adenovirus carrying mouse leptin cDNA under the control of the cytomegalovirus promoter, designated as Ad-Leptin, was constructed using the cDNA cloned by reverse transcription polymerase chain reaction (RT-PCR) from mouse liver as described previously 39) . Recombinant adenovirus containing the -galactosidase cDNA (Ad-LacZ) was used as a control. The recombinant adenoviruses were expanded in HEK293 cells and purified by cesium chloride ultracentrifugation. The purified viruses were stored in 10% (v/v) glycerol in phosphate buffered saline (PBS) at 80 . 
In our preparations, 1 multiplicity of infection (m.o.i.) corresponded to 25 particles of adenovirus per cell. In contrast to the well-characterized effect of leptin on feeding behavior, the mechanisms underlying the adipopenic effects of leptin 22) are not fully understood. Adipopenic effect of leptin is partly independent of its anorexic effect, as suggested by the observations that hyperleptinemic mice and rats lose body fat considerably more than their pair-fed control animals [23][24][25] . It has been suggested that leptin, in addition to its anorexic effect, activates sympathetic nerve systems (SNS) innervating white adipose tissue (WAT) to increase lipolysis in adipocytes 26) . Zeng et al. recently proved this to be the case 27) , by showing that genetic ablation of sympathetic inputs blocks leptin-stimulated lipolysis. They also demonstrated that leptin stimulates phosphorylation of hormone-sensitive lipase (HSL), a canonical triglyceride (TG) hydrolase in adipocytes, via SNS-catecholamine pathway; however, direct evidence is lacking whether HSL is necessary for the adipopenic effect of leptin. The glucose/insulin-lowering effect of leptin is also suggested to be independent of leptin's anorexic effect 24,28) . Efforts to narrow down the site in the central nervous system (CNS) that specifically mediates leptin's glucose/insulin-lowering effect have revealed the dominant contribution of the hypothalamic arcuate nucleus (ARH) [29][30][31][32] and ventromedial hypothalamic nucleus (VMH)-SNS-catecholamine pathway 33,34) , or possibly other sites 35) . In contrast, the "efferent" effectors by which CNS regulates glucose metabolism are largely unknown 36) . Peroxisome proliferator-activated receptor (PPAR) pathway is one of such candidates, as the adipopenic effect, as well as the glucose/insulin lowering effect of leptin, is abolished in PPAR -deficient (PPAR − / − ) mice 37) . Another candidate would be the molecule(s) downstream of SNS-catecholamine pathway such as adipocyte lipases because SNS is supposed to mediate both adipopenic 27) and glucose-lowering effects 33,34) of leptin; however, the contribution of adipocyte lipase(s) in leptin's glucose-lowering action has never been tested directly. Aim The purpose of this study is to define the role of HSLKO-Leptin: 554 54 ng/mL). Consequently, Ad-Leptin treatment suppressed food intake to a similar degree in both genotypes (Fig. 1A). The transient inhi-repeated in male mice to confirm the results. Biochemical Analyses Plasma levels of glucose were measured by ANT-SENSE II (Bayer Medical, Tokyo, Japan) or by Glutest Neo (Sanwa Kagaku Kenkyusho; Mie, Japan). Plasma levels of leptin and insulin were assayed with the mouse leptin and insulin enzyme-linked immunosorbent assay (ELISA) kits (Morinaga, Tokyo, Japan). Plasma levels of cholesterol (Determiner TC; Kyowa Medex, Tokyo, Japan), trigycerides (TG) and glycerol (TG LH; Wako Chemicals, Tokyo, Japan), and free fatty acids (NEFA C; Wako Chemicals) were measured enzymatically. Statistical Analyses All values are given as mean standard error (SE). Differences between groups were evaluated with Student's t -test, one-way or two-way ANOVA by STAT view, version 5.0, for Macintosh (SAS Institute), or by PRISM 5 for Mac OS X (GraphPad Software, Inc.). Leptin treatment declined plasma FFA levels from 346 to 95 µM ( 73%) in WT mice (P 0.05), but only moderately from 238 to 158 ( 34%) in HSLKO mice. 
The inhibition of food intake after hyperleptinemia was consistent with the previous reports 24,40). Body weight, which was similar between WT and HSLKO mice at the start of the experiment (WT: 18.6 ± 0.7 g; HSLKO: 19.9 ± 0.4 g), declined progressively in both genotypes to a similar degree (Fig. 1B). These data verified the efficiency of the method to overexpress leptin by adenovirus and revealed no difference between WT and HSLKO in terms of the anorexic effect of leptin.
HSL Contributes to the Leptin's Effect on Fat Loss Previous works have demonstrated that leptin has a specific adipopenic effect, which depletes adipose tissues completely in Ad-Leptin-induced hyperleptinemia 24,28,37,41). We next tested if this adipopenic effect of leptin requires HSL. After the injection of Ad-Leptin, there was a striking difference between genotypes in the appearance and weight of fat pads (Fig. 2). Consistent with previous reports 24), hyperleptinemia resulted in the disappearance of visible fat in WT mice (Fig. 2A), and the weight of parametrial white adipose tissue (WAT) declined from 104 to 6 mg (−94.5%) (Fig. 2B). In clear contrast, HSL-deficient mice retained substantial amounts of fat pads (Fig. 2A), and the weight of the parametrial fat declined from 154 to 26 mg (−83%) (Fig. 2B). Compared to WT mice treated with Ad-Leptin, HSLKO mice treated with Ad-Leptin retained 4.3 times more fat pads (Fig. 2B, P < 0.05). Similarly, subcutaneous fat remained significantly more (P < 0.01) in HSLKO;Ad-Leptin mice than in WT;Ad-Leptin mice (WT-LacZ: 186 ± 30 mg; WT-Leptin: 40 ± 3 mg; KO-LacZ: 233 ± 38 mg; HSLKO-Leptin: 59 ± 3 mg). In contrast to WAT, leptin treatment reduced the weight of brown adipose tissue (BAT) in both genotypes similarly with no difference between Ad-leptin treated WT and HSLKO (WT-leptin: 25 ± 2 mg; HSLKO-leptin: 28 ± 2 mg) (Fig. 2C). These results suggest that the adipopenic effect of leptin in WAT is mediated at least in part via the HSL-mediated lipolytic pathway.
HSL Contributes to the Leptin's Effect on Glucose Lowering We next tested if the metabolic effect of leptin is HSL dependent as well. Previous works have suggested that leptin has its specific effect on glucose metabolism, such as enhancing insulin sensitivity, at least partially independent of its anorexic effect 24, 28-30, 36, 42). As shown in Fig. 3A, plasma levels of glucose in ad lib gradually fell in WT mice after Ad-Leptin injection (P < 0.05, WT-LacZ vs. WT-Leptin), but were only moderately and non-significantly reduced in HSLKO mice. Plasma glucose levels on day 7 ad lib (Fig. 3A) were significantly decreased only in WT mice (WT- The plasma FFA levels most likely reflect the amounts of fat, a major source of plasma FFAs in the body (Fig. 3D). Other lipid parameters, such as TG (Fig. 3E) and total cholesterol (data not shown), were reduced by leptin treatment similarly in both genotypes without any difference between the genotypes. The fall in plasma FFAs and TG after Ad-Leptin treatment in WT mice was consistent with the previous reports 24,37). These results suggest that leptin's effect on glucose metabolism was at least partially dependent on HSL.
Contribution of HSL Depends on the Levels of Hyperleptinemia These data so far clearly demonstrate that HSL contributes to the leptin's effect on fat loss and glucose lowering, but not on food intake (Figs. 1, 2, and 3), in the setting of the supra-physiological levels of hyperleptinemia that induce severe fat loss. We next tested if HSL plays a role in leptin actions at near-physiological concentrations of hyperleptinemia. To this end, we utilized an osmotic pump model to continuously infuse leptin subcutaneously 28). As shown in Fig. 4A, infusion of leptin successfully increased plasma levels of leptin in both genotypes (WT-PBS: 0.7 ± 0.4 ng/mL; WT-Leptin: 20.9 ± 1.7 ng/mL; HSLKO-PBS: 1.6 ± 0.7 ng/mL; HSLKO-Leptin: 19.9 ± 1.0 ng/mL), with no significant difference between the genotypes. Food intake was suppressed similarly in WT and HSLKO mice (P < 0.05), confirming again that HSL does not contribute to the anorexic effect of leptin (Fig. 4B). In parallel with the suppressed food intake, body weights were reduced significantly after leptin infusion both in WT and HSLKO mice (P < 0.05, Fig. 4C). At this level of hyperleptinemia, body fat reduced significantly in WT mice (WT-PBS, 314 ± 1 mg; WT-leptin, 80 ± 3 mg) (Fig. 4D), but not as completely as in the Ad-Leptin model (Fig. 2B). Although HSLKO tended to retain more fat than WT mice after leptin infusion, the difference between genotypes did not reach statistical significance both in epididymal WAT (edWAT) (WT-Leptin: 80 ± 3 mg; HSLKO-Leptin: 134 ± 17 mg) and subcutaneous WAT (scWAT) (WT-Leptin: 108 ± 35 mg; HSLKO-Leptin: 174 ± 31 mg) (Fig. 4D). At this level of hyperleptinemia, leptin treatment did not significantly reduce the plasma levels of glucose in either genotype, either ad lib (Fig. 4E) or in fasted status (Fig. 4F). These results suggest that HSL-mediated lipolytic pathway contributes to leptin's effect on fat loss and glucose-lowering more dominantly at the higher levels of hyperleptinemia that cause almost complete fat loss (Figs. 1, 2, and 3) than at the lower levels of hyperleptinemia in otherwise healthy mice (Fig. 4).
Discussion Leptin contributes to the homeostasis of body fat by acting on a myriad of metabolic pathways, and leptin therapy is increasingly being used in a variety of disorders in humans. A precise understanding of the biological actions of leptin is warranted. Mainly acting in the brain, leptin inhibits food intake, stimulates adipocyte lipolysis 23,24,27), and improves glucose metabolism 24,[28][29][30]. The latter two effects are at least in part independent of its anorexic effect; however, the downstream effectors that increase adipocyte lipolysis and improve glucose metabolism have not been fully understood 27,36,37). The report herein identified HSL, an adipocyte TG lipase, as an efferent effector that partly confers the effect of leptin on fat loss (Fig. 2) as well as glucose lowering (Fig. 3), but not on suppression of food intake (Fig. 1). We also found that the role of HSL is more dominant at supra-physiological hyperleptinemia that elicited complete fat loss in WT mice (Fig. 2) than at near-physiological hyperleptinemia where the adipose depletion in WT is incomplete (Fig. 4). To our knowledge, this is the first study to clarify the role of adipocyte lipase in leptin's effect using a gene-targeted mouse model.
Compared to the well-defined action of leptin in regulating food intake, the molecular mechanisms underlying leptin's adipopenic effect are not fully understood. Previous work by Chen et al. found that leptin has a "specific" adipopenic effect, which is at least partially independent of its anorexic effect 24). By comparing Ad-Leptin treated rats versus Ad-LacZ treated rats pair-fed to Ad-Leptin treated rats, they found that Ad-Leptin treated rats lost body fat almost completely, whereas the pair-fed rats retained about 50% of the body fat 24). We used the same adenovirus model (Fig. 1), and demonstrated for the first time that this "specific" adipopenic effect of leptin (Fig. 2) is at least in part mediated by HSL. Adipopenic effect of HSL is independent of leptin's anorexic effect, as leptin similarly suppressed food intake both in WT and HSLKO mice (Fig. 1). Unger's group previously reported a similar phenomenon in PPAR deficient mice: They found that hyperleptinemia by Ad-Leptin depleted the fat mass in WT mice but not in PPAR−/− mice, despite the same degree of suppression of food intake in WT and PPAR−/− mice 37). As leptin upregulates mRNAs of PPAR and its target genes of fatty acid (FA) oxidation, they conclude that PPAR-mediated oxidation of FA is necessary for the adipopenic action of leptin. It is likely that leptin, on the one hand, stimulates lipolysis by activating adipocyte lipase(s) to liberate FAs, and on the other hand activates the PPAR pathway to oxidize FAs, collectively leading to the adipose depletion. The mechanism that leptin stimulates lipolysis has only recently been uncovered at the molecular level. Zeng et al. recently reported that leptin increases the phosphorylation of HSL and stimulates lipolysis in adipocytes, via sympathetic nerve fibers that innervate the adipose tissue 27). Disruption of sympathetic inputs to the fat pads either genetically, surgically, or pharmacologically, almost completely blocked leptin-stimulated phosphorylation of HSL. The role of SNS-catecholamine pathway in the leptin-mediated fat loss was further proved in vivo using mice deficient in dopamine β-hydroxylase (DBH−/−): delivery of leptin by osmotic pumps at a rate of 0.5 µg/h reduced fat in the wild-type mice but not in the DBH−/− mice 27). To compare the role of HSL with that of DBH in this context, we used exactly the same experimental condition in our experiments in Fig. 4. Although adenovirus model clearly demonstrated the contribution of HSL in leptin-mediated fat loss (Fig. 2), the osmotic pump model (0.5 µg/h for 7 days, Alzet #1007D) revealed only a nonsignificant tendency that HSLKO mice retain more fat than WT mice (Fig. 4D). We confirmed this result by repeating experiments using a different model of osmotic pumps for a shorter duration (0.5 µg/h for 3 days, Alzet #1003D). The difference between the two models could be due to the different route of leptin release: from the liver for the adenovirus model versus subcutaneous tissues for the osmotic pump model. Alternatively, the difference may result from the different levels of hyperleptinemia: ~550-600 ng/mL for the adenovirus model (as described in Results) versus ~20 ng/mL for the osmotic pump model (Fig. 4A). A likely explanation would be that HSL plays a significant role at higher levels of leptin, and other adipocyte lipases may play a more dominant role at lower levels of leptin. As adipocyte lipolysis is mediated not only by HSL but also by ATGL 45), TGH-1 46), or TGH-2 47), these lipases may play a dominant role in leptin-mediated lipolysis and fat loss downstream of SNS-DBH-catecholamine pathway. In this sense, it is of note that ATGL contributes more dominantly than HSL to cancer-associated cachexia, another model of severe fat loss 48). Interestingly, our data demonstrate that the contribution of HSL seems more dominant in WAT than in BAT (Figs. 2C and 4D), suggesting that the contribution of HSL and ATGL may differ in different types of adipose tissues. The fact that ATGL knockout, but not HSL knockout, is cold sensitive 38,49), and the fact that TG lipase activity is decreased in WAT (by 60%) but not in BAT of HSLKO mice 38), may suggest that ATGL plays a more dominant role in BAT. Further studies are warranted to clarify the contribution of each adipocyte lipase in the leptin-mediated fat loss.
Then, how leptin coordinately increases lipolysis via SNS and at the same time increases FA oxidation via PPAR? Increasing evidence suggests the physiological importance of lipolysis-PPAR axis: lipolysis activates PPARs by providing cognate ligand for PPARs. For example, ATGL-PPAR axis controls myriads of metabolic pathway in a variety of tissues: FAs derived from ATGL-mediated lipolysis regulate mitochondrial function in the heart via PPAR/PGC-1 50), maintain mitochondrial function in muscle via PPAR 51,52), promote mitochondrial function for insulin secretion in islet cells via PPAR 53), activate PPAR in hepatocytes 54), regulate intestinal lipid metabolism via PPAR 55), and regulate FA oxidation in BAT via PPAR and PPAR 56). HSL-PPAR axis controls lipogenesis and adipogenesis in adipocytes via PPAR [57][58][59][60]. The importance of HSL-PPAR axis in human physiology is recently highlighted from the discovery of human HSL null patients, who have partial lipodystrophic and diabetic phenotype, accompanying the downregulation of PPAR and its downstream target genes in adipose tissues 61).
The role of HSL in leptin's action is also suggested from the contribution of HSL in leptin-mediated glucose lowering (Fig. 3). In WT mice, Ad-Leptin improved glucose metabolism (Fig. 3A), which largely confirms the previous results 24,37). We also found that the glucose-lowering effect of Ad-Leptin was more striking when mice were fasted (Fig. 3B), suggesting that Ad-Leptin induced hypoglycemia by blocking gluconeogenesis. Currently, the precise mechanisms underlying leptin-induced fasting hypoglycemia in the adenovirus model is unclear. Changes in the counterinsulin hormones or transcription of gluconeogenic genes could not explain the fasting hypoglycemia of hyperleptinemic mice; we found rather increased levels of counter-insulin hormones such as glucagon and corticosterone, and increased mRNA levels of gluconeogenic genes, such as PGC1, G6Pase, and PEPCK, in WT mice treated with Ad-Leptin, most likely as compensatory responses to hypoglycemia (data not shown). Decreased availability of substrates for gluconeogenesis is another possibility; however, our preliminary data indicate that leptin-induced hypoglycemia is not rescued by supplying substrates for gluconeogenesis (unpublished observations). Considering the protection against the leptin-induced hypoglycemia in HSLKO mice (Fig. 3), it can be hypothesized that some fat-derived factor(s) or lipolysis-derived factor(s), which may correlate with fat mass, affect gluconeogenesis in liver posttranslationally. The milder hypoglycemic effect in the face of milder fat-loss at lower levels of hyperleptinemia (Fig. 4) may support this hypothesis. We are currently working to test this hypothesis of fat-gluconeogenesis axis of leptin's action. Nonetheless, our data reveal for the first time that adipocyte lipase(s) mediate leptin's glucose-lowering effect at least partially. Despite the broad therapeutic possibilities of leptin to normalize hyperglycemia as well as to reduce hypoglycemia in type 1 diabetes as an adjunct to insulin 15), leptin may have a potential adverse effect of severe hypoglycemia 16). Further studies are needed to precisely define the molecular mechanisms of leptin-mediated glucose lowering.
The major limitation of the current study is that we could not rule out the possibility that the observed phenotype in HSLKO mice is not due to the loss of HSL per se, but due to some changes secondary to HSL deficiency. For example, the protection from leptin-induced hypoglycemia in HSLKO mice could be secondary to the changes in fat mass as discussed in the aforementioned paragraph, although the change in fat mass comes from the presence or absence of HSL. The observed phenotype in HSLKO mice could also be secondary to some changes in gene expression coupled to HSL deficiency. For example, mRNA expression of ATGL is about 70% lower in WAT of HSLKO mice than WT mice 62), which is reproducible in our HSLKO mice as well (~88% lower than WT mice, Takanashi M., unpublished results). The lower expression of ATGL could potentially contribute to the phenotype in HSLKO mice. The exact contribution of each adipocyte lipase will be addressed in future studies using inducible, tissue-specific knockouts of these lipases, which is beyond the scope of this study. Nonetheless, our study is the first to clarify the role of HSL in the leptin-mediated fat loss and glucose lowering, opening up a fruitful area of research.
The study herein aimed to clarify the role of HSL in leptin's action at therapeutic doses. Our data demonstrate that HSL contributes to the adipopenic and glucose-lowering effect of leptin more dominantly at higher doses (Figs. 2 and 3). Next issue would be whether HSL confers the sensitivity to leptin in normal physiology or some pathological conditions, such as lipodystrophy, or type 1 and type 2 diabetes. Although we could not detect a significant contribution of HSL at a near-physiological dose of leptin in otherwise healthy mice (Fig. 4), this issue should be tested in other pathological models of obesity or diabetes. Decreased lipolytic activity may lead to obesity in the face of hyperleptinemia, so called leptin resistance, or may compromise the effect of leptin therapy 63). Conversely, stimulation of lipolysis (e.g., by direct activation of sympathetic inputs to adipose tissues 27)) may offer an alternative approach to induce fat loss and circumvent leptin resistance, a common feature of obesity.
Conclusion Our data, for the first time, demonstrate that HSL contributes to leptin-mediated fat loss and glucose lowering. Future studies are warranted to elucidate the contribution of HSL or other adipocyte lipases such as ATGL in the physiological and therapeutic actions of leptin, for better understanding and treatment of diseases, such as lipodystrophy, diabetes, and its complications.
Health care utilisation among older people with Down syndrome compared to specific medical guidelines for health surveillance: a Swedish national register study Background Specific medical guidelines for health surveillance exist for people with Down syndrome (DS) since 25 years but knowledge of adherence to the guidelines is lacking. The guidelines were developed to avoid unnecessary suffering from preventable conditions. The aims of the study were to investigate 1) planned health care visits in relation to the co-morbidities described in specific medical guidelines as a measure of adherence, 2) unplanned health care visits as a measure of potentially unmet health care needs and 3) gender differences in health care utilisation among older people with DS. Methods This register-based study includes people with DS (n = 472) from a Swedish national cohort of people with intellectual disability (n = 7936), aged 55 years or more, and with at least one support according to the disability law, in 2012. Data on inpatient and outpatient specialist health care utilisation were collected from the National Patient Register for 2002–2012. Results A total of 3854 inpatient and outpatient specialist health care visits were recorded during the 11 years, of which 54.6% (n = 2103) were planned, 44.0% (n = 1695) unplanned and 1.4% (n = 56) lacked information. More than half of the visits, 67.0% (n = 2582) were outpatient health care thus inpatient 33% (n = 1272). Most planned visits (29.4%, n = 618) were to an ophthalmology clinic, and most unplanned visits to an internal medicine clinic (36.6%, n = 621). The most common cause for planned visits was cataract, found at least once for 32.8% in this cohort, followed by arthrosis (8.9%), epilepsy (8.9%) and dementia (6.6%). Pneumonia, pain, fractures and epilepsy each accounted for at least one unplanned visit for approximately one-fourth of the population (27.1, 26.9, 26.3 and 19.7% respectively). Men and women had similar numbers of unplanned visits. However, women were more likely to have visits for epilepsy or fractures, and men more likely for pneumonia. Conclusions Increased awareness of existing specific medical guidelines for people with DS is vital for preventive measures. The relatively few planned health care visits according to the medical guidelines together with a high number of unplanned visits caused by conditions which potentially can be prevented suggest a need of improved adherence to medical guidelines. Background Longevity has increased dramatically in people with Down syndrome (DS) during the last decades. The main explanation is the progress in infant heart surgery, which has resulted in a decrease in mortality among children younger than one year of age from 40.8% in 1973 to 4.8% in 2003 [1]. However, living longer have consequences for age-related morbidity, with implications for health care and health surveillance for people with DS. The development of co-morbidities in the fourth to sixth decades of life is reported to be more prevalent among people with DS than in the population in general [2]. Thus, people with DS are more likely to need specialist care during their last years of life. Compared to the general population, many age-related changes in health and functional status, such as vision and hearing impairment, Alzheimer's disease, thyroid disorders and epilepsy occur at an earlier age among people with DS [3]. 
Visual and hearing impairment is common among people with DS already in early ages [4], and increases sharply after 40 years of age [2]. In fact, hearing impairment may increase to the extent that up to 100% experiences hearing loss at the age of 60 [5]. Epilepsy has been reported to occur among 40% before one year of age and among an additional 40% in older people with DS in connection with Alzheimer's disease [6]. Moreover, both Swedish and international population based studies have reported epilepsy to be a more frequent cause of death among people with DS than in the general population [7][8][9]. Additionally, epilepsy has been suggested to be the most prevalent cause for in-hospital mortality among people with DS [7]. The increased longevity for people with DS combined with the changed living context from institutions to living in the community, mostly in supportive housing, has changed the conditions for health care access for this population in Sweden. In the past, medical care was mainly provided in the institutions, whereas today, Swedish health policies state that people with DS should seek health care when needed in the same way as the general population. However, despite the special needs of people with DS, physicians with specialist education on adults or older people with DS are lacking, which, among other things, may jeopardise healthy ageing [2,10]. National health care guidelines or recommendations for health surveillance of co-morbidities of DS, including planned health care visits, have been developed in many countries, for children as well as for adults [4,[11][12][13]. Regular health checks, which have been introduced in several countries, may be one strategy to address the barriers that people with DS might encounter in seeking health care [14]. In Sweden, even though guidelines exist for adults with DS since 1991 and for children with DS since 1985 [4,11], there are no medical guidelines specific to older people that account for age-related diseases. There have been reports of poor compliance with existing medical guidelines for people with DS [15,16]. For example, although US guidelines state that obstructive sleep apnoea, atlanto-axial instability, and hearing or vision loss should be evaluated regularly in health care, less than 50% of adults with DS had been evaluated for any of these conditions during an 8.5-year period [15]. In the UK, one-third of a cohort of adults with DS had not had any medical assessment during the previous three years [17]. Even if Swedish specific medical guidelines for health surveillance of people with DS have existed for 25 years, adherence to the guidelines have not yet been investigated [1]. During the last decade, a few studies have focused on gender differences in people with DS with respect to morbidity [18], health care utilisation [19] and mortality [9]. Gender differences have been reported for epilepsy as a comorbid condition with dementia, being more prevalent in women with DS than in men with DS [18]. Furthermore, it is well-known that osteoporosis, and osteoporotic fractures are more common among older women than men in the general population [20]. This risk is even higher among women with DS as gonadal dysfunction, low hormone levels and early onset of menopause is more prevalent [21,22]. Regarding hospitalisation, men with DS have been reported to have longer inpatient stays than women with DS [19]. 
The gender differences in disease pattern and health care utilisation motivate the development of gender-specific health surveillance for older people with DS. Improving health surveillance is vital to enable healthy ageing in the older population with DS [23]. Health surveillance is reasonably planned health care beforehand, and adherence may be assessed by investigating planned health care visits in specialist care as many conditions require specialist knowledge to complement primary health care given by the general practitioner (GP). We previously reported that unplanned health care in somatic care exceeded planned visits for older people with intellectual disabilities with and without DS, and that older people with intellectual disabilities had fewer planned health care visits than the general population [24]. Studying health care utilisation as a method to investigate adherence to medical guidelines among older people with DS has not yet been performed. Therefore, health care utilisation patterns for people with DS in Sweden are not yet known, nor their connection to specific medical guidelines or reasons for unplanned visits. Taking into account that in Sweden, as in many other countries, there are no national registers of all forms of health care, we have used two reliable registers in this study that include all outpatient and inpatient specialist care and used them as proxy for health care utilisation. In fact, a national register for primary care is not available for this study. The aims of the present study were to investigate 1) planned health care visits in relation to the co-morbidities described in specific medical guidelines for people with DS as a measure of adherence to these guidelines, 2) unplanned health care visits as a measure of potentially unmet health care needs and 3) gender differences in health care utilisation among older people with DS. Methods The present study is register-based, using Swedish national registers to establish the study cohort as well as identify health care utilisation. Health care Health care in Sweden is funded by taxes. Mostly public but also private alternatives are available. The Health and Medical Services Act [25] regulates access to health care on equal terms for the entire population, based on an assessment of the needs of the individual. The first level of health care is primary health care. In order to receive second level care, i.e. specialist care, the patient usually needs a referral from the GP. For chronic conditions that requires specialist competence, the initial referral goes from the GP to a specialist clinic, but the subsequent contact between the patient and the specialist clinic is continued without any link to the GP. However, primary health care can keep the medical responsibility for general uncomplicated conditions [26]. People with DS, or their families, or staff in social service have themselves to initiate visits to the GP for an examination of a health problem. There are no formalised specialist health care services for intellectual disability or for DS. Swedish registers used in the study The Swedish intellectual disability services is based on the Swedish act concerning support and services for persons with certain functional impairments (Swedish abbreviation LSS) [27]. This act regulates measures of support and services for people with intellectual disability (including DS) and/or autism spectrum disorders. It is a right-based law that entitles adult people to apply for any number of 8 specified services. 
All support and service provided according to the LSS act is recorded in a register (LSS register). The National Patient Register (NPR) includes records for all visits made in inpatient and outpatient specialist health care. For each visit, the register includes one primary and up to 21 secondary diagnoses determined by the responsible physician at discharge from hospital or outpatient visits, and coded according to the International Statistical Classification of Diseases and Related Health Problems 10th Revision (ICD-10). Both the LSS register and the NPR are based on mandatory registration and are maintained by the Swedish National Board of Health and Welfare, a public authority with commissions from the government. Study cohort All people who had received at least one form of LSS support and service during 2012 and who were alive and at least 55 years old at the end of that year (n = 7936) were included in the original cohort. For this study, we selected those who had at least one diagnosis of DS (ICD-10 code Q90) recorded in the NPR during 2002-2012. Thus, the study cohort in the present study included 472 older people with DS, 247 men (52%) and 225 women (48%). Their mean age at the start of the 11-years study period, i.e. 1 January 2002, was 49.7 years (range 44-75). The majority of those included (n = 438, 93%) had a diagnosis of unspecified DS (Q90.9). Outcomes Data on health care utilisation and diagnoses were collected from the NPR for the period 2002-2012. Primary diagnoses (i.e. the main causes for the health care contacts) were considered for planned health care visits in relation to co-morbidities as a measure of guideline adherence and unplanned visits as a measure of potentially unmet health care needs. We specifically investigated the presence of known comorbidities listed in the Swedish specific medical guidelines together with the more recently published guidelines for Norway [11][12][13] (Appendix 1). Diagnoses were assessed on ICD-10 block level and as single diagnoses. For planned health care utilisation, we also investigated the following disease groups based on present medical guidelines for people with DS: arthrosis in different parts of the body (M15-M19), atlanto-axial instability (S13. Statistics Descriptive data on diagnoses and diagnostic groups (as described above) are presented for planned and unplanned visits separately, for the total study cohort as well as stratified by sex. For each diagnosis, we present 1) the number of people with this diagnosis as the primary diagnosis (i.e. cause for the visit) at least once during the study period and 2) the number of visits with this diagnosis as the primary diagnosis. Only diagnoses recorded for at least 10 people are included. Gender differences were investigated with respect to a) number of visits, using the Mann-Whitney U test, as the data were skewed, and b) having at least one visit during each year, using generalised linear models (GLMs) with calendar year indicating repeated measures, estimating relative risks (RRs) with 95% confidence intervals (CIs). All analyses were performed in IBM Statistics SPSS 23.0. P-values below 0.05 were considered statistically significant. Health care characteristics All 472 individuals had at least one registration recorded in inpatient or outpatient specialist health care during the study period, namely, the visit by which we identified the person as having DS. 
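As an illustration of the cohort-selection logic described in the registers and study cohort paragraphs above, the following Python sketch shows how recipients of LSS support in 2012 who were at least 55 years old could be linked to NPR records and restricted to those with at least one DS diagnosis (ICD-10 Q90) during 2002-2012. The file names and column names are assumptions for the example only; the actual selection was performed on the national register data held by the Swedish authorities.

```python
import pandas as pd

# Hypothetical register extracts; the real data are held by the register authorities.
lss = pd.read_csv("lss_2012.csv")        # one row per person with any LSS support in 2012
npr = pd.read_csv("npr_2002_2012.csv")   # one row per inpatient/outpatient specialist visit

# Criterion 1: alive and at least 55 years old at the end of 2012,
# with at least one LSS support during 2012 (all rows in this extract).
lss_55plus = lss[(lss["alive_2012_12_31"] == 1) & (2012 - lss["birth_year"] >= 55)]

# Criterion 2: at least one Down syndrome diagnosis (ICD-10 Q90) recorded
# in the NPR during 2002-2012, in any diagnosis position.
diag_cols = [c for c in npr.columns if c.startswith("diagnosis")]
has_q90 = npr[diag_cols].apply(lambda col: col.str.startswith("Q90", na=False)).any(axis=1)
ds_ids = npr.loc[has_q90, "person_id"].unique()

cohort = lss_55plus[lss_55plus["person_id"].isin(ds_ids)]
print(len(cohort))  # the paper reports n = 472
```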
DS was recorded as the primary diagnosis, i.e., the main cause for the planned visit, at least once for 22.9% (n = 108) of the individuals, whereas intellectual disability (ICD-10 code F70-F79) was the primary diagnosis at least once for 11.7% (n = 55). For unplanned visits, DS was recorded as the main cause for the visit at least once for 7.0% (n = 33) and intellectual disability for 1.9% (n = 9) of the individuals (n = 472). Regarding health care visits, totally 3854 registered visits were identified for the 472 individuals included. Of these, 54.6% (n = 2103) were planned, 44.0% (n = 1695) were unplanned, and 1.4% (n = 56) lacked information for these two types of visits. More than half of the visits, 67.0% (n = 2582) were outpatient health care thus inpatient 33% (n = 1272). Inpatient care visits represented 9.9% (n = 209) of the planned visits and 64.5% (n = 1094) of the unplanned visits. All 56 visits that lacked information on whether they were planned or unplanned were made in outpatient specialist care. The most frequently visited department was the internal medicine department, to which 791 visits (20.5%) were made (Table 1). Of these, 38.7% were outpatient health care visit, and 19.8% were planned. Inpatient care in the internal medicine department were 61.3, and 78.5% were unplanned visit. When aggregating diagnoses according to known problems described in the specific medical guidelines (Table 2), the most common cause for a planned visit was cataract, followed by epilepsy, arthrosis and dementia. Unplanned visitspotentially unmet health care needs The most common cause for an unplanned visit, on the ICD-10 block level, was influenza and pneumonia (J09-J18), followed by general symptoms and signs (R50-R69; Appendix 2). For the aggregated diagnoses described above, pneumonia, pain and fractures each corresponded to at least one unplanned visit for more than one-fourth of the study cohort (Table 3). A fracture of femur was the most common type of fracture (S72, 11.2%, n = 53, with 62 visits); the most common type of pain was abdominal pain (R10, 13.6%, n = 64, with 143 visits). Gender differencesdiagnosis and type of health care visit Regarding primary diagnosis (main cause) of the planned health care visit, more women than men have had at least one planned visit for cataract, dementia, epilepsy, and arthrosis (Table 2). Regarding the four most frequent primary diagnoses (Table 3), fractures accounted for at least one unplanned visit for almost one third of the women and about one fifth among men, and epilepsy, including seizure, accounted for at least one unplanned visit for one-fourth of the women and one eighth among men. In contrast, more men than women had had at least one unplanned visit for pneumonia (Table 3). Regarding the number of total health care visits, there were no statistically significant differences between men and women (p = 0.200), men had a median of 5 visits (range 1-38) and women 6 (1-184). Men had fewer number of visits to specialist outpatient health care (p = 0.013), while the number of inpatient health care visits was similar. Also, whereas the number of unplanned visits was similar for men and women (p = 0.260), women had more planned visits than men (p = 0.002). A total of 17.4% men (n = 43) and 10.7% (n = 24) of women had no planned visits at all. 
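To make the gender comparisons above concrete, here is a minimal sketch of the two analyses described in the Statistics paragraph. The paper used IBM SPSS 23.0; this Python version with SciPy and statsmodels is only an illustrative equivalent, the data frames and column names are assumptions, and the Poisson GEE with clustering on person is one common way to estimate relative risks for a repeated binary outcome.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import mannwhitneyu

# visits: one row per person, with the total number of planned visits and sex ("M"/"F")
u_stat, p_value = mannwhitneyu(
    visits.loc[visits["sex"] == "M", "n_planned"],
    visits.loc[visits["sex"] == "F", "n_planned"],
)
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")

# person_year: one row per person and calendar year, with a binary indicator of
# having at least one planned visit that year. A Poisson GEE with a log link and
# exchangeable within-person correlation yields relative risks for men vs. women.
model = sm.GEE.from_formula(
    "any_planned ~ male",
    groups="person_id",
    data=person_year,
    family=sm.families.Poisson(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
rr = np.exp(result.params["male"])          # e.g. the paper reports RR 0.92
ci = np.exp(result.conf_int().loc["male"])  # with 95% CI 0.86-0.99
```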
The regression analyses confirmed that men were nearly as likely as women to have at least one yearly visit in outpatient specialist care (RR 0.96, 95% CI 0.91-1.02) and inpatient care (RR 1.07, 95% CI 0.97-1.18). Men were also equally likely to have at least one yearly unplanned visit (RR 1.06, 95% CI 0.97-1.15), but they were somewhat less likely to have at least one planned visit (RR 0.92, 95% CI 0.86-0.99). Discussion More than half of the health care visits were planned (assessed as a proxy for adherence to specific medical guidelines) and almost half were unplanned (assessed as a proxy for unmet needs of preventive interventions) during the 11-year study period. The number of planned visits must be regarded as surprisingly low with respect to the existing specific medical guidelines. The most common causes for a planned visit were cataract, arthrosis, and epilepsy, followed by dementia, which is reasonable from the perspective of the existing medical guidelines and known comorbidities. However, due to the observed time period and the few visits made at the individual level, health surveillance needs to be further evaluated for older people with DS, not least from the perspective of an ageing population. A large proportion of these older people with DS had at least one unplanned visit caused by fractures, pain, pneumonia or epilepsy. As these are well-known conditions, this could at least partly illustrate a lack of adherence to specific medical guidelines and preventive measures. If so, this finding is troublesome, given the extra burden that an unplanned visit to a health care provider can entail for the person, especially in this group with decreased cognitive and communication ability [2]. Overall, health care utilisation was similar between men and women. However, men had fewer outpatient specialist visits and fewer planned visits than women. This result is not consistent with the few previous studies that have reported differences in health care use between men and women with DS, such as one study that reported longer hospital stays for men compared with women with DS [19,28] and slightly more hospital admissions [28]. However, the previous studies included people at younger ages and no outpatient data. Further research is needed before a conclusion can be drawn about possible differences in access to as well as use of health care for men and women with DS, which is an issue in many countries. Planned visits - adherence to specific medical guidelines Almost 20% of the men and just over 10% in total in our study had not had any planned visits at all during the 11 years. It is well known that morbidity increases with age in DS more than it does in the general population [2,5]. Thus, this result is unexpected considering the existing medical guidelines [11][12][13]. Australia has a health care system similar to Sweden's in that primary health care and the GP are the first contact for the population for non-emergency health care [29]. Another study from Australia reports that people with ID receive fewer referrals to specialist care from their GPs in primary health care compared to people without ID [30,31]. Weise and colleagues identified from previous research several barriers to accessing the health care system for people with ID [29].
These were an illequipped health workforce, care staff untrained to recognise signs of common physical and mental ill-health and therefore missed necessary subsequent actions, diagnostic difficulties including diagnostic overshadowing of ID and low health literacy among people with ID. To the best of our knowledge, there are no Swedish studies on this issue. The most frequent cause for planned visits was age-related cataract, which occurred for about onethird of the individuals who had at least one planned visit. Specific medical guidelines recommend a visit to an eye health care specialist at least every fifth year for people with DS [11][12][13]. In addition, earlier reports have shown the prevalence of eye problems increasing with age [23] and reaching around 60% in older people with DS [2,16]. Thus, the number of recorded visits due to cataract was far below what would have been expected if compliance to the specific medical guidelines. One possible reason for fewer visits may be that the responsibility within the health care system is not established for these follow-ups which for certain persons with communication difficulties, such as people with DS, might be crucial for visits to be made [2]. Decreased sight has many negative consequences in everyday life, and the prevalence of visual impairment increased with the severity of ID and with age [32]. Therefore, the older persons with DS need to be invited for a regular follow-up not least to the extent that the medical guidelines recommend. There were few recorded planned visits for individuals in this study with a recorded primary diagnosis such as cervical spondylosis (CS) or heart problems [5]. Capone and colleagues report in a recently published systematic review an estimated prevalence of 60% of adults with DS that need health surveillance caused by cervical spondylosis and 35% caused by previously repaired or uncorrected congenital heart diseases (CHD) [5]. In the general population, the clinical presentation and manifestation of CS vary, and a safe diagnosis require multidimensional medical assessment [33]. Although CHDs among people with DS are well-known and currently well-treated at young ages, specific medical guidelines recommend that more attention should be given to the elevated risks of those with DS both with and without CHDs of developing cardiac morbidities later in life, such as mitral valve prolapse and pulmonary arterial hypertension [13,34]. We have previously reported a higher prevalence of heart failure among people with ID (including people with DS) compared to the general population when including all secondary diagnosis from health care visits [35]. CHD and cervical spondylosis are common causes to specialist health care visits, and the few recorded visits in this study need to be further investigated. However one possible reason could be the high rate of early death and that people with DS in this study, were all alive at the end of the study period, which might be the healthiest ones as they have survived to this high age [9]. In our previous national study, we found that people 42 years and older with DS had 11 times higher mortality risk than a matched control population. They also died earlier compared with people with other intellectual disability diagnoses, with the mean age at death being 63.5 years in those with DS, compared with 72.1 years in those with other intellectual disability diagnoses and 76.2 years in the control population [9]. 
Orthopaedic problems such as arthritis are painful, and in the more recent specific medical guidelines from Norway yearly follow-up is recommended [13]. Only one-third of the study cohort had visited an orthopaedic clinic during the study period. We have previously reported that people with intellectual disabilities are less likely to have prescriptions for nonsteroidal anti-inflammatory drugs (NSAIDs) [36,37] and people with falls are more likely to be treated at other departments than orthopaedic as inpatients [38]. A Finnish population-based study reported that 20% of patients with DS in an age group over 30 years had had at least one orthopaedic surgery during their life, including fractures and dislocations [16]. Several publications have reported positive results with pain-relief and improved function from hip surgery in people with DS [39,40]. Only a small number in this cohort had any reported orthopaedic surgery for joint implants which call for studies investigating potential needs for future improvements for this target group. It is essential that planned visits for older people with DS includes the assessment of the risk of developing dementia. Previous studies have found the prevalence of dementia to be low before the age of 30 years and then increase to a prevalence of 70-80% for those over 65 years [2,41]. The possible explanations of the discrepancy to the few registrations of dementia in this study is that we investigated only specialist health care and that the diagnoses are often made in primary health care. The relatively low amount of visits at neurological or in geriatrics and geropsychiatric clinics can explain the few registrations of dementia in this study. Leaders at group homes and day program services in Sweden have expressed a wish for ageing people with intellectual disabilities to have the opportunity to see a physician at least once a year [42]. This has been introduced in several countries, and others have reported the need for such visits [13,17,43]. Johansson and colleagues [42] reported a lack of experience and competence among staff in detecting the need for assistive devices or increased care. Based on the time since the institutions were closed in Sweden and that the cohort in the present study comprises older people with DS it is reasonable to expect that the absolute majority lived in supported housing such as group homes during the study period. Within our Swedish national cohort of older people with intellectual disabilities, 76% of all people were supported in group homes in 2012 (n = 7936, unpublished data). Unplanned visitspotentially unmet health care needs Unspecified abdominal pain was the most common pain diagnosis at unplanned health care visits. We have previously found that diagnoses of visceral pain and pain related to the urinary system were more common among people with intellectual disabilities than in the general population [37]. If the specific medical guidelines were updated with recommended regular screening for e.g. gastrointestinal disorders, treatment of pain might be initiated earlier and the number of unplanned health care visits may be reduced. Almost one third of women in the study cohort had at least one fracture during the study period, with fractures of the femur being the most common type. The prevalence of fractures in the population with DS is far greater than that seen in the general population for the same age group and time period (just over 10%) [44]. 
To some extent, it may be caused by osteoporosis, which earlier research has found more prevalent especially in women with DS [5,45] due to late menarche and early menopause [21]. In the population in general, the number of fractures increases with age, therefore the early ageing among people with DS could be one explanation for this higher prevalence of fractures. Osteoporosis is included in the more recent published specific medical guidelines from Norway for a regular follow-up [13] and the high level of fractures might be possible to reduce with a better adherence to the guidelines. We have reported on the need for fall prevention among older people with intellectual disability [38]. Falls occurred twice as often during vital activities among people with intellectual disabilities than in the general population. After a fall, people with intellectual disabilities most often experienced head or leg injuries and were more likely to require specialist care [38]. There is a need to further investigate if osteoporosis is being underdiagnosed, if relevant treatment is provided and if older people with DS are offered resources such as fall prevention similar to those recommended with high priority in national guidelines for the population in general [46]. A systematic risk assessment, investigation and treatment after the first fracture can reduce the proportion of new fractures with 40% [46]. In this study, unplanned visits were caused frequently by pneumonia and epilepsy, both well-known diseases among people with DS, especially in older age groups and in particular if dementia is present [47]. Both pneumonia and epilepsy have been reported to be more common as causes of death among people with DS than in the general population [1,7,9]. We have previously reported respiratory diseases as the main cause of death, accounting for 37% among people with DS [9]. This proportion rose to 50% when contributing causes were included. In addition, mortality from respiratory failure has been reported with a RR of 9.8 for in-hospital mortality among those with DS compared to the general population [7]. Respiratory diseases such as pneumonia are regarded as an ambulatory care sensitive condition (ACSC) [48] responsive to health care interventions, and thus such interventions may improve the health and quality of life in people with DS. Although epilepsy was a common cause of both planned and unplanned health care visits, only 40 people (8.5%) had visited a neurological clinic during the study period. For those who have developed dementia, the prevalence of epilepsy is reported to be more than 80% [49,50]. A previous study showed that despite medication, over half of those with epilepsy still reported experiencing seizures [51]. We have previously reported that epilepsy is the cause of death for a considerable number of people with DS [9]. Epilepsy, similarly to pneumonia, is considered an ACSC, thus unplanned hospitalisations should be possible to prevent [48]. Further studies should investigate whether older people with DS are treated in an optimal manner for epilepsy within primary health care and with adequate support from neurological specialist health care. Seeking unplanned care could be a consequence of poorly followed specific medical guidelines, which has been reported in other countries [16,17], or that older people with DS lack access to a primary health care provider who can adequately meet their complex needs [52]. 
Such difficulties may lead to delays in diagnoses, poorer disease management and unnecessary death [53]. Thus, regardless of diagnosis, the high number of unplanned visits, on which the results from the present study are consistent with our earlier report on people with intellectual disabilities in general [24], most likely indicates less optimal health care for an already vulnerable population. It is also unclear if it is the primary health care or the specialist health care that are responsible for the follow-ups according to guidelines comprising health surveillance of the ageing population with DS. This high number of unplanned visits cannot solely be explained by economic factors, as the health care in Sweden is mainly financed by taxes. Instead, we would like to propose two alternative explanations: 1) a lack of awareness in the health care system, as well as among supporting social service staff, of issues related to ageing adults with DS, which has been identified in interviews with managers and staff in intellectual disability services [42,54,55], and 2) difficulties in obtaining adequate health care for older people with DS and with, among other disabilities, communication difficulties [30]. DS was the recorded primary diagnosis of a visit to specialist health care at least once for almost one third of this cohort. This is earlier reported in studies on cause of death in this population [8,9,56]. The fact that a patient's disability is recorded as the primary cause for a health care visit or cause of death might overshadow the actual reason for their health condition or cause for visit. Strength and limitations of the study A major strength of this study is the use of the NPR, which is of high validity and has a 99% coverage rate of all somatic and psychiatric diagnoses registered at discharge [57]. It is mandatory for all health care providers, whether privately and publicly funded, to deliver data to the NPR, except for primary care [57]. Another strength of the study is that it examines a national sample of individuals with DS only, without the heterogeneity of studies of individuals with intellectual disability in general. However, people with no specialist health care visit at all under the 11 years period and thus without registration of DS are not included. However, we do not believe this affects the generalizability of the results. The generalizability of the results from this study to people with DS in other countries is dependent both on differences within health care systems and in how people with disability are regarded in the country concerned. The Swedish health care and the social service system for people with intellectual disability is largely decentralised to community care, mostly financed by taxes and supposed to be delivered equally to everyone in the population based solely on need. We believe that our results may be applicable to other countries with similar conditions. However, the results might have limited generalizability to some low and middle-income countries with a limited system of specialist health care where people with DS meet additional problems such as stigma, and lack of confidence for authorities [58,59]. Other obstacles for people with disability in low and middle-income countries are related to poverty with higher risks for morbidities and travelling to the hospital which is impossible for many people living in the countryside [60,61]. 
This national sample only includes those with DS who had received services according the LSS law in 2012 and who had at least one visit in inpatient or outpatient specialist care in 2002-2012 (not including primary health care) during which a diagnosis of DS was recorded. Thus, we have failed to include those who live their lives without service and support from the municipality. However, it may be reasonable to believe that most older people with DS would have some kind of support and service according to LSS. Parents to older people with DS are not expected to have the ability to be caregivers due to own diseases or are not alive. Also, the oldest people in this study have grown up and lived the main part of their life in large institutions according to the disability policy that was in place before the 1980s [62]. In addition, the majority ought to have visited a health care provider at least once during these 11 years, especially with respect to age-related diseases. Thus, we believe that the majority of older people with DS are included in our cohort. A possible weakness is that the individuals studied were all born during a time when the median age of people with DS was only 4 years [1]; thus this cohort consist of survivors, as all participants included were alive at 55 years of age and the survival age was nationally 63.5 years [9]. Therefore, it may be argued that these older individuals are the healthiest among those with DS. However, even if this may be the case, this cannot be a reason for failing health checks in accordance with specific medical guidelines in older people with DS according to the coherent research showing these people having more disease burden than in the general population [2,9,24]. We used specialist planned and unplanned health care on national data for evaluation of follow-up of medical guidelines developed with the goal to avoid unnecessary suffering from preventable conditions. The fact that the result is based only on specialist health care limits the generalizability to national results and must be taken into account when interpreting conclusions. However, the specialist health care is the best existing data as primary health care data not are registered at a national level in Sweden today. Many health conditions listed in the guidelines require examination in specialist health care. However, some diseases and problems listed in the specific medical guidelines, such as hypothyroidism and obesity, are probably followed up and examined in primary health care. The limitation that our data did not include common uncomplicated health conditions needs to be kept in mind when interpreting the result from this study. The scarce knowledge available highlight a need for representative studies of primary health care use in people with intellectual disabilities [31]. Future research also needs to identify potential inequalities caused by specific barriers for people with intellectual disabilities to access health care [63]. Conclusions Our data indicate deficiencies in adherence to specific medical guidelines and recommendations for health surveillance concerning people with DS. The low number of planned visits in relation to recommendations for specific disorders indicates few referrals to specialist health care from GPs or staff at specialist health care lacking awareness about the early ageing in the DS population. The high number of unplanned visits due to preventable conditions may represent potentially unmet health care needs within primary health care. 
We suggest stronger efforts in implementing existing medical guidelines, updated for an older population in terms of fractures and pain in particular. People with DS constitute a population with extensive co-morbidity that is now ageing in a way that did not exist earlier. Future research is warranted investigating preventive measures both within inpatient and outpatient specialist health care, as well as primary health care, on a national level. Table 4 Medical guidelines of health checks for optimal medical care of adults with Down syndrome (compared across the international [11], Norwegian [13] and Swedish [12] guidelines). Funding This study was funded by FORTE (the Swedish Research Council for Health, Working Life and Welfare) no 2014-4753, and the Faculty of Medicine, Lund University. The funding bodies had no role in the design of the study, collection, analysis, interpretation of data, or writing the manuscript. Open Access funding provided by Lund University. Availability of data and materials In order to approve the study, the Regional Ethical Review Board in Lund made restrictions regarding access to the data due to the sensitive information on a very vulnerable group, i.e., persons with DS. Even though the data are anonymised, they contain enough details to enable identification of single individuals. Therefore, the datasets in the current study are available from the PI (GA) on reasonable request and only after approval from the Regional Ethical Review Board. However, as the database is compiled from national register data, other researchers may contact the register holders, the Swedish National Board of Health and Welfare and Statistics Sweden, to get access to the registries used in this study, and thereby generate a similar database. Ethics approval and consent to participate Approval was obtained from the Regional Ethical Review Board in Lund (Swedish government agency) (Ref. No. 2013/15). The government authorities responsible for the national registers used in this study do not provide personal identification numbers to researchers for research studies. This meant that it was not possible to obtain informed written consent from the participants in this study. Instead, the Ethical Review Board took its decision based on an option of active refusal by the participants, described as follows. According to the demands from the Ethical Review Board, the information about the planned study and how to withdraw from the study was advertised in two major newspapers in Sweden. One of these was a widespread national public newspaper and the other a national newspaper "Unik" distributed by the Swedish National Association for Persons with Intellectual Disability (FUB) and supporting members. In the next step, permission was needed to access the data from the two register holders. The National Board of Health and Welfare, and Statistics
Unveiling the molecular basis of paracetamol-induced hepatotoxicity: Interaction of N-acetyl-p-benzoquinone imine with mitochondrial succinate dehydrogenase Background and aim N-acetyl-p-benzoquinoneimine (NAPQI), a toxic byproduct of paracetamol (Acetaminophen, APAP), can accumulate and cause liver damage by depleting glutathione and forming protein adducts in the mitochondria. These adducts disrupt the respiratory chain, increasing superoxide production and reducing ATP. The goal of this study was to provide computational proof that succinate dehydrogenase (SDH), a subunit of complex II in the mitochondrial respiratory chain, is a favorable binding partner for NAPQI in this regard. Method Molecular docking, molecular dynamics simulation, protein-protein interaction networks (PPI), and KEGG metabolic pathway analysis were employed to identify binding characteristics, interaction partners, and their associations with metabolic pathways. A lipid membrane was added to the experimental apparatus to mimic the natural cellular environment of SDH. This modification made it possible to develop a context for investigating the role and interactions of SDH within a cellular ecosystem that was more realistic and biologically relevant. Result The molecular binding affinity score for APAP and NAPQI with SDH was predicted −6.5 and −6.7 kcal/mol, respectively. Furthermore, RMSD, RMSF, and Rog from the molecular dynamics simulations study revealed that NAPQI has slightly higher stability and compactness compared to APAP at 100 ns timeframe with mitochondrial SDH. Conclusion This study serves to predict the mechanistic process of paracetamol toxicity by using different computational approaches. In addition, this study will provide information about the drug target against APAP hepatotoxicity. Introduction Acetaminophen, commonly referred to as paracetamol or APAP, is widely utilized for its non-narcotic pain-relieving and fever-reducing qualities [1,2].Currently, paracetamol also used as combination drug with opioids for severe cancer pain [3].Although typically safe when used at recommended therapeutic doses, the excessive ingestion of this over-the-counter drug can potentially cause hepatotoxicity, which may lead to acute liver failure (ALF) [4,5].Along with ALF, acetaminophen overdose also associated with acute kidney injury, gastrointestinal ulceration, bronchospasm, and ductus arteriosus, etc. [3].The majority of ALF cases in both the United States and Great Britain are primarily attributed to APAP-induced hepatotoxicity [6][7][8][9]. 
Metabolism of APAP occurs in liver microsomes in three phases: major quantities (90 %) are detoxified by glucuronidation and sulfation processes, thereby turning into nontoxic metabolites; small amounts (2 %) are excreted through urine without further processing; and only 5-9% of ingested paracetamol undergo direct terminal oxidations by cytochrome P-450 (CYP2E1)-into highly toxic and reactive metabolites N-acetyl-p-benzoquinonimine (NAPQI) [10][11][12].This NAPQI is very unstable toxic intermediate undergo quick detoxification by glutathione conjugation [12,13].Due to intentional or unintentional over dosage of APAP causes the excessive production of NAPQI and subsequently depleted the glutathione level.Only approved therapeutic for APAP-induced liver injury is N-Acetylcysteine (NAC), which can replenish the glutathione level if treated immediately after APAP over dosage [14].If untreated, excessive NAPQI can bound with mitochondrial intermembrane proteins or cellular macromolecules, though the precise process is still up for debate [10,[15][16][17]. In APAP-induced hepatotoxicity, mitochondria play a crucial role.Interactions with mitochondrial proteins and the resulting production of oxidative stress are involved in the early stages of hepatic injury [18].The propagation or amplification phases come next, and they result in hepatocellular necroptosis [16,[18][19][20][21].In addition, the reactive oxidative metabolite NAPQI reduced succinate-driven ATP generation and specifically inhibited mitochondrial complex II in mouse model [22].Succinate dehydrogenase (SDH) is crucial as it intimately involved in both carbon metabolism and cellular respiration [23].It is recognized as complex II within the mitochondrial respiratory chain and plays a pivotal role in the oxidation of succinate to fumarate, thereby fueling mitochondrial respiration [24,25].As a result, altering the SDH is anticipated to decrease the cellular utilization of succinate, potentially leading to its accumulation, and to hinder the efficiency of cellular respiration [24].Similarly, the compromised cellular respiration and ATP depletion occurred during APAP-induced hepatic injury [22,26,27].And, there is no approved therapeutics against late stages or established paracetamol-induced liver injury, though different researchers reported mitochondria targeted different therapeutic approaches [3,28].However, the mechanism by which the reactive metabolite NAPQI interacts with mitochondrial proteins or respiratory chain and subsequently formation of the NAPQI-protein adducts remains uncertain [29,30].Hence, the aim of this research is to elucidate the intricate molecular mechanism underlying the interaction between NAPQI and mitochondrial complex II, specifically succinate dehydrogenase.This investigation employs a bioinformatic approach to delve into the implications of APAP toxicity.The holistic nature of this analysis aims to yield a significant understanding of the mechanistic pathway, potentially revealing prospective drug targets for countering APAP-induced liver injury. 
Molecular docking performance and visualization The docking calculations were performed using AutoDock Vina [34] and PyRx [35]. Each docking was repeated three times and the mean and standard deviation were calculated. The results of docking were expressed as negative values in units of kcal/mol, with lower scores indicating a more favorable binding interaction [38]. Additionally, molecular graphics visualization of the docking complexes was performed with the BIOVIA Discovery Studio Visualizer (https://discover.3ds.com/discovery-studio-visualizer-download) [36]. Molecular dynamics simulations 100 ns molecular dynamics (MD) simulations were used to assess the binding stability and compactness of the receptor and receptor-ligand complexes [39]. The thermodynamic stability of the complexes was examined using the GROningen MAchine for Chemical Simulations (GROMACS, version 2020.6). To mimic the inner mitochondrial membrane, the complexes were embedded in a system comprising 37% phosphatidylcholine (POPC), 31% phosphatidylethanolamine (POPE), 29% cardiolipin (CL), phosphatidylinositol (PI), and the TIP3 water model using CHARMM-GUI [40,41]. K+ and Cl− ions were added for charge neutralization, and the CHARMM36m force field was used to minimize the system energy [39]. Equilibration was carried out first in the isothermal-isochoric (NVT) ensemble and subsequently in the isobaric (NPT) ensemble. The root mean square deviation (RMSD), root mean square fluctuation (RMSF), radius of gyration (Rog), solvent accessible surface area (SASA), and hydrogen bonding were used to assess the stability of the receptor-ligand complexes. After extracting the trajectory files, graphs were produced using the Grace tool (https://plasma-gate.weizmann.ac.il/Grace/). Protein-protein interaction network Protein-protein interaction data for this investigation were obtained from the STRING (Search Tool for the Retrieval of Interacting Genes/Proteins) database (https://string-db.org/). STRING is a comprehensive online resource that integrates both experimental and predicted interactions among proteins [42]. The interaction analysis was carried out at the highest confidence level (0.900), with the maximum number of interactions set to 10 (ten). The investigation of the protein-protein network involved an in-depth analysis of the three Gene Ontology categories (biological process, molecular function, and cellular component) using a web-based tool for Gene Ontology (GO) enrichment and pathway analysis (https://bioinformatics.com.cn/) [43]. Metabolic pathway analysis The metabolic pathway of the tricarboxylic acid (TCA) cycle and the production of ROS induced by metallic compounds were analyzed using the Kyoto Encyclopedia of Genes and Genomes (KEGG) database (https://www.genome.jp/kegg/pathway.html) [44].
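For readers who wish to reproduce the interaction query, the following minimal Python sketch retrieves high-confidence interaction partners from the STRING REST API with the same cut-offs described above (combined score 0.900, at most 10 partners). It assumes the current STRING endpoint format and uses human SDHA (NCBI taxon 9606) purely for illustration; the exact identifier and species code submitted in this study are not stated in the text.

```python
"""Sketch: fetch high-confidence STRING interaction partners of SDHA."""
import requests

STRING_API = "https://string-db.org/api/tsv/interaction_partners"

params = {
    "identifiers": "SDHA",   # query protein (assumed identifier)
    "species": 9606,         # assumed: Homo sapiens
    "required_score": 900,   # "highest confidence" (0.900), as in this study
    "limit": 10,             # maximum number of interactors, as in this study
}

resp = requests.get(STRING_API, params=params, timeout=30)
resp.raise_for_status()

rows = resp.text.strip().splitlines()
header = rows[0].split("\t")
for row in rows[1:]:
    fields = dict(zip(header, row.split("\t")))
    # preferredName_B is the partner gene; score is the combined confidence (0-1)
    print(f'{fields["preferredName_B"]}\t{fields["score"]}')
```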
Molecular docking of ligands and the SDH Different mitochondrial and cytosolic proteins have been reported to interact with NAPQI; therefore, we first computed the binding affinities of those proteins with NAPQI using PyRx, as listed in Table 1. We observed that glutamine synthase, glutamate dehydrogenase, thioether S-. As succinate dehydrogenase (SDH) is responsible for cellular respiration and showed a comparatively strong binding affinity (−6.7 ± 0.081 kcal/mol), further molecular docking analyses were carried out to predict the binding affinities and interacting residues between receptor and ligands using AutoDock Vina [34] and PyRx [35]. APAP shows binding affinities of −6.5 ± 0 kcal/mol and −5.8 ± 0.2 kcal/mol by PyRx and AutoDock Vina, respectively, whereas NAPQI exhibits stronger binding affinities of −6.7 ± 0.08 kcal/mol and −6.1 ± 0.2 kcal/mol by PyRx and AutoDock Vina, respectively. Table 2 provides a comprehensive overview of the molecular interactions between the ligands and the targeted receptor, including the corresponding binding affinities and interacting residues. APAP displays interactions involving three hydrogen bonds as well as two pi-alkyl bonds (Fig. 1C). In contrast, NAPQI exhibited two hydrogen bonds along with two pi-alkyl bonds (Fig. 1D). In addition, NAPQI established a pi-sigma bond. Results from the molecular docking studies suggest that NAPQI may bind complex II more efficiently than APAP. Molecular dynamics simulation of SDH protein and its ligands APAP and NAPQI Molecular dynamics (MD) simulations mimic cellular conditions, aiding the assessment of protein-ligand complex stability and behavior. To better understand the conformational changes of the protein in the complex, a 100 ns MD simulation of the protein in complex with each ligand was performed in this study. RMSD analysis of SDH-NAPQI and SDH-APAP complexes The acceptable range of root mean square deviation (RMSD) change within protein-ligand complexes is between 0.1 nm and 0.3 nm. Elevated RMSD values indicate substantial conformational changes within the protein-ligand complexes. The RMSD profiles are shown in Fig. 2. The RMSD of the apo structure increased gradually and became relatively stable after 75 ns. After an initial period of stability, the APAP and NAPQI complexes exhibited synchronous RMSD profiles, averaging around 0.2 nm. Overall, the protein-ligand complex structures showed lower RMSD values than the protein-only structure throughout the simulation timeframe. RMSF analysis of SDH-NAPQI and SDH-APAP complexes Root mean square fluctuation (RMSF) analysis was employed to discern the regional flexibility of the protein. Higher values correlate with increased flexibility at specific amino acid positions. The RMSF profiles (Fig. 3) of the apo structure and the two ligands bound to the SDHA subunit are shown. The apo structure showed the maximum flexibility in the C-terminal domain. Both complexes showed higher fluctuations at the N- and C-termini. In general, loop-forming regions of the protein have higher flexibility than other secondary structures. The apo structure showed a considerably wider peak around amino acid positions 250 to 350. Although the two complexes have similar trends in fluctuation, NAPQI showed lower fluctuations compared to APAP.
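The RMSD and RMSF profiles described above can be reproduced from a GROMACS trajectory with standard analysis libraries. The sketch below uses MDAnalysis rather than the GROMACS command-line tools used in this study, and the topology/trajectory file names are placeholders; it is intended only to illustrate how the backbone RMSD and per-residue RMSF curves of Figs. 2-3 are obtained.

```python
"""Sketch: backbone RMSD and per-residue RMSF from a 100 ns trajectory."""
import MDAnalysis as mda
from MDAnalysis.analysis import align, rms

u = mda.Universe("topol.tpr", "traj_100ns.xtc")   # assumed file names
backbone = "protein and backbone"

# Backbone RMSD after least-squares fitting to the first frame (Fig. 2 style)
rmsd_run = rms.RMSD(u, u, select=backbone, ref_frame=0).run()
for frame, time_ps, rmsd in rmsd_run.results.rmsd[::500]:   # sample every 500 frames
    print(f"{time_ps / 1000:6.1f} ns  RMSD = {rmsd / 10:.3f} nm")   # Angstrom -> nm

# Per-residue RMSF needs global motion removed first (Fig. 3 style)
avg = align.AverageStructure(u, u, select=backbone, ref_frame=0).run()
align.AlignTraj(u, avg.results.universe, select=backbone, in_memory=True).run()
calphas = u.select_atoms("protein and name CA")
rmsf_run = rms.RMSF(calphas).run()
for resid, value in zip(calphas.resids, rmsf_run.results.rmsf):
    if value > 3.0:   # report only the most flexible residues (threshold in Angstrom)
        print(f"residue {resid}: RMSF = {value / 10:.3f} nm")
```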
SASA analysis of SDH-NAPQI and SDH-APAP complexes Solvent accessible surface area (SASA) analysis assists in forecasting the stability of a protein's hydrophobic core. The SASA values illustrate the interplay between protein stability and solvent accessibility. In Fig. 4, the SASA value of the apo structure increased gradually and became relatively stable after 50 ns. The protein complex with APAP displays a gradual increase in SASA values, leveling off at around 460-470 nm² after 80 ns. In contrast, the complex with NAPQI exhibited a progressive increase until 60 ns, thereafter converging at an average value of ~450 nm². The noticeable reduction in protein backbone exposure to the solvent, particularly in response to NAPQI binding compared with APAP binding, indicates a relatively higher structural stability of the hydrophobic core. This suggests that the interaction with NAPQI may contribute to a more compact and stable conformation of the protein, potentially influencing its overall functionality. Radius of gyration of SDH-NAPQI and SDH-APAP complexes The radius of gyration (Rog) represents how spread out or concentrated the mass of an object is around its center. A relatively steady radius of gyration indicates stable folding of the protein, whereas fluctuation of the radius of gyration implies unfolding. Fig. 5 shows the radius of gyration profiles of the three systems, where the apo protein has increasing Rog values during the simulation. For the NAPQI-protein complex, the Rog values increased during the first 40 ns, after which the profile of the complex was steady and compact, indicating stable folding. Throughout the 100 ns simulation, NAPQI had the lowest Rog values. The sustained lower Rog for NAPQI implies a more concentrated mass distribution around the center, reflecting a potentially more stable and compact conformation of the protein-ligand complex. The structural differences inferred from the Rog values of APAP and NAPQI provide valuable insights into their respective impacts on the dynamic behavior and stability of the protein complex over the simulation time. Hydrogen bond profile of SDH-NAPQI and SDH-APAP complexes The hydrogen bond profiles of the two complexes are shown in Fig. 6. Hydrogen bonds between NAPQI and the protein, shown in red, start the interaction with 1-2 hydrogen bonds. Ignoring minor fluctuations, the number of hydrogen bonds remains in the range of 1-3. On the other hand, the APAP profile shows a higher number of hydrogen bonds up to 25 ns. The APAP hydrogen bond number fluctuates between 0 and 4 for most of the run, with a maximum of 5 hydrogen bonds observed near 25 ns. The study examined the binding dynamics of APAP and NAPQI with SDH. Both complexes remained within the acceptable RMSD range (0.1-0.3 nm) and displayed distinct RMSF profiles. SASA analysis revealed differences in hydrophobic core stability, Rog values indicated a more compact NAPQI complex, and the hydrogen bond analysis showed NAPQI's steady interaction with 1-2 bonds, contrasting with APAP's fluctuating bond count, which peaked near 25 ns. These results offer insights into the ligands' binding behavior with the SDHA subunit. Metabolic pathway analysis for succinate dehydrogenase In the present study, we conducted metabolic pathway analysis with a particular focus on the tricarboxylic acid (TCA) cycle. During this analysis, a crucial enzyme with the KEGG identifier 1.3.5.1, corresponding to the flavoprotein subunit of succinate dehydrogenase, was identified (Fig.
8).Succinate dehydrogenase plays a critical role in this metabolic pathway by catalyzing the conversion of succinate to fumarate while simultaneously transferring electrons to the electron transport chain, thereby contributing to the production of adenosine triphosphate (ATP) in cellular respiration [51].Reactive oxygen species (ROS) are commonly produced in the mitochondrial matrix as a natural byproduct of the electron transport chain during cellular respiration.This KEGG analysis is further linked with supporting mitochondrial RO production in the presence of metals (Supplementary Fig. 2). Discussion In this computational study, we predicted the molecular interaction of paracetamol-derived reactive metabolite NAPQI with mitochondrial complex II succinate dehydrogenase through conventional hydrogen, Pi-Alkyl, and Pi-Sigma bonds.This molecular interaction is also confirmed by molecular dynamics analysis through the stability and compactness of the SDH-NAPQI complex.Our study observation confirms the previous experimental findings that SDH is very sensitive to NAPQI and the succinate-driven compromised respiration subsequently generates APAP-inducing irreversible hepatic injury in mice [26].It was also reported that NAPQI binds to mitochondrial protein targets during paracetamol toxicity, causing a reduction in energy production, generation of reactive oxygen species, and cellular death [52].When it comes to lowering mitochondrial respiratory capacity, maximal respiratory rates, and ATP generation, NAPQI is significantly more effective than APAP [53]. Succinate dehydrogenase is a multi-subunit enzyme in the TCA cycle and a subunit (SDHA) of it involves the oxidation of succinate to fumarate [54,55].Electrons are then used to decrease Flavin adenine dinucleotide (FAD) to FADH 2 , which in turn reduces ubiquinone to ubiquinol in the respiratory chain.Through a series of reduction steps, the electron is transported from succinate to ubiquinol in this forward process.The reduced ubiquinone pool provides electrons for the reversal process.The enzyme succinate dehydrogenase is capable of producing ROS both in the forward and reverse directions.The competitive binding of dicarboxylates in the substrate binding site of complex II shows that the fully reduced, vacant flavin site is the major source of oxygen radical production [56]. Molecular docking study showed NAPQI has a slightly higher affinity for SDHA compared to APAP.Analysis of the APAP and NAPQI binding sites indicated that both ligands partially occupy the FAD binding sites (Supplementary Fig. 
3). The cofactor-bound succinate dehydrogenase crystal structure revealed the establishment of hydrogen bonds at the sites LysA50, GlyA65, AlaA179, AsnA413, SerA414, and LeuA415 between the main-chain atoms of FAD and SDHA, which supports earlier studies [37]. The higher binding potential of NAPQI suggests that it is more predisposed to binding succinate dehydrogenase (SDHA) within mitochondrial complex II. This observation is crucial for understanding the molecular mechanisms underlying acetaminophen-induced liver injury, particularly in elucidating how NAPQI interacts with mitochondrial enzymes like SDHA. In addition, electron transfer flavoprotein (ETF), a mitochondrial heterodimeric protein that acts as an electron acceptor for several mitochondrial dehydrogenases [57,58], also showed a significant binding affinity (−6.3 ± 0.081 kcal/mol) with NAPQI. ETF further transfers the electrons through ETF-ubiquinone oxidoreductase to the main electron transport chain [57]. Our ongoing research focuses on an in-depth analysis of this electron transfer flavoprotein in this context. The findings of the RMSD analysis also revealed that, in a few timeframes, the RMSD values for APAP and NAPQI marginally overlapped. After binding of APAP and NAPQI to the SDHA chain, the protein backbone fluctuated, with the N- and C-termini showing the largest variations. This finding implies that the binding properties of APAP and NAPQI are comparable. Nevertheless, NAPQI's radius of gyration and SASA values are smaller than those of APAP. A molecule is more stable when its SASA value is lower because it is less exposed to environmental variables such as solvent molecules or chemical reactions [59]. Additionally, the hydrogen bond profile indicates that NAPQI interacts in a distinct way with protein residues via hydrogen bonds. Overall, the molecular dynamics simulations of this work demonstrated that NAPQI interacts favorably with succinate dehydrogenase and may have the ability to change the enzymatic activity of this membrane-integrated protein. By illuminating the stability and binding behavior of protein-ligand complexes, this prediction could aid the investigation of protein-ligand interactions. NAPQI selectively inhibits mitochondrial complex II and reduces the rates of ATP biosynthesis driven by succinate [22]. Protein-protein interaction networks reveal important associations related to mitochondrial energy metabolism. Succinate dehydrogenase and fumarate dehydrogenase are ubiquitously expressed and play a vital role in ATP production through the mitochondrial respiratory chain [60]. SDHA2 directly interacts with SDHA and is required for FAD insertion [61]. However, the compromised activity of complex II after binding of NAPQI in the flavin site could potentially reduce energy production. On the other hand, oxidative stress and elevated mitochondrial ROS induced by environmental xenobiotics such as arsenic [62], chromium [63], lead [64], and mercury [65] lead to the continuous release of superoxide. Mitochondrial membrane complexes are responsible for ROS generation as well as energy production. Therefore, the overall impact of the toxic drug metabolite NAPQI or chemical carcinogens on nucleophilic sites subsequently potentiates cellular destruction events.
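As a companion to the radius of gyration comparison discussed above, the following sketch computes the per-frame Rog of the protein from a trajectory. File names are placeholders, and the SASA and hydrogen-bond counts of Figs. 4 and 6 would in practice come from the corresponding GROMACS tools (gmx sasa, gmx hbond) rather than from this snippet.

```python
"""Sketch: radius of gyration of the SDH-ligand complex over the trajectory."""
import MDAnalysis as mda
import numpy as np

u = mda.Universe("topol.tpr", "traj_100ns.xtc")   # assumed file names
protein = u.select_atoms("protein")

times_ns, rg_nm = [], []
for ts in u.trajectory:
    times_ns.append(ts.time / 1000.0)                      # ps -> ns
    rg_nm.append(protein.radius_of_gyration() / 10.0)      # Angstrom -> nm

rg_nm = np.asarray(rg_nm)
print(f"mean Rg = {rg_nm.mean():.3f} nm (min {rg_nm.min():.3f}, max {rg_nm.max():.3f})")

# A steadier, lower Rg trace (as reported here for the NAPQI complex) indicates a
# more compact, stably folded conformation over the 100 ns window.
print(f"Rg std over the last quarter of the run: {rg_nm[-len(rg_nm) // 4:].std():.4f} nm")
```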
Previously it was reported that succinate dehydrogenase is a therapeutic target for bleomycin-induced Idiopathic pulmonary fibrosis [66], therefore this study provides a significant drug target SDH for APAP-induce hepatotoxicity.Considering the biological significance of mitochondrial membrane complex II in APAP toxicity, this computational study suggests that NAPQI has favorable binding in the flavin site of Succinate dehydrogenase, although further validation of this site towards an inhibitory enzymatic mechanism will be needed. Conclusion The interaction between NAPQI and mitochondrial succinate dehydrogenase (SDHA) poses a significant threat to cellular well-being, resulting in diminished energy production and the formation of detrimental reactive oxygen species.This investigation underscores SDHA's potential as a focal point for drug discovery and emphasizes the importance of comprehending the mechanistic pathways associated with complex II inhibition.Molecular dynamics simulations indicate that NAPQI binding to SDHA can impact enzymatic activity, highlighting the imperative for further exploration in this domain.In essence, this knowledge illuminates the intricate interplay between toxic metabolites, mitochondrial function, and their implications for cellular survival. Fig. 1 . Fig. 1.The docking-based ligand interaction and their interacting residues.A) Solid ribbon representation of succinate dehydrogenase (SDH), with SDH-A, SDH-B, SDH-C, and SDH-D subunits colored green, teal, orange, and gray, respectively.B) Residues in the SDH-A subunits associated with the two ligands.C) 3D interactions of APAP with succinate dehydrogenase.D) 3D interactions of NAPQI with succinate dehydrogenase.Green, pink, and purple colors represent conventional hydrogen bond, pi-alkyl bond, and pi-sigma bonds respectively. Fig. 2 . Fig. 2. The graphical representation of RMSD data.The colors bar represents three distinct setups of simulation; blue color for apo structure, green color for APAP-Protein complex, and red color for NAPQI-Protein complex.(A) The RMSD of the backbone after isq fit to the backbone of protein only system.(B) The RMSD of protein in the presence of APAP was measured based on the backbone atoms of the complex system.(C) The RMSD of protein in the presence of NAPQI was measured based on the backbone atoms of the complex system.(D) This is the overlapping graph of the apo protein, APAP complex, and NAPQI complex for comparative analysis.This graphical representation offers insights into structural variations within the protein backbone under different conditions, facilitating a deeper understanding of conformational changes induced by apo structure, APAP, and NAPQI.RMSD: Root mean square deviation; APAP: Paracetamol; NAPQI: Nacetyl-p-benzoquinonimine. Fig. 3 . Fig. 
3.The graphical representation of RMSF data of SDHA subunit.The colors bar represents three distinct setups of simulation; blue color for apo structure, green color for APAP-Protein complex, and red color for NAPQI-Protein complex.(A) The RMSF of the backbone after isq fit to the backbone of protein only system.(B) The RMSF of protein in the presence of APAP was measured based on the backbone atoms of the complex system.(C) The RMSF of protein in the presence of NAPQI was measured based on the backbone atoms of the complex system.(D) This is the merged graph of the apo protein, APAP complex, and NAPQI complex for comparative analysis.Maximum fluctuation observed in the C-terminal region (colored blue), the N-terminal region (colored red) and middle residues (colored yellow).This graphical representation offers insights into structural variations within the protein backbone under different conditions, deepening our understanding of conformational changes induced by apo structure, APAP, and NAPQI.Apo structure exhibits wider flexibility in the middle compared to the other two complexes, indicating a more compact structure in the complexed states.RMSF: Root mean square fluctuation; APAP: Paracetamol; NAPQI: N-acetyl-pbenzoquinonimine. Fig. 4 .Fig. 5 . Fig. 4. The graphical representation of SASA values.The colors bar represents three distinct setups of simulation: blue color for apo structure, green color for APAP-Protein complex, and red color for NAPQI-Protein complex.(A) The SASA values of protein only system.(B) The SASA of protein in the presence of APAP.(C) The SASA of protein in the presence of NAPQI.(D) In this overlapping graph, the apo protein, Protein-APAP complex, and Protein-NAPQI complex were analyzed comparatively.Apo protein exhibits higher exposure of water might indicate more open or flexible structure compared to complexed with APAP and NAPQI.This graphical representation provides insights into variations in the solvent-accessible surface area among different protein states, offering glimpse into the structural dynamics and interactions in the presence of APAP and NAPQI.RMSF: Root mean square fluctuation; APAP: Paracetamol; NAPQI: N-acetyl-pbenzoquinonimine. Fig. 6 . Fig. 6.The graphical representation of hydrogen bonding count of 100 ns simulation.(A) Hydrogen bonding profile of APAP-Protein complex colored as green.(B) Hydrogen bonding profile of NAPQI-Protein complex colored as red.(C) In this graph, overlapping graph of the APAP complex, and NAPQI complex for comparative analysis.This graphical representation of hydrogen bonding provides insights into the dynamic interactions and structural relationships within these complexes, offering a detailed view of hydrogen bond form and persist over the course of the simulation.APAP: Paracetamol; NAPQI: N-acetyl-p-benzoquinonimine. Fig. 8 . Fig. 8. Succinate to fumarate conversion in the TCA cycle.This KEGG pathway diagram illustrates the conversion of succinate to fumarate within the TCA cycle.In this enzymatic reaction, succinate is oxidized to fumarate, accompanied by the transfer of electrons to the electron transport chain (ETC).Succinate is converted into fumarate by succinate dehydrogenase (1.3.5.1) is one of the critical conversions in the TCA cycle.The released electron is utilized in the oxidative phosphorylation process. Table 1 Binding affinity prediction of mitochondrial and cytosolic proteins with NAPQI using PyRx. 
Table 2 Prediction of binding affinity and interaction profiling of the ligands and succinate dehydrogenase (SDH) using PyRx and AutoDock Vina. APAP: Paracetamol; NAPQI: N-acetyl-p-benzoquinone imine; CID: Compound ID. Redocking was performed with the AutoDock Vina tool to check the reliability of the software, cross-validate the binding affinity calculations, and confirm the consistency of the docking algorithm. The exhaustiveness was maintained at 20 exclusively for finding the best binding pose. For succinate dehydrogenase (PDB ID: 1ZOY) [36,37], the grid box was positioned at the standard value.
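The replicate docking protocol described above (triplicate runs, exhaustiveness 20, mean ± standard deviation of the best pose) can be scripted as follows. The receptor/ligand file names and the grid-box centre and size are placeholders, not the values used in this study, and the output parsing assumes the default result table printed by the Vina command-line tool; the snippet only illustrates how replicate scores are collected and summarised.

```python
"""Sketch: triplicate AutoDock Vina runs with mean +/- SD of the best pose."""
import re
import statistics
import subprocess

def best_vina_score(receptor, ligand, seed):
    """Run Vina once and return the affinity (kcal/mol) of the top-ranked pose."""
    cmd = [
        "vina",
        "--receptor", receptor, "--ligand", ligand,
        "--center_x", "0", "--center_y", "0", "--center_z", "0",   # placeholder grid box
        "--size_x", "25", "--size_y", "25", "--size_z", "25",
        "--exhaustiveness", "20",
        "--seed", str(seed),
        "--out", f"pose_{seed}.pdbqt",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    # First row of the result table holds the best (most negative) affinity
    match = re.search(r"^\s*1\s+(-?\d+\.\d+)", out, flags=re.MULTILINE)
    return float(match.group(1))

scores = [best_vina_score("1zoy_sdh.pdbqt", "napqi.pdbqt", seed) for seed in (1, 2, 3)]
print(f"NAPQI vs SDH: {statistics.mean(scores):.2f} +/- {statistics.stdev(scores):.2f} kcal/mol")
```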
2024-05-11T15:54:59.944Z
2024-05-07T00:00:00.000
{ "year": 2024, "sha1": "afb17d370e85b7b58420c67e93f9abe761965807", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.bbrep.2024.101727", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "7be933110dcbf33e48582f43b353cd31f1d86ed3", "s2fieldsofstudy": [ "Chemistry", "Medicine", "Biology" ], "extfieldsofstudy": [] }
55855598
pes2o/s2orc
v3-fos-license
Total phenolic, condensed tannin and antioxidant activity of four Carya species from China Different species of functional agricultural crops may vary in antioxidant capacities. In this study, the antioxidant activities of methanol extracts from four species of Carya genus were compared by various antioxidant assays, including the reducing power, 1,1-diphenyl-2-pycrylhydrazyl (DPPH) radical scavenging activity and the superoxide anion scavenging activity. The reducing power of extracts from Carya dabieshanensis, Carya cathayensis, Carya hunanensis and Carya illinoensis were 0.246, 0.237, 0.22 and 0.073 at the concentration of 0.50 mg/ml, respectively. The scavenging effect on the DPPH radical (IC50) were 1.140, 1.364, 1.437 and 3.682 mg/ml, respectively, while the scavenging effect on superoxide anion radical were 27.44, 22.80, 26.15, 1.99 mg AE/g, respectively. Among the four species, C. dabieshanensis possessed the highest antioxidant activity, while C. illinoensis was the lowest. The total phenolic (TP) contents and condensed tannins (CT) were determined in all samples spectrophotometrically. For all species, C. dabieshanensis possessed the highest TP content (80.54 mg GE/g defatted kernel) and C. hunanensis possessed the highest CT content (59.62 mg CE/g defatted kernel). In addition, strong correlations of total phenolic contents and condensed tannins contents with reducing powers, DPPH radical and superoxide anion scavenging activities were also found in this work. INTRODUCTION Tree nuts have long been considered as important components of the diet due to several bioactive and health-promoting components.Epidemiological evidence indicates that, the consumption of tree nuts may lower the risk of cardiovascular disease.The European prospective investigation into cancer and nutrition (EPIC) cohort study, conducted in ten European countries also showed that, women consuming more than 6.2 g per day of nuts and seeds reduced their risk of colon cancer Abrreviations: DPPH, 1,1-Diphenyl-2-pycrylhydrazyl; TP, total phenolic; CT, condensed tannins; NBT, nitro blue tetrazolium; BHA, butylated hydroxyanisole; IC50, half maximum inhibitory concentration; EC50, half maximum effective concentration.by 31% (Jenab et al., 2004).The Carya genus (family Juglandaceae) comprises several species and is commercially cultivated in North America and east Asia for over 500 years.Recently, growing interest was developed in the exploitation of Carya (Taipina et al., 2009;Osorio et al., 2010).Pecan, belonging to the Carya genus, has been reported to possess the highest antioxidant capacity and the highest phenolic content among the common fruits and vegetables across the US (Wu et al., 2004).Villarreal-Lozoya et al. (2007) also found that, kernels from different pecan cultivars had high antioxidant capacity and total phenolic content. Growing evidence suggests that, species and genotype would alter the antioxidant compositions and properties in a selected agricultural crop (Gursoy et al., 2009;Esmaeili et al., 2010).Thus, it is of paramount importance to examine species and genotype for their antioxidant activity, in order to find potential species rich in special healthy functions.The objective of the present study was to characterize four different Carya species for their nutraceutical constituents, including total phenolic content, antioxidant activities and condensed tannins content. 
Material and thermal processing Carya cathayensis seeds were mechanically harvested in early September 2009 from Linan, Zhejiang Province, China.The seeds of Carya dabieshanensis, C. cathayensis, Carya hunanensis and Carya illinoensis were purchased from Hangzhou Donglin Co. Ltd.The seeds were washed with excess water and then sun-dried at about 30°C for three days.The kernels with the brown outer testa or pellicle were separated from the shell by cracking with a small hammer and were ground in a mortar.The kernel powder was defatted with hexane according to the method of Villarreal-Lozoya et al. (2007), then, samples were freeze-dried and stored at -20°C until analyses. Preparation of methanol extracts Extraction was done by macerating 1 g of defatted kernel powder with 40 ml of 70% methanol.The mixture was kept in a rotary shaker overnight and centrifuged at 3,000 g for 20 min (SCR20BC, Hitachi, Japan).A working solution (2.5 mg defatted kernels/ml) was prepared by dissolving 1 ml of the supernatant in 10 ml of methanol. Determination of total phenolic and condensed tannin content Total phenolic content was determined with Folin-Ciocalteu reagent according to Slinkard and Singleton (1977) using gallic acid as standard.In a volumetric flask, 0.1 ml of the methanol extract (final concentrations were 2.5 mg defatted kernels/ml), 0.9 ml of distilled water and 1 ml Folin-Ciocalteu reagent were mixed thoroughly.After 3 min, 3 ml of 0.188 mol/l Na2CO3 was added, then the mixture was allowed to stand for 2 h with intermittent shaking.The absorbance was measured at 760 nm in a spectrophotometer (UV-2100, Unico, Shanghai, China).The final results were expressed as milligram gallic acid equivalents per gram of defatted kernels (mg GE/g).Condensed tannin (CT) content was evaluated using the vanillin assay (Price et al., 1978).An aliquot of 0.5 g of defatted kernels was placed in centrifuge tubes and 20 ml of 1% HCl in methanol was added to each sample.Each tube was vortexed every 10 min and placed in a water bath at 30°C with constant shaking for 20 min.After incubation, tubes were centrifuged and supernatants were extracted.Aliquots of the supernatants were placed in two separate assay tubes, one for the sample determination and the other for blank determination.Samples and blanks were incubated for exactly 20 min after adding 5 ml of the vanillin reagent (0.5 g of reagent and 200 ml of 4% HCl methanol) to samples and 4% HCl in methanol to the blanks.After 20 min, the absorbance was measured at 500 nm in a spectrophotometer.Results were expressed as milligram catechin equivalents per gram of defatted He et al. 10473 kernels (mg CE/g). Determination of reducing power The reducing power of methanol extract was determined by the method of Mao et al. (2006).The 1 ml of methanol extract (0.25, 0.5 and 0.75 mg defatted kernels) prepared from working solution were mixed with 2.5 ml of 0.2 mol/l phosphate buffer (pH 6.6) and 2.5 ml of 0.03 mol/l potassium ferricyanide (K3Fe(CN)6).Aliquots (2.5 ml) of 0.6 mol/l trichloroacetic acid were added to the mixture, which was then centrifuged for 10 min at 1,000 g.The upper layer of the solution (2.5 ml) was mixed with 2.5 ml of distilled water and 0.5 ml of 0.006 mol/l FeCl3 and the absorbance was measured at 700 nm in a spectrophotometer. 
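Both the total phenolic and condensed tannin assays described above reduce to reading sample absorbances against a standard curve (gallic acid and catechin, respectively) and rescaling to the mass of defatted kernel in the extract. The sketch below illustrates that conversion for the Folin-Ciocalteu assay using invented standard-curve readings; it assumes the standards are carried through the same reaction mixture as the samples, so no additional dilution correction is applied.

```python
"""Sketch: converting A760 readings to mg gallic acid equivalents per g kernel."""
import numpy as np

# Hypothetical gallic acid standard curve: concentration (mg/ml) vs A760
std_conc = np.array([0.00, 0.02, 0.04, 0.06, 0.08, 0.10])
std_abs = np.array([0.00, 0.11, 0.22, 0.34, 0.45, 0.55])

slope, intercept = np.polyfit(std_conc, std_abs, 1)   # linear fit: A = m*c + b

def mg_ge_per_g(sample_abs, kernel_conc_mg_per_ml=2.5):
    """Gallic acid equivalents in the extract, scaled per gram of defatted kernel."""
    gae_mg_per_ml = (sample_abs - intercept) / slope            # mg gallic acid per ml extract
    return gae_mg_per_ml / (kernel_conc_mg_per_ml / 1000.0)     # -> mg GE per g kernel

print(f"{mg_ge_per_g(0.42):.1f} mg GE/g defatted kernel")       # illustrative reading
```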
Determination of DPPH radical scavenging activity Scavenging of the 1,1-diphenyl-2-pycrylhydrazyl (DPPH) radical is a widely used method to evaluate the free radical scavenging ability of various samples (Ebrahimzadeh et al., 2009). Scavenging activity against the DPPH free radical was measured based on Lee et al. (1996). The negative control was prepared by mixing 0.125 ml of distilled water with 3.875 ml of 0.2 mol/l DPPH. An aliquot of 0.125 ml of the methanol extract (0.5, 1.0, 1.5, 2.0 and 2.5 mg defatted kernels/ml) was added to 3.875 ml of 0.2 mol/l DPPH. The mixture was gently homogenized and left to stand at room temperature for 30 min. Absorbance was read using a spectrophotometer at 517 nm. The activity of scavenging DPPH radicals was calculated using the equation: Scavenging activity (%) = [(A(−) − As)/A(−)] × 100, where As is the absorbance of the sample and A(−) is the absorbance of the negative control. Determination of superoxide anion scavenging activity The superoxide anion scavenging activity was measured using the xanthine/xanthine oxidase method (Mao et al., 2006). The working solution of the extract (1.25 mg defatted kernels/ml) was separately added to a 1.0 ml mixture of 0.4 mmol/l xanthine and 0.24 mmol/l nitro blue tetrazolium chloride (NBT) in 0.1 mol/l phosphate buffer (pH 8.0). A 1.0 ml solution of xanthine oxidase (0.049 unit/ml), diluted in 0.1 mol/l phosphate buffer (pH 8.0), was added and the resulting mixture was incubated in a water bath at 37°C for 40 min. The reaction was terminated by adding 2.0 ml of an aqueous solution of 69 mmol/l sodium dodecylsulphate (SDS), and the absorbance of NBT was measured at 560 nm. A standard curve was prepared using ascorbic acid as the reference reagent. Superoxide anion scavenging activity was expressed in milligram ascorbic acid equivalents per gram of defatted sample (mg AE/g). Statistical analysis The data were reported as mean ± SD of triplicate determinations. Analysis of variance and least significant difference tests (SPSS for Windows, 1999, SPSS Inc., Chicago, IL) were conducted to identify differences among means. Statistical significance was declared at P < 0.05. Total phenolic content It is well known that phenolic compounds exist in many plants, and Villarreal-Lozoya et al. (2007) also found that kernels from different pecan cultivars had high antioxidant capacity and total phenolic content. Table 1 shows the total phenolic (TP) contents of the Carya species, which were significantly (P < 0.05) different. Total phenolic content ranged from 14.63 to 80.54 mg GE/g defatted kernel, with C. dabieshanensis showing the highest TP value and C. illinoensis the lowest. For all species, the following trend was found: C. dabieshanensis > C. cathayensis > C. hunanensis > C. illinoensis (P < 0.05). The TP values of C. dabieshanensis, C. cathayensis and C. hunanensis were at least 4.9 times higher than that of C. illinoensis (P < 0.05). The TP value of C. illinoensis in this study was in agreement with Wu et al. (2004), who found that the TP value of C. illinoensis ranged from 12.84 to 20.16 mg GE/g. Condensed tannin content Most phenolic compounds commonly identified in walnut and hickory are phenolic acids and condensed tannins (Fukuda et al., 2003; Ito et al., 2007; Zhang et al., 2009). The condensed tannin (CT) content evaluated with the vanillin assay showed differences among Carya species, ranging from 13.21 to 59.62 mg CE/g defatted kernel, which was similar to the values found by Polles et al. (1981). Also, Prado et al. (2009a) reported that the CT content of pecan nut (C. illinoinensis (Wangenh.) C.
Koch) shell infusion and acetone extracts of kernel cake were 43 ± 7 and 16.4 ± 4.2 mg CE/g, respectively.The CT content of defatted kernel from different pecan cultivars ranged from 23 to 47 mg CE/g (Villarreal-Lozoya et al., 2007).Among the four species, C. hunanensis presented the highest CT values, while C. illinoensis, had the lowest (Table 1).Carya species showed the following in a descending order: C. hunanensis > C. cathayensis > C. dabieshanensis > C. illinoensis (P < 0.05). Scavenging effect on DPPH radical 1,1-Diphenyl-2-pycrylhydrazyl (DPPH) is a stable nitrogen-centered free radical whose color changes from violet to yellow upon reduction by either the process of hydrogen or electron donation.Substances which are able to perform this reaction can be considered as antioxidants and therefore, radical scavengers (Brand-Williams et al., 1995).The scavenging effects of extracts from Carya kernels tested on the DPPH radical were measured as shown in Figure 1.The scavenging activity of extracts on inhibition of the DPPH radical was related to the concentration of extracts added, the activity increased as a result of increasing concentration for each species.The scavenging effect of extracts from four species of Carya kernels on the DPPH radical followed the order: C. dabieshanensis > C. cathayensis > C. hunanensis > C. illinoensis (P < 0.05) and the half maximum inhibitory concentration (IC 50 ) were 1.140, 1.364, 1.437 and 3.682 mg/ml, respectively, which was low, compared to that reported by Zhu et al. (2008), who stated that, the percentage of DPPH radical scavenging activity at 200 l volume of Chinese Hickory (C.cathayensis) kernel ethanol extracts, -tocopherol and butylated hydroxyanisole (BHA) were 87.8, 82.4 and 87.8%, respectively. Villarreal-Lozoya et al. (2007) reported that, the mean value of the scavenging effect on DPPH radical of different pecan cultivars was 487 ± 42 g Trolox equivalents per gram of defatted sample.The scavenging effect on DPPH radical of acetone extracts of kernel cake from Pecan nut were 68.0 ± 21.0 mg Trolox equivalent antioxidant capacity per gram of defatted sample (Prado et al., 2009b).Futher study on the milligram Trolox equivalent antioxidant capacity per gram of four species in this study should be done. Reducing capacity The Fe 3+ -Fe 2+ transformation was determined as the reducing capacity in this study.The presence of reductants (antioxidants) in the samples would result in the reduction of Fe 3+ to Fe 2+ by donating an electron.The amount of Fe 2+ complex can then be monitored by measuring the formation of Perl's Prussian blue at 700 nm.Increasing absorbance at 700 nm indicates an increase in reductive ability (Ebrahimzadeh et al., 2010).The reducing capacity of extracts from the four species of Carya kernels also increased with increasing amount of the extracts and decreased in the order: BHA > C. dabieshanensis > C. cathayensis > C. hunanensis > C. illinoensis (P < 0.05) and the absorbances were 1.19, 0.246, 0.237, 0.22 and 0.073 at the concentration of 0.50 mg/ml, respectively (Figure 2).Zhu et al. (2008) reported that the reducing capacity of Chinese Hickory (C.cathayensis) kernel ethanol extracts, -tocopherol and BHA at 700 nm was 0.76, 0.64, 1.08, at 200 l volume, respectively.It was found that, the reducing capacity of four species of Carya kernels was correlated with the phenolic compounds from the correlation analysis. 
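The IC50 values quoted above follow directly from the scavenging equation given in the methods. A minimal sketch of that calculation is shown below; the absorbances are invented for illustration, and the IC50 is estimated here by linear interpolation on the dose-response curve, which is one simple way to obtain it.

```python
"""Sketch: DPPH scavenging percentage and IC50 by linear interpolation."""
import numpy as np

def scavenging_pct(a_sample, a_negative):
    """Scavenging activity (%) = [(A(-) - As) / A(-)] x 100."""
    return (a_negative - a_sample) / a_negative * 100.0

a_negative = 0.82                                    # hypothetical control A517
conc = np.array([0.5, 1.0, 1.5, 2.0, 2.5])           # mg defatted kernel / ml
a_sample = np.array([0.62, 0.45, 0.33, 0.24, 0.18])  # hypothetical sample A517

inhibition = scavenging_pct(a_sample, a_negative)
for c, i in zip(conc, inhibition):
    print(f"{c:.1f} mg/ml -> {i:5.1f} % inhibition")

# IC50: concentration giving 50 % inhibition, interpolated on the dose curve
ic50 = np.interp(50.0, inhibition, conc)
print(f"IC50 ~ {ic50:.2f} mg/ml")
```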
Superoxide anion scavenging activity Compared with other oxygen radicals, the superoxide anion (O 2 •-) has a longer lifetime, can move to to a target cell at a longer distance and thus, is more dangerous.Therefore, it is important to study the ability of the extract to scavenge superoxide anion.Recent researches showed that, scavention of superoxide anion radicals is of importance for protection against early events in oxidative damage (Hu and Skibsted, 2002).In this study, the scavenging effect of extracts from four species of Carya kernels on superoxide anion radical followed the order: C. dabieshanensis > C. hunanensis > C. cathayensis > C. illinoensis (P < 0.05) and were 27.44, 26.15, 22.80 and 1.99 mg AE/g, respectively (Figure 3).Sun et al. (2004) reported that, the scavenging effect on superoxide was related to the number of active hydroxyl groups in the molecules.Therefore, the strong scavenging effect on superoxide by C. dabieshanensis, C. hunanensis and C. cathayensis may be due to the abundant phenolic hydroxyl groups. Correlation between total phenolic content and antioxidant activity It is important to examine the correlation between the content of the total polyphenols and antioxidant potential because some authors have reported that, there is no correlation between the content of these main antioxidant compounds and the radical scavenging capacity (Yu et al., 2002).The results obtained by us do not support these claims.In the present study, there is a strong correlation between total phenolic content and reducing power capacity (R 2 = 0.9985).In addition, the content of phenolic compounds was also highly correlated with 1,1-diphenyl-2-pycrylhydrazyl radical scavenging capacity (R 2 = 0.9675) and superoxide anion scavenging capacity (R 2 = 0.9739).These data are in accordance with others, who have shown that high total phenol content increases the antioxidant activity (Villarreal-Lozoya et al., 2007;Prado et al., 2009b).Therefore, the phenolics of Carya kernels may be responsible for its antioxidant properties, but further studies are warranted for the isolation and identification of individual phenolic compounds and also in vivo studies are needed for a better understanding of their mechanism of action as an antioxidant. Tannins are water-soluble polyphenols that are present in many plant foods.In this work, data also showed a strong correlation of condensed tannin with reducing power (R 2 = 0.9049), DPPH radical scavenging capacities (R 2 = 0.8544) and superoxide anion scavenging capacity (R 2 = 0.9195).This may be interpreted that, the tannins were the main polyphenolics of Carya kernels.A remarkable radical scavenging effect against DPPH (EC 50 = 0.34-4.72M) of tannins from the n-butanol (n-BuOH) extract of walnuts (the seeds of Juglans regia L.) were also measured by Fukuda et al. (2003). Conclusions The results from various free radical scavenging systems Carya species revealed that, the four species of Carya kernels have significant antioxidant activity.The significant variation in antioxidant properties, total phenolic content, condensed tannin content of different species of Carya was observed in this study.C. dabieshanensis possess the highest antioxidant activity and C. illinoensis had the lowest.A correlation between total phenolic content, condensed tannins and the antioxidant properties was also observed.This study can be used as a basis for future breeding programs aiming to develop Carya kernels with improved nutritional profile and health benefits. 
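The R² values reported in this section come from regressing the species means of one variable on another across the four species. Because the per-species total phenolic and condensed tannin means sit in Table 1 (not reproduced here), the sketch below demonstrates the calculation with two read-outs whose species means are given in full in the text, in the order C. dabieshanensis, C. cathayensis, C. hunanensis, C. illinoensis; substituting the Table 1 values would give the correlations quoted above.

```python
"""Sketch: how the R^2 values for the correlation analysis are computed."""
import numpy as np
from scipy import stats

reducing_power = np.array([0.246, 0.237, 0.220, 0.073])   # A700 at 0.50 mg/ml
superoxide = np.array([27.44, 22.80, 26.15, 1.99])        # mg AE/g defatted kernel

slope, intercept, r, p, se = stats.linregress(reducing_power, superoxide)
print(f"R^2 = {r ** 2:.4f}, p = {p:.4f}")

# Replacing `reducing_power` with the Table 1 total phenolic (or condensed
# tannin) means gives the R^2 values reported for TP/CT vs each assay.
```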
Figure 1 . Figure 1.DPPH radical scavenging effect of methanol extracts from defatted Carya kernels .Each value represents mean ± standard deviation of three replicates. Figure 2 . Figure 2. Reducing capapcity of methanol extracts from defatted Carya kernels.Each value represents mean ± standard deviation of three replicates. Table 1 . Total phenolic contents and condensed tannin contents of methanol extracts from defatted Carya kernels.
2018-12-05T10:55:28.747Z
2011-09-30T00:00:00.000
{ "year": 2011, "sha1": "a3fe44da384be1b19ee6362e04c172bac36909a7", "oa_license": "CCBY", "oa_url": "https://academicjournals.org/journal/AJB/article-full-text-pdf/CD12F8032483.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "4a1ac657b9917330af738aacc6f4015585fdb92a", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Chemistry" ] }
49299197
pes2o/s2orc
v3-fos-license
Granular cell type of ameloblastoma Ameloblastoma is a locally invasive tumor derived from odontogenic epithelium. An uncommon variant of ameloblastoma is the granular cell type, which cannot be distinguished from other ameloblastoma subtypes by clinical and radiographic findings alone. Only review of its microscopic features allows distinction from other subtypes. The purpose of this article is to present a case of granular cell ameloblastoma. This subtype should be distinguished from the other histopathologic subtypes because of its higher recurrence rate and more aggressive biological behavior. Radiographic and histologic findings as well as treatment are also discussed. INTRODUCTION Ameloblastoma is a locally invasive tumor derived from odontogenic epithelium. [1] The majority of patients present in the fourth decade. [2] Men are affected more often than women [3] and more than 80% of ameloblastomas are in the mandible (mostly the angle and ramus). [4] Clinically, jaw swelling and pain are the most frequent presenting symptoms. [5] Radiographically, ameloblastoma is classified into solid (multicystic) and unicystic types. [4,6,7] Microscopically, the follicular and plexiform patterns are the most frequent, and less common histopathologic subtypes include the acanthomatous, granular cell, desmoplastic, and basal cell types. [5] Granular cell ameloblastoma (GCA) is a rare subtype (<3/5%). [8] It cannot be distinguished from other ameloblastoma subtypes by clinical and radiographic findings alone. [9] The histopathologic features of GCA are characterized by groups of granular cells, which have abundant cytoplasm filled with eosinophilic granules. [5] The granular cells usually form the central mass of the epithelial tumor islands and cords. The periphery of the islands consists of nongranular columnar cells. [10] Sometimes, the granular cell phenotype has been attributed to an aging or degenerative change in long-standing lesions. [6] However, this tumor usually shows a higher recurrence rate and more aggressive behavior, which demand close postoperative follow-up. [10] The purpose of this article is to present a case of GCA and review the microscopic features that allow its distinction from other ameloblastoma subtypes. Radiographic and histologic findings as well as treatment are also discussed. CASE REPORT A 47-year-old male presented with a chief complaint of a painless swelling of his right mandible and mobility of the lateral incisor and canine teeth on the same side. The swelling had begun 3 years earlier and reached its present size 8 months before presentation; mobility of the teeth had been noted for 3 months. There was no lymphadenopathy or tenderness [Figure 1a]. A panoramic radiograph showed a large, multilobular radiolucency with ill-defined borders, located in the body of the partially edentulous right mandible and extending from the lateral incisor to the first molar area [Figure 1b]. As part of the preoperative management of the patient, routine biochemical and hematological investigations were done, and all were within normal limits. With a differential diagnosis of central giant cell granuloma, odontogenic tumors, or other centrally located mesenchymal tumors, the patient was posted for further evaluation. An incisional biopsy was done, but the resected tissue was insufficient to arrive at a histopathological diagnosis. The patient refused a further incisional biopsy; based on the suggestion of the surgeon, the patient was posted for surgery.
Under general anesthesia, removing part of the jawbone including tumor with right lateral and canine teeth was performed. In gross, tumor appeared as a combination of cystic and solid areas [ Figure 2]. Histopathology survey of surgical specimen revealed a combination of cystic and solid areas. The peripheral layer of cystic areas consisted of a parallel arrangement of tall cylindrical cells with reverse polarity [ Figure 3a, white arrow] of their hyperchromatic nuclei and vacuolization of the cytoplasm, and in solid area, the accumulations of cell rich in eosinophilic granular cytoplasm were found [Figures 3b, black arrows]. Furthermore, in the periphery of solid part of islands and cords of epithelium, a row of cell similar to ameloblast was found. According to above findings, diagnosis of GCA was given. After 2 months, the radiography which was taken showed acceptable healing improvement and patient schedule for follow-up in 6-month interval [ Figures 4a and b]. DISCUSSION The age distribution of granular cell variant is similar to the other types of ameloblastomas which shows an approximately equal prevalence in the third to seventh decades of life. About 85% of tumors occurred in the mandible, the vast majority of which affected the molar-ramus region. [5] Jaw swelling and pain were the most frequent presenting symptoms. Compared to the other ameloblastoma subtypes, no distinguishing radiographic findings have been reported, the patient in this study was completely matched to above finding. In review of literature and case report which was done by Arora et al., similar clinical and histopathological features with our case also could be found. [8] Histopathologically, GCA has numerous large eosinophilic granular cells. These cells usually form the central mass of the epithelial tumor islands and cords. The periphery of the islands consists of nongranular tall columnar cells. GCA is diagnosed by the presence of granular cells, which usually occur within the central area of tumor and progressively replace the stellate reticulum. [11] Our case also showed similar features. Ultrastructurally, it has been revealed that the lysosome accumulation in these cells provides the characteristic granularity. [5] It is evident from the literature; there are two main lines of interpretation about nature of granular cells, some consider it as a metabolic, while others of the view that it represent a degenerative process. More recent observation supports the later view to be more tenable based on the increased expression of death signaling molecules. Taneeru et al. suggested that the synthesis of signaling molecules such as β-catenin and Wnt-5a is upregulated in the granular cells, but their transportation or secretion is impaired, resulting their accumulation within granular cells, as autophagosomes. [12] The GCA has a more aggressive behavior compared with the other subtypes; it may be locally aggressive and has relatively higher recurrence rate. [9] Unlike the case reported in this article, despite curettage with peripheral osteotomy which was done, after 2 months, radiography showed acceptable healing improvement. However, we need an extended period of follow-up in this patient for better judgment. The differential diagnosis of GCAs includes other oral lesion with a similar morphology of granular cell accumulation such as granular cell tumor, granular cell odontogenic tumor, and congenital epulis, but these lesions usually could differentiate easily. 
[5] Treatment of ameloblastomas should be based on patient's history, clinical, radiographic examination, and finally histopathology findings. [13,14] However, similar to the other types of solid ameloblastoma, the prognosis is more dependent on the surgical procedures, i.e., GCAs treated by enucleation or curettage exhibit a high recurrence rate. [5] Surgical options include segmental resection, en bloc resection, simple curettage, and excision with peripheral osteotomy. [13,14] The last one which was done for our patient and after 2 months clinically [ Figure 4a] and radiography which was taken [ Figure 4b] showed acceptable healing improvement and patient schedule for follow-up in 6 months interval. CONCLUSION GCA is a rare condition with unique histopathology findings; this subtype should be distinguished from the other histologic subtypes because of its higher recurrence rate and more aggressive behavior and necessity of a long period of follow-up. Declaration of patient consent The authors certify that they have obtained all appropriate patient consent forms. In the form the patient(s) has/have given his/her/their consent for his/ her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed. Financial support and sponsorship Nil.
2018-06-21T00:27:03.646Z
2018-05-05T00:00:00.000
{ "year": 2018, "sha1": "4a0d635cda9505cf6be795b11e3f5505f610b5eb", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/1735-3327.231868", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "c15ddbec485f013a5629c5352ae72d0f17bdc7e7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
14771089
pes2o/s2orc
v3-fos-license
Caution for Diagnosis and Surgical Treatment of Recurrent Cholangitis Abstract The hepatocellular carcinoma (HCC) patients with bile duct tumor thrombus (BDTT) usually have no specific clinical symptoms at early stages. HCC with BDTT was usually misdiagnosed when the intrahepatic tumor was small, even undetectable. In this study, 5 cases of HCC with BDTT misdiagnosed as choledocholithiasis and cholangitis in the local hospital are described. We analyzed retrospectively and summarized our experiences of these 5 HCC patients with BDTT misdiagnosed in the local hospital during the past 5 years. The diagnosis, treatment, and outcome of the patients are discussed. Three patients underwent hepatectomy with thrombectomy and T-tube drainage. One patient underwent hepatectomy with the resection of the common bile duct and hepatojejunostomy, and palliative surgery was performed in 1 patient with portal vein tumor thrombus and intrahepatic metastasis. The patients were followed for 6–22 months; 4 patients died of tumor recurrence and metastasis or hepatic failure, despite 3 of these patients having received transhepatic arterial chemotherapy and embolization or radiofrequency ablation therapy. Early and accurate diagnosis of HCC with BDTT is very important. When patients have a history of abnormal recurrent cholangitis, HCC with BDTT should be highly suspected. Intraductal ultrasonography (US), intraoperative US, and histopathological examination are very valuable for the diagnosis. The prognosis of HCC patients with BDTT is dismal. Identification of this type of patient is clinically important, because surgical treatment may be beneficial. INTRODUCTION H epatocellular carcinoma (HCC) is one of the most common malignant neoplasms worldwide and is characterized by a low rate of early diagnosis and high mortality, especially in Eastern Asia. It is the fifth most common malignancy in men and the ninth most common malignancy in women. 1 Worldwide, HCC contributes to more than 695,900 deaths annually. 2 HCC generally spreads through the liver via the portal vein. Portal vein tumor thrombus is frequently observed in resected liver specimens and its incidence is high. 3,4 Tumor thrombus is also detected within the bile duct, where they can cause obstructive jaundice. HCC with bile duct tumor thrombus (BDTT) is relatively rare; its incidence is approximately 1.2%-9.0%, and its clinical and pathological characteristics remain to be defined. 5 Jaundice in HCC patients is divided into hepatocellular and icteric types in terms of its underlying pathophysiology. 6 Hepatocellular-type jaundice in patients with HCC is typically associated with advanced liver cirrhosis or extensive tumor infiltration to liver parenchyma that leads to hepatic insufficiency. 3,7 For these patients, life expectancy is short, and aggressive treatment modalities, including surgery, are not recommended. Icteric-type jaundice is caused by obstruction of the bile duct by BDTT. Once BDTT in HCC patients extends to the common hepatic duct or common bile duct, it causes obstructive jaundice. The HCC patients with BDTT usually have no specific clinical symptoms or signs at early stages although modern diagnostic modalities are available. 8,9 It is usually difficult to make accurate diagnosis before operation, because of low incidence and limited awareness of its clinical and imaging features to find the BDTT preoperatively. 
10 Because of insufficient knowledge on this disease, it tends to misdiagnose BDTT as choledocholithiasis and cholangitis, especially that intrahepatic tumor might be small, even undetectable. In the present study, we analyzed retrospectively and summarized our experience of 5 HCC patients with BDTT misdiagnosed as choledocholithiasis and cholangitis in the local hospital during the past 5 years. To our knowledge, this is the first report describing cholangitis as the initial symptom of HCC. Patients Five patients misdiagnosed as choledocholithiasis and cholangitis in the local hospital were confirmed HCC with BDTT by surgery and histology between 2007 and 2012 at our hospital. Five patients were admitted to the local hospital with initial symptoms (high fever, jaundice, and abdominal pain) of cholangitis. All the patients received emergency treatment after abdominal ultrasound examination and/or abdominal computed tomography (CT) scan. Three patients received choledocholithotomy and T-tube drainage therapy. The other 2 patients received emergency endoscopic nasobiliary drainage (ENBD) drainage; 1 case was confirmed as BDTT after thrombus extraction during endoscopic retrograde cholangiopancreatography (ERCP) and then transferred to our hospital for further treatment. The other patient received plastic stent placement after ENBD drainage, and further received choledocholithotomy and choledochojejunostomy because of recurrent cholangitis. Methods Laboratory data about the patients before surgery were recorded and analyzed. The patients received 2 or more preoperative diagnostic imaging procedures, including transabdominal ultrasonography, helical CT with plain scan and enhanced scans, and magnetic resonance imaging (MRI) with magnetic resonance cholangiopancreatography and ERCP to confirm the diagnosis. All available imaging data including diagnosis reports and images were retrospectively reviewed by 2 radiologists. A consensus was reached with the main findings or signs recorded. Surgical and pathologic reports were also reviewed and correlated with the major findings or signs on comprehensive imaging. All the patients were discharged after sufficient recovery. This study was approved by the Institutional Review Board of the First Affiliated Hospital, Xi'an Jiaotong University, Xi'an, China. Follow-Up During the first 6 months postoperatively, the patients were reexamined every 1-2 months. After that the patients were reexamined every 3-6 months. At each follow-up visit, clinical, laboratory, and radiological (abdominal CT scan and chest X-ray) data were collected. All 5 patients were followed-up until the end of 2012. Patients' Characteristics The patients included 4 men and 1 woman between the age of 47 and 72 years. The characteristics of the patients are summarized in Table 1. Data from these patients were collected preoperatively, and notes from the referring hospital were reviewed whenever possible. Classification of HCC With BDTT HCC with BDTT was classified according to the classification proposed by Ueda et al, 11 on the bases of the location of BDTT. HCC with BDTT was classified as type 1 (BDTT involving the second-order intrahepatic duct), type 2 (BDTT involving the first-order intrahepatic duct), type 3 (BDTT involving the hepatic confluence), and type 4 (dislodged BDTT within the common hepatic duct). The 5 cases were classified as shown in Table 2. 
According to Ueda et al classification, 11 4 cases belong to dislodged BDTT within the common hepatic duct or common bile duct and 1 case was BDTT involving the hepatic confluence. "+" stands for Positive; "À" stands for Negative. AFP ¼ alpha-fetoprotein (range 0-7.02 ng/mL), DBIL ¼ direct bilirubin (range 0-6 μmol/L), ENBD ¼ endoscopic nasobiliary drainage, HbsAg ¼ hepatitis B surface antigen, HCVAb ¼ hepatitis C virus antibody, TBIL ¼ total bilirubin (range 6-20.5 μmol/L). Treatment Strategies All five patients received emergency treatment in the local hospital when they were misdiagnosed as choledocholithiasis and cholangitis. Three patients received choledocholithotomy and Ttube drainage therapy and 2 of them were confirmed as BDTT pathologically after the operation. BDTT was detected again by T-Tube cholangiography after the patient was admitted to our hospital (Figure 1). These 2 patients underwent hepatectomy with thrombectomy and T-tube drainage after 14 and 20 days, respectively. The other patient was admitted to our hospital 4 months after the first treatment and only received percutaneous transhepatic biliary drainage (PTBD) therapy because of the portal vein tumor thrombus and intrahepatic metastasis. Two of 5 patients received emergency ENBD drainage, and 1 case was confirmed as BDTT after thrombus extraction during ERCP and then transferred to our hospital in 7 days. This patient underwent segments VII and VIII bisegmentectomy with thrombectomy and T-tube drainage. The last patient received plastic stent placement after ENBD drainage, and further received choledocholithotomy and choledochojejunostomy in the local hospital. This patient underwent left hemihepatectomy and hepatojejunostomy 92 days after the first treatment. The determination of resectability was based on tumor characteristics, remnant liver volume, liver function, and general status of the patients (Table 3). Outcome Four patients underwent hepatectomy and BDTT removal. The extent of resection is shown in Table 3. One patient received PTBD therapy because of the portal vein tumor thrombus and intrahepatic metastasis. Surgical complications occurred in 1 patient, including pleural effusion and subphrenic abscess, that was successfully managed with conservative treatment. The patients were followed for 6-22 months. Four patients died of tumor recurrence and metastasis or hepatic failure with a mean survival time of 15 months, despite 3 of these patients received transhepatic arterial chemotherapy and embolization (TACE) or radiofrequency ablation therapy. DISCUSSION HCC with BDTT has been reported to be rare and accounts for only 1.2%-9.0% of HCC. 12 It is generally believed that the invasion of HCC into the biliary tree ultimately leads to the formation of BDTT. However, recent studies revealed that primary tumor might be small, even undetectable, and there was no histopathologic evidence of direct tumor invasion into bile duct wall in some patients. 13,14 When the intrahepatic tumor is very small and cannot be detected by imaging examination, and recurrent cholangitis was the predominant symptom of the patients, it is difficult to distinguish HCC with BDTT from choledocholithiasis and cholangitis. In this study, 5 cases were misdiagnosed as choledocholithiasis and cholangitis in the local hospital because of cholangitis as the predominant symptom without intrahepatic tumor detection. 
Identification of this particular type of HCC is clinically important, because the presence of BDTT does not render HCC unresectable. After appropriate preoperative management, hepatectomy with thrombectomy appears to be effective for HCC patients with BDTT. The mechanism of BDTT in HCC patients is not well understood. Previous reports demonstrated that HCC invades into the cystic duct to cause tumor thrombus shedding and biliary tract hemorrhage, which further lead to obstructive jaundice. 15 The reasons why HCC is present as BDTT without detectable primary intrahepatic tumor are as follows; the tumor may originate from cancerization of ectopic hepatocytes in the bile duct wall, or the primary tumor is just too small to be identified, or the tumor located at the origin of or close to the intrahepatic duct grows intraluminally and stretchs inferiorly. 16 With regard to the pathogenesis of BDTT, its formation is mainly through the following mechanisms 13,14,17 : HCC cells directly invades the bile duct and tumor tissues fill the bile duct; tumor tissues rupture and a fragment of tumor tissue separate from the primary lesion, migrate to different sites of the extrahepatic bile ducts, and result in obstruction jaundice and cholangitis; and hemorrhage from the tumor invasion may partially or completely fill the distal bile duct with tumor-containing blood clots. Generally, BDTT was not adherent to the bile duct wall so it could be removed easily. BDTT rarely invade the walls of the large bile ducts around the hepatic hilus. Therefore, liver resection of the involved hepatic segments with thrombectomy through a choledochotomy is a rational technique for curative resection. 12 More recently, efforts on stem cell biology may shed light on the pathogenesis of BDTT. Accumulating evidences indicate that HCC with BDTT, especially with small or undetectable primary lesion and/or no histopathologic evidence for bile duct invasion, might arise from liver stem/ progenitor cells residing in the canals of Hering and, possibly, some primary lesions are formed first within the intrahepatic biliary tree. 18 Early and accurate diagnosis of HCC with BDTT is very important. In our cases, HCC patients with BDTT were misdiagnosed as choledocholithiasis because cholangitis was the predominant initial symptom. The primary suspect was a bile duct stone and the patients received choledochotomy or ENBD therapy at the initial admission in the local hospital. However, neither a bile duct stone nor a gallbladder stone had been shown in the past history. In this study, all the 5 patients were positive for the markers of chronic viral hepatitis. A relatively high percentage of cirrhosis was observed on preoperative images and during surgery in our patients. Although parenchymal mass may not be detectable on crosssectional images in the BDTT patients, other signs were helpful in the diagnosis of HCC with BDTT, such as abnormal recurrent cholangitis, patients in the high-risk population with history of liver cirrhosis, hepatitis B surface antigen, or hepatitis C virus antibody positive. Patients with these features should be considered the potential diagnosis of HCC with BDTT and concentration of AFP levels should be detected for differential diagnosis of HCC. Intraductal ultrasonography (IDUS) is very valuable for diagnosing this disease. Sasaki et al 19 reported that IDUS can distinguish between BDTT caused by HCC and bile duct stone according filling defect with or without acoustic shadow. 
However, IDUS has not been widely accepted because it is invasive and requires special equipments and specific expertise. The characteristics of early enhancement pattern on dual-phase contrast enhanced CT or dynamic contrast enhanced MRI are important to diagnosed BDTT. 20 It was reported that color Doppler sonography can also effectively detect tumor vascularity of BDTT. 21 Furthermore, if BDTT is suspected, it is still important to look for more sensitive techniques such as histopathological examination. Moreover, intraoperative ultrasonography (IOUS) should be performed to find the potential intrahepatic tumor or to determine the resection level, especially in patients with cirrhosis. 3 The ideal therapy for HCC with BDTT is to remove the primary tumor and the BDTT surgically. Surgical methods include lobectomy of the liver, hepatectomy with removal of BDTT, and thrombectomy through choledochotomy followed by T tube drainage. 16,22 As BDTT is not tightly adhesive to the bile duct wall, it is not difficult to remove during exploration of the biliary tract. However, although surgical treatment can achieve good results, most patients have missed the best time for surgery because of misdiagnose at the time of disease onset. Removal of BDTT via choledochotomy and resection of intrahepatic tumor were considered in patients with adequate hepatic function. 10 Bile duct drainage, such as ENBD, plastic stent placement, PTBD, or PTBD plus stent placement, should be practiced to relieve jaundice in patients with extensive intrahepatic metastasis, multiple recurrent lesions or inadequate hepatic function. After bile duct drainage is performed, TACE may be performed to inhibit the vascularization of the tumor and the BDTT and thereby control tumor growth or even prevent fatal bile duct bleeding caused by the BDTT. HCC with BDTT may be accompanied by portal vein tumor thrombus simultaneously, as both the portal vein and bile duct are surrounded by the same glisson sheath. 14 Portal vein tumor thrombus indicate systemic metastasis, which lead topoor prognosis, while the HCC that only combined BDTT could get good outcomes after treatment. HCC patients with BDTT have a poorer prognosis than other HCC patients, probably because the following reasons: 10,12,22 (a) many patients do not receive effective treatment at the best time for surgery because of misdiagnose at the initial admission to the local hospital. (b) HCC patients with BDTT is often accompanied by liver cirrhosis and obstructive jaundice, it is difficult to assess hepatic functional reserve before determining treatment. (c) In HCC patients with BDTT, shorter survival time may be associated with portal vein invasion, portal vein tumor thrombus and intrahepatic tumor recurrence. In summary, early and accurate diagnosis of HCC with BDTT is very important. When patients have history of abnormal recurrent cholangitis, HCC with BDTT should be highly suspected. IDUS, IOUS and histopathological examination are very valuable for diagnosing this disease. The
2018-04-03T00:40:38.408Z
2014-08-29T00:00:00.000
{ "year": 2014, "sha1": "2389d75ea02d5bad3c80aa9dfb53e50dbc509461", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1097/md.0000000000000080", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2389d75ea02d5bad3c80aa9dfb53e50dbc509461", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233437717
pes2o/s2orc
v3-fos-license
The Relationship of 25-Hydroxyvitamin D Plasma Levels with Breast Cancer Stadium Assessed from Menopause Status Breast cancer is a type of cancer with high incidence and mortality, especially in developing countries. Vitamin D regulates the expression of a number of genes involved in the development of cancer cells. The aim of this study is to analyze the relationship between 25-hydroxyvitamin D (25(OH)D) plasma levels and breast cancer stage based on menopausal status. This is an observational study with a cross-sectional design. The research subjects were 53 newly diagnosed breast cancer patients who had not received chemotherapy. Menopausal status and stage data were obtained from interviews and medical records. Plasma 25-hydroxyvitamin D levels were measured by the enzyme-linked immunosorbent assay (ELISA) method. Stages II, III and IV had mean vitamin D levels of 28.56 ng/ml (95% CI: 23.61-33.52 ng/ml), 28.18 ng/ml (95% CI: 24.49-31.87 ng/ml) and 27.86 ng/ml (95% CI: 22.68-33.04 ng/ml), respectively. The average plasma concentration of 25(OH)D in pre-menopausal patients was 28.54 ng/ml, and the average plasma 25(OH)D level in post-menopausal patients was 27.79 ng/ml. There was no significant relationship between plasma levels of 25(OH)D and breast cancer stage in either pre-menopausal or post-menopausal patients. I. Introduction Breast cancer is the most common type of cancer in women, with a high mortality rate. In 2012, there were 522,000 deaths from breast cancer worldwide. In Asia, as many as 639,824 cases of breast cancer were recorded throughout 2012. Countries with the highest number of cases include China (187,213 cases), India (144,937 cases), Japan (55,710 cases), Indonesia (48,998 cases) and Pakistan (34,038 cases). The highest mortality rates are in India, China, Indonesia, Pakistan and Japan (Bray, F (2013); Ghoncheh, M (2016)). The risk factors for breast cancer are multifactorial, and one of them is nutrition. Vitamin D is a fat-soluble vitamin with various functions. Low levels of vitamin D are associated with a number of diseases such as diabetes, cardiovascular disease, osteoporosis, multiple sclerosis and cancer. Several case-control studies have reported lower levels of vitamin D in breast cancer patients compared to control subjects (Atoum, M.F (2017); Elsoud, M.R.A (2016); Younus A. (2016)). This is inseparable from the role of vitamin D as an anticancer agent. Research by Jeong et al. in animals has shown that vitamin D can inhibit the growth and development of breast cancer cells. In vitro, 1,25(OH)2D (active vitamin D) and its analogues have antiproliferative activity that can inhibit proliferation and trigger apoptosis in cultured breast cancer cells (Jeong, Y. (2015); Chen, J. (2014); Duffy, MJ. (2016)). Vitamin D regulates the expression of a number of genes involved in cancer cell development. In the process of proliferation, vitamin D increases the expression of p21 and p27 and decreases the expression of CDKs and cyclins. In the process of apoptosis, vitamin D increases the expression of antiapoptotic proteins. In terms of inhibiting metastasis, vitamin D is able to reduce MMP expression and decrease HIF1α, VEGF and IL-18 expression. 10 Several studies have reported lower levels of vitamin D in women who have gone through menopause compared to women who have not.
Given the large role of vitamin D in the development of cancer cells, vitamin D deficiency in women who have undergone menopause is a risk factor for breast cancer. On the other hand, exposure to estrogen in breast tissue causes women who have not menopause in the age range 45-55 years to be more at risk for breast cancer. Breast cancer stage is a clinical assessment to describe the size of the cancer, its spread and its effects on other organs based on 3 criteria, namely tumor size, lymph nodes and metastases. Judging from the mechanism of action, vitamin D has an influence on the 3 components that determine these stages. Therefore, researchers wanted to see the relationship between plasma 25-hydroxyvitamin D (25 (OH) D) levels with breast cancer stage based on menopausal status. II. Research Methods Based on the inclusion criteria, the subjects in this study were women who were diagnosed with breast cancer and had not undergone chemotherapy. A total of 53 subjects were collected from October 2017 -February 2018 at the surgical Oncology clinic of H. Adam Malik Hospital, Medan. Subjects were interviewed to determine the characteristics of diagnosed age, age of menarche, menopausal status and age, history of contraceptive use and history of breast cancer, while breast cancer stage data were obtained from patient medical records. The subject's blood was drawn as much as 3 cc. The blood is inserted into the EDTA tube and the plasma is separated. The plasma is then stored at -80c until the vitamin D levels are checked. Plasma vitamin D levels were checked by ELISA method using 25-hydroxyvitamin D (25 (OH) D) Kit (® DBC Canada). The inspection protocol follows the instructions on the kit. Ethical Clearance This research was approved by the Health Research Ethics Committee, Faculty of Medicine, University of North Sumatra with letter no 473 / TGL / KEPK FK USU-RSUP HAM / 2017. Before conducting interviews and taking blood, all research subjects signed the informed concent after being given an explanation of the aims and benefits of the study. Statistic Analysis Normality test using Shapiro-Wilk Bivariate analysis using the Spearman rank correlation test ANOVA test to assess differences in mean levels of vitamin D in the group stage and menopausal status. The limit of significance set is 5%. Results Based on the characteristics of research subjects, it is known that 45.3% of study subjects were in the age range of 40-49 years, 58.5% experienced menarch under 13 years. There were 50.9% patients with post-menopause, and 59.3% with menopausal age 45-50 years, there were 49.1% with a history of hormonal contraceptive use and 17.0% with a family history of breast cancer (Table 1 Based on the stage of breast cancer, there were no subjects with stage I, 11 subjects with stage II, 25 people with stage III and 17 people with stage IV. Stages II, III and IV each had mean vitamin D levels of 28.56 ng / ml (95% CI; 23.61 -33.52 ng / ml), 28.18 ng / ml (95% CI: 24.49 -31.87 ng / ml), 27.86 ng / ml (95% CI: 22.68 -33.04 ng / ml). There appears to be a decrease in vitamin levels in line with the increase in breast cancer stage, but it is not statistically significant. Based on menopausal status, there were 26 subjects with premenopause and 27 subjects with post menopause. There was no significant difference in vitamin D levels between the menopause and postmenopausal groups, namely 28.54 ng / ml (95% CI: 25.84 -31.24 ng / ml), 27.79 ng / ml (95% CI: 23 , 56 -32.01 ng / ml) (p = 0.758) ( Table 2). 
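The statistical workflow described above (Shapiro-Wilk normality test, Spearman rank correlation between plasma 25(OH)D and stage, and ANOVA comparisons across stage and menopausal groups at a 5% significance level) can be reproduced with standard scientific Python libraries. The following sketch assumes a data table with hypothetical column names vitd, stage and menopause; it illustrates the reported analysis steps rather than the authors' original scripts.

# Minimal sketch of the reported analysis pipeline (Shapiro-Wilk, Spearman,
# one-way ANOVA at alpha = 0.05). The DataFrame `df` and its column names are
# assumptions made for illustration; they are not taken from the paper.
import pandas as pd
from scipy import stats

def analyze(df: pd.DataFrame, alpha: float = 0.05) -> None:
    # Normality of plasma 25(OH)D levels
    w, p_norm = stats.shapiro(df["vitd"])
    print(f"Shapiro-Wilk: W={w:.3f}, p={p_norm:.3f} "
          f"({'normal' if p_norm > alpha else 'non-normal'})")

    # Rank correlation between 25(OH)D level and cancer stage (II=2, III=3, IV=4)
    rho, p_rho = stats.spearmanr(df["vitd"], df["stage"])
    print(f"Spearman rho={rho:.3f}, p={p_rho:.3f}")

    # Mean differences across stage groups and across menopausal status
    stage_groups = [g["vitd"].values for _, g in df.groupby("stage")]
    f_stage, p_stage = stats.f_oneway(*stage_groups)
    meno_groups = [g["vitd"].values for _, g in df.groupby("menopause")]
    f_meno, p_meno = stats.f_oneway(*meno_groups)
    print(f"ANOVA by stage: F={f_stage:.2f}, p={p_stage:.3f}")
    print(f"ANOVA by menopause: F={f_meno:.2f}, p={p_meno:.3f}")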
Discussion According to Arysha (2020) women are one important element in a family or community. Therefore, women's health, especially their reproductive health, is one of the important health problems. Vitamin D is widely known to have an immunogenic role and play an anti-proliferative role in the body as well as an endocrine hormone. Vitamin D deficiency is known to be associated with an increase in the incidence of malignancies from the breast, prostate, and colon (Garland, C.F. 2009). Suboptimal vitamin D levels are assumed to trigger cell proliferation, angiogenesis, and metastasis. In this study, there were no research subjects with stage I. There was no statistically significant relationship between 25 (OH) D levels in each group of breast cancer stages. However, it is still seen that the higher the stage of the study subject, there is a decrease in 25 (OH) D levels. This study is in line with previous research by Karthikayan et al. who found no significant association between cancer stage and levels of 25 (OH) D (Karthikayan, A. 2018). The most common presentation of breast cancer is a lump in the breast that does not cause pain. Some people tend to ignore this symptom. As a result, patients with an advanced clinical stage are a common occurrence. Because everyone's health-seeking behavior is different, this is a confounding factor in clinical staging. Patients with poorly differentiated breast cancer have lower levels of vitamin D (Karthikayan A, 2018) than patients with moderately differentiated and well differentiated tumors. This suggests that true vitamin D metabolites inhibit proliferation and induce apoptosis and cell differentiation. Physiologically, changes in a woman's menopausal status are known to have an effect on reducing levels of 25-hydroxyvitamin D (25 (OH) D). This occurs due to changes in diet, lifestyle, insulin sensitivity, and decreased physical activity experienced by menopausal women. In addition, postmenopausal women experience changes in vitamin D metabolism, such as a decrease in the synthesis of vitamin D from the skin or changes in the body's consumption of vitamin D which affects vitamin D status and body physiology (Perez-Lopez FR, 2020). There is also a hypothesis that the presence of estrogen increases activity of enzymes that play a role in activating Vitamin D, decreased levels of estrogen during the menopause tra nsition can trigger symptoms of Vitamin D deficiency (Buchanan, J.R. 1986). Several studies evaluating the relationship between vitamin D and breast cancer based on menopausal status are still inconsistent (Shin, MH (2002); Lin, J (2007); Knight, JA (2007); Rossi, M (2009)). In this study there was no significant relationship between 25 (OH) D levels and menopausal status in the study subjects. Although it appears that 25 (OH) D levels are lower in study subjects with post-menopausal status. This study is in line with Karthikayan et al. who found no significant relationship between menopausal status and levels of 25 (OH) D (Karthikayan A, 2018). In addition, it is in line with research conducted by Anderson et al. who found no significant interaction between Vitamin D, calcium or menopausal status (Anderson, L.N. 2010). Unlike the case-control research conducted by Kawase et al. who found that vitamin D and calcium intake were inversely associated with the development of breast cancer risk in all subjects (pre-and postmenopausal). 
However, vitamin D intake was significantly associated only with the pre-menopausal group of women, and calcium intake was significantly associated with risk in postmenopausal women. This association has been modified by differences in tumor receptor status. So they concluded that vitamin D and calcium decreased the risk of breast cancer in Japanese women and that this relationship differed based on menopausal status and receptor status (ER, PR, HER2). In this study, there was no significant relationship between vitamin D levels and breast cancer stage based on the menopausal status of the study subjects. This is in line with other studies which state that there is no significant relationship between low vitamin D levels and tumor prognosticity (Imtiaz, S. 2014). However, in contrast to other studies that stated 25 (OH) D levels were significantly higher in patients with early stage cancer compared to those with advanced or metastatic stages (Palmeri, C. 2006). The relationship between vitamin D levels, breast cancer, and prognostic factors such as tumor stage, grade, size, lymphatic node metastasis and status of hormone receptors were contradictory. Low vitamin D levels are associated with advanced stage, tumor size, and grade in patients with post-menopausal status (Janbabai, G. 2016), similarly to women with premenopausal status but triple-negative (Yao, S. 2017). Vitamin D insufficiency and deficiency are found in tumors with advanced stages and metastases. , many positive lymph nodes, a low proportion of ER +, PR +, and high Ki-67 (de Sousa Almeida, 2017). The inconsistency of results across existing studies may be related to different sample sizes and limited demographic information related to ethnicity and lifestyle. Menopausal status may be closely related to vitamin D status and vitamin D receptor polymorphisms. In addition, there is a modified influence of environmental factors, such as diet, gene variations that are closely related to vitamin D metabolic pathways such as vitamin D-binding protein, an enzyme that plays a role in activation and degradation of vitamin D. (Atoum, M. 2017).
2021-04-29T06:34:01.399Z
2021-02-15T00:00:00.000
{ "year": 2021, "sha1": "2eca807665855cf89d03df05301913a408767b85", "oa_license": "CCBYSA", "oa_url": "http://biarjournal.com/index.php/bioex/article/download/383/406", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2eca807665855cf89d03df05301913a408767b85", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
54928813
pes2o/s2orc
v3-fos-license
Influence of Putrescine Application on Storability, Postharvest Quality and Antioxidant Activity of Two Iranian Apricot (Prunus armeniaca L.) Cultivars The limited postharvest storage life of apricot is the focus of this study. Presenting a solution to improve the postharvest storage of studied apricot cultivars is the goal. Studding the effect of different concentration of postharvest putrescine on quality attributes and antioxidant activity of two apricot cultivars during storage is the approach taken. The two apricot cultivars (‘Lasgerdi’ and ‘Shahrodi’) were harvested at the commercial ripening stage, and fruits were immerged in 1, 2, 3 and 4 mM putrescine as well as distilled water (control) for 5 min, then fruits were packed in boxes with polyethylene cover and stored at 4°C and 95% relative humidity for 20 days. The changes in weight loss, fruit firmness, total soluble solids, titratable acidity, pH, maturity index, ascorbic acid, total phenolics and antioxidant activity were estimated after 0, 5, 10, 15 and 20 days during storage. The results showed that the weight loss, total soluble solids, pH and maturity index increased significantly while the fruit firmness, titratable acidity, ascorbic acid, total phenolics and antioxidant activity decreased significantly during storage for both cultivars. During storage, a significant difference between control and putrescine treatments in all measured parameters is observed. The putrescine treatments reduced significantly the weight loss and maintained their firmness. In this condition, the highest and lowest of titratable acidity, ascorbic acid, total phenolics and antioxidant activity were observed in treatments of 4 mM putrescine and control, respectively. The data revealed that the quality of apricot fruits was improved by the use of putrescine treatment due to its effect on delaying the ripening processes. Introduction The modern fruit industry needs research promoting the commercial attributes of fruit quality. For Apricot (Prunus armeniaca L.) this is of a great importance since attributes are too frequently not satisfactory to the customers (Davarynejad et al., 2010). Due to its nutritional and antioxidant properties, considerable attention has been paid to it in recent years. Apricot is known to contain considerable of vitamins (A and C), minerals, fibre, carotenoids, flavonoids, lycopene and other antioxidant compounds (Haciseferogullari et al., 2007;Munzuroglu et al., 2003). Apricot is climacteric fruit with a limited postharvest storage life due to acceleration of quality loss, affecting some properties such as fruit firmness, texture, total soluble solids and titratable acidity. A suitable method for shelf life extension, which avoids detrimental effects on quality of fruit, would be beneficial for both the consumer and the producer. A number of strategies have been used to improve the quality properties and shelf life of apricot fruit, such as low temperature storage and postharvest treatments with polyamines (Martínez-Romero et al., 2002), aminoethoxyvinylglycine (AVG) and 1-methylcyclopropene (1-MCP) (Palou and Crisosto, 2003). Polyamines (PAs) are known as a group of natural compounds with aliphatic nitrogen structure that are ubiquitous in plants, animals and microorganisms. The major polyamines are found in every plant cell, such as spermidine (Spd) and spermine (Spm) and putresine (Put) (Galston and Sawhney, 1990). 
It is known that polyamines play important roles in many physiological processes in plants, including growth and development of cell and respond to environmental stresses. Treatment with exogenous polyamines has been reported to increase fruit firmness in apples (Kramer et al., 1989(Kramer et al., , 1991Wang et al., 1993), strawberry (Ponappa et al., 1993), tomato (Law et al., 1991), lemon (Valero et al., 1998a, 1998b, 1998c), peach (Bregoli et al., 2002 and plum (Serrano et al., 2003). Other beneficial effects of exogenous polyamines have been reported for both climacteric and non-climacteric fruit such as delayed colour changes, reduced mechanical damage and susceptibility to chilling injury and increased shelf life Perez-Vicente et al., 2002;Serrano et al., 1996). Thus polyamines treatment has the potential for commercial control of quality properties and increased shelf life of harvested fruit. However, little information exists on the use of different concentration of putrescine to preserve apricot fruit quality during storage. Therefore, the objective of this re-ing a digital acidity assay and expressed as g of malic acid per 100 g of fresh weight (g/100 g FW). The pH measurements were performed using a digital pH meter (Metrohm 601) at 21°C. Maturity index was calculated by dividing total soluble solids to titratable acidity. Ascorbic acid and total phenolics Ascorbic acid was determined by employing the method described by Mazumdar and Magumdar (2003). Results were expressed as mg ascorbic acid per 100 g of fresh weight (mg/100 g FW). The total phenolics were determined by using Folin-Ciocalteu method (Singleton et al., 1999). One gram of apricot tissue was extracted with 10 ml methanol (85%). 250 µl of this extract was dissolved in a 250 µl of sterile distilled water, and then samples were mixed with 2.5 ml of 10-fold-diluted Folin-Ciocalteu reagent and 2 ml of 7.5% sodium carbonate. The mixture was shaked for 1.5 to 2 hours before the absorbance was measured by a Cecil 2010 UV-visible spectrophotometer at 765 nm. Gallic acid was used as a standard. The results were expressed as mg gallic acid equivalent in 100 g fresh weight (mg/GAE 100 g FW). Antioxidant activity Antioxidant activity was assessed according to the method of Ismail et al. (2009). Briefly, 1 g of apricot tissue was extracted with 10 ml methanol (85%). One ml of this extracts were mixed with 2 ml of 0.15 mM DPPH in methanol. The mixtures were shaken vigorously and left to stand for 30 min (under dark condition). The control was prepared by adding 2 ml of DPPH to 1 ml methanol. Absorbance of the resulting solution was measured at 517 nm by a Cecil 2010 UV-visible spectrophotometer. The antioxidant activity is expressed in the form of the percentage of free radical scavenging. Statistical analysis This experiment was conducted according to factorial based on completely randomized block design with 4 replications. Data were analyzed by Statistical Analysis System (SAS) software Version 9.1 using analysis of variance (ANOVA) and differences among means were determined for significance at p<0.05 using Tukey's test. Weight loss The weight loss increased significantly during storage at 4°C in both cultivars (Fig. 1). Similar results were also reported by Ghasemnezhad et al. (2010). There were a significant difference (p<0.05) between treatments of control and putrescine in terms of their effects on weight loss (Fig. 1). 
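Before turning to the treatment comparisons, note that the quality indices defined in the Methods reduce to a few simple formulas: weight loss (%) = [(A-B)/A] x 100, maturity index = TSS/TA, and the DPPH radical-scavenging percentage. The sketch below collects them as helper functions; the scavenging expression (A_control - A_sample)/A_control x 100 is the conventional one and is an assumption here, since the text only states that activity is reported as a percentage.

# Minimal helper functions for the quality indices used in this study.
# The DPPH scavenging formula is the conventional one and is an assumption,
# since the text only states that activity is expressed as a percentage.

def weight_loss_percent(weight_at_harvest: float, weight_after_storage: float) -> float:
    """Weight loss (%) = [(A - B) / A] * 100."""
    return (weight_at_harvest - weight_after_storage) / weight_at_harvest * 100.0

def maturity_index(tss_brix: float, titratable_acidity: float) -> float:
    """Maturity index = total soluble solids / titratable acidity."""
    return tss_brix / titratable_acidity

def dpph_scavenging_percent(abs_control: float, abs_sample: float) -> float:
    """Radical scavenging (%) from DPPH absorbances at 517 nm (assumed formula)."""
    return (abs_control - abs_sample) / abs_control * 100.0

# Example with made-up numbers: a fruit of 42.0 g at harvest weighing 39.5 g
# after 20 days, TSS of 14.2 degrees Brix and TA of 1.1 g/100 g FW.
print(round(weight_loss_percent(42.0, 39.5), 2))       # ~5.95 %
print(round(maturity_index(14.2, 1.1), 2))             # ~12.91
print(round(dpph_scavenging_percent(0.82, 0.31), 1))   # ~62.2 %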
The control treatment had the highest weight loss during storage, followed by the 1 mM putrescine treatment, while the lowest was in 4 mM putrescine treatment. The results were in agreement with the findings re-search was to analyse and compare the effect of different concentrations of postharvest putrescine application on quality properties and antioxidant activity of two most important local and famous apricot cultivars ('Lasgerdi' and 'Shahrodi') during storage. Materials and methods The 'Lasgerdi' and 'Shahrodi' apricot cultivars harvested manually at commercial maturity stage at July 2011 from 14 years-old trees in Mashhad, Iran. The average temperature, the amount of rainfall and relative humidity in growing season (March to July) of 2011 were 28.6°C, 20 mm and 26%, respectively. Soil texture were being sandloam, EC = 4.1 (ds.m -1 ) and soil pH = 7.2. The trees were spaced 6 and 3 m between and along the rows, respectively. Trees were grown under traditional irrigation and routine cultural practices suitable for commercial fruit production. Fruits were transported by a ventilated car to the laboratory soon after harvest, where apricots with defects (sunburns, cracks, cuts and bruises in peel) were discarded and then fruits were selected in accordance with their colour and weight. The homogeneous fruits were randomized and divided into 5 lots of 80 fruit for following treatments in 4 replicates (each replicate contained 20 individual fruit): control (distilled water) and different concentration of putrescine (1, 2, 3 and 4 mM). Treatments were performed by dipping fruits in 10 L of solution for 5 min, and then they were left to dry at room temperature and were packed in boxes with polyethylene cover and after stored at 4°C and 95% relative humidity (RH) for 20 days. After 0, 5, 10, 15 and 20 days (5 days intervals), 3 fruit from each replicate for each treatment (12 fruit) were sampled for analytical determinations. All reagents, solvents and standards were of analytical reagent grade. Weight loss and firmness of fruit The same samples were evaluated for weight loss of the fruit each time at 5 day intervals until the end of experiment. Fruits were weighted in the air on a balance of accuracy of 0.001 g. The weight loss was determined by the following formula: weight loss (%) = [(A-B)/A] × 100. Where A indicates the fruit weight at the time of harvest and B indicates the fruit weight after storage intervals. Fruit firmness was determined by a fruit pressure tester (8 mm diameter probe) on pared surfaces from opposite sides of each fruit and the results were expressed as Newtons (N). Total soluble solids, titratable acidity, pH and maturity index The total soluble solids (TSS) were determined with a digital refractrometer (Erma, Tokyo, calibrated using distilled water). Results were reported as degree °Brix at 21°C. The titratable acidity (TA) was determined by us-ported by Martinez-Romero et al. (2002) and Serrano et al. (2003). According to Woods (1990), weight loss in fruit during storage could be due to the water exchange between the internal and external atmosphere, the transpiration rate being accelerated by cellular breakdown. In this sense, the putrescine treatment modified or consolidated the cell disposition and delays the removal of epicuticular waxes which play an important role in water exchange through the skin, and then lower weight loss would occur. 
With regard to the results, the putrescine treatments showed significantly less of weight loss during storage being a negative correlation between putrescine concentrations and weight loss. Fruit firmness As shown in Fig. 2, fruit firmness (expressed as forcedeformation ratio) declined rapidly during storage at 4°C for both cultivars, that the fruit firmness levels at the initial of the storage period were higher than the end ones just for the all treatments. Martínez-Romero et al. (2002) has reported that apricot fruit firmness decreased significantly during storage. Significant differences (p<0.05) were revealed among the treatments for fruit firmness (Fig. 2). The treatment of putrescine showed significantly highest firmness levels than control treatment during storage. The higher the putrescine concentration applied, the greater the improvement in firmness, that highest firmness values were observed in 4 mM putrescine treatment. Similar results were also reported for apples (Wang et al., 1993), other apricot cultivars (Martínez-Romero et al., 2002), plum (Serrano et al., 2003) and strawberry (Zokaee et al., 2007). Fruit softening during storage is the main factor limiting storage and shelf life of apricot fruit. The effect of polyamines on the reduction of fruit softening or firmness augmentation can be attributed to their capacity cross-link to pectic substances in the cell wall, resulting in rigidification that is detectable immediately after treatment (Abbot et al., 1989) and also as inhibition of the action of walldegrading enzymes, such as pectinesterase, pectinmethylesterase and polygalacturonase and reduce fruit softening during storage . In accordance with this hypothesis, the putrescine exogenously applied went to cell walls to maintain high levels of fruit firmness and these high levels of firmness lead to increased shelf life. Total soluble solids For both cultivars, content of total soluble solids decreased during the first 5 days of storage and from this time (Fig. 4), which is in agreement with results Ghasemnezhad et al. (2010). There was a significant difference (p<0.05) between control and putrescine treatments in terms of their effects on level of titratable acidity (Fig. 4). Among the studied treatments, 4 mM putrescine treatments had the highest amount of titratable acidity and control treatment had the lowest titratable acidity content during storage. Zokaee et al. (2007) also reported that the treated strawberry fruits with putrescine had the highest amount of titratable acidity during storage. The titratable acidity is an important factor in maintaining the quality of apricot fruits, which is directly related to the organic acids content present in the fruit. The putrescine treatment showed decreases were significantly lower in content of titratable acidity than control treatment during storage. Zokaee et al. (2007) and Ishaq et al. (2009) suggested that titratable acidity decreases could be due to consumption of organic acids in fruits during respiration. In the present study it seems that putrescine treatments did have any significant effect on respiration process which could reduction or delay of respiration and maintain titratable acidity. pH As shown in Fig. 5, pH values decreased during the first 5 days of storage and from this time until end of the storage until end of the storage period increased (Fig. 3). Similar patterns of changes were reported by Zokaee and Esna-Ashari (2008). As shown in Fig. 
3, a variation in terms of total soluble solids content was observed among the treatments and the differences were statistically significant (p< 0.05). The highest concentration of total soluble solids was observed for control treatment during storage, followed by 1 mM and 2 mM putrescine treatments while the lowest was in 4 mM putrescine treatment. Similar data were also reported for apricot (cv. 'Tokhm Sephid') and peach (cv. 'Zapherani') (Zokaee and Esna-Ashari, 2008). The during storage, increase in content of total soluble solids was probably due to concentrated juice content as a result of dehydration and hydrolysis of polysaccharides. During storage, all treatments showed increases in content of total soluble solids, although the increases were significantly lower in treatment of putrescine than in control treatment. This effect of putrescine can be attributed to low levels of the respiration rate, ethylene production and delay in ripening process. According to data, one can say that there is an inversely relation between putrescine concentrations and level of total soluble solids during storage. Titratable acidity The results showed that titratable acidity content decreased significantly during storage at 4°C for both culti- Fig. 4. Effect of putrescine on titratable acidity (g/100 g fresh weight) of two Iranian apricot cultivars during storage at 4°C. The results represent the means of 12 fruit in 4 replicates ± standard errors The maturity index (TSS/TA) is responsible for the taste and flavor of apricot. Khan et al. (2008) reported that the lower maturity index in putrescine treatment is might be due to their higher titratable acidity, as compared to control treatment, that is in agreement with our results. Ascorbic acid The content of ascorbic acid decreased significantly during storage at 4°C for both cultivars (Fig. 7). Ishaq et al. (2009) reported also the ascorbic acid content in apricot fruit was reduced during storage. As shown in Fig. 7, a significant variation in ascorbic acid concentration was found among the studied treatments. The lowest concentration of ascorbic acid was observed for control treatment during storage, followed by 1 mM and 2 mM putrescine treatments while the highest was in 4 mM putrescine treatment. Ascorbic acid is an important nutrient quality factors, which is very sensitive to degradation due to its oxidation compared to other nutrients during storage. The all treatments showed decreases in content of ascorbic acid, although the decreases were significantly lower in treatment of putrescine than in control treatment during storage. Ishaq et al. (2009) reported that the content of ascorbic acid decreases during storage could be due to the conversion of dehydroascobic to diketogulonic acid by oxidation. The effect of putrescine may be ascribed to decreased or delayed ascorbate oxidase activity. period increased for both cultivars. Significant differences (p<0.05) were revealed among the different treatments for pH values (Fig. 5). The highest and lowest the pH values were observed in control and 4 mM putrescine treatments, respectively. Our results were in agreement with data reported by Zokaee and Esna-Ashari (2008). The all treated fruits showed increases in pH values, although the increases were significantly lower in treated fruits with putrescine than in control fruit during storage. This effect of putrescine might be due to create a thin layer on the surface of fruit which delayed degradation process. 
With respect to titratable acidity and pH values, titratable acidity content significantly decreased while pH value significantly increased during storage for both cultivars. Thus, one can say that there is an inversely relation between titratable acidity and pH. Maturity index In both cultivars, the maturity index increased significantly during storage at 4°C, that the maturity index levels at the initial of the storage period were higher than the end ones just for the all treatments (Fig. 6). There were significant differences in the maturity index content of the different treatments, that control treatments had the highest amount of maturity index than the other treatments (Fig. 6). Similar results were also reported for plum (cv. 'Angelino') (Khan et al., 2008). The determination of antioxidant activity is one of the ways of expressing the nutritional and biological value of fruits. The results showed that the antioxidant activity decline along with deceases of total phenolic and ascorbic acid contents. Thus, it can be concluded that antioxidant activity is closely correlated with the total phenolics and ascorbic acid content. In previous researches, the positive correlation between antioxidant activity and total phenolics has been reported (Díaz-Mula et al., 2009;Ghasemnezhad et al., 2010). The treatment of putrescine maintained antioxidant activity of the fruit significantly during storage being a positive correlation between putrescine concentrations and antioxidant activity of fruit. This effect of putrescine treatment was probably due to maintain of total phenolics and ascorbic acid levels during storage. Conclusion Apricot is a climacteric fruit with limited postharvest storage life due to acceleration of quality loss. A suitable method for shelf life extension, which avoids detrimental Total phenolics For both cultivars, total phenolics content decreased significantly during storage at 4°C, that the total phenolics levels at the initial of the storage period were higher than the end ones just for the all treatments (Fig. 8). As shown in Fig. 8, a variation in terms of total phenolics content was observed among the treatments and the differences were statistically significant (p<0.05). The maximum concentration of total phenolics was observed for 4 mM putrescine treatment during storage, followed by 3 mM and 2 mM putrescine treatments while the highest was in control treatment. According to data, all treatments showed decreases in content of total phenolics, although the decreases were significantly lower in treatment of putrescine than in control treatment. During storage, level of total phenolics decrease might be due to breakdown of cell structure in order to senescence phenomena (Ghasemnezhad et al., 2010). It was assumed that the effect of putrescine treatment on maintain of total phenolics content can be attributed to delay in senescence process. Antioxidant Activity The data indicated that the antioxidant activity decreased significantly during storage at 4°C for both cultivars (Fig. 9). There was a significant difference (p<0.05) Fig. 8. Effect of putrescine on total phenolics (g/100 g fresh weight) of two Iranian apricot cultivars during storage at 4°C. The results represent the means of 12 fruit in 4 replicates ± standard errors Fig. 7. Effect of putrescine on ascorbic acid (g/100 g fresh weight) of two Iranian apricot cultivars during storage at 4°C. 
The results represent the means of 12 fruit in 4 replicates ± standard errors. A method for shelf-life extension that avoids such detrimental effects on fruit quality would be beneficial for both the consumer and the producer. In this study, the effect of different concentrations of postharvest putrescine application on the quality properties and antioxidant activity of two of the most important Iranian apricot cultivars ('Lasgerdi' and 'Shahrodi') during storage was investigated. Both apricot cultivars showed the same behaviour in all measured factors during storage at 4°C. The weight loss, total soluble solids, pH and maturity index increased significantly, while the fruit firmness, titratable acidity, ascorbic acid, total phenolics and antioxidant activity decreased significantly during storage for both cultivars. In addition, statistically significant differences were observed between the control and putrescine treatments in all measured parameters during storage. Exogenous putrescine treatments improved apricot quality during storage at 4°C. Thus, the results suggest that putrescine treatment may be used commercially to extend the storage life of apricot.
2018-12-12T23:28:36.811Z
2013-05-28T00:00:00.000
{ "year": 2013, "sha1": "3c573c6ff727d0a463d7d87aaf38334a86b05ef0", "oa_license": "CCBY", "oa_url": "https://www.notulaebiologicae.ro/index.php/nsb/article/download/9041/8501", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3c573c6ff727d0a463d7d87aaf38334a86b05ef0", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Chemistry", "Biology" ] }
250243737
pes2o/s2orc
v3-fos-license
Dynamics of quantum double dark-solitons and an exact finite-size scaling of Bose-Einstein condensation We show several novel aspects in the exact non-equilibrium dynamics of quantum double dark-soliton states in the Lieb-Liniger model for the one-dimensional Bose gas with repulsive interactions. We also show an exact finite-size scaling of the fraction of the Bose-Einstein condensation (BEC) in the ground state, which should characterize the quasi-BEC in quantum double dark-soliton states that we assume to occur in the weak coupling regime. First, we show the exact time evolution of the density profile in the quantum state associated with a quantum double dark-soliton by the Bethe ansatz. Secondly, we derive a kind of macroscopic quantum wave-function effectively by exactly evaluating the square amplitude and phase profiles of the matrix element of the field operator between the quantum double dark-soliton states. The profiles are close to those of dark-solitons particularly in the weak-coupling regime. Then, the scattering of two notches in the quantum double dark-soliton state is exactly demonstrated. It is suggested from the above observations that the quasi-BEC should play a significant role in the dynamics of quantum double dark-soliton states. If the condensate fraction is close to 1, the quantum state should be well approximated by the quasi-BEC state where the mean-field picture is valid. Introduction The experimental realization of trapped atomic gases in one dimension (1D) has provided a new motivation for the study of strong correlations in fundamental quantum mechanical systems of interacting particles [1,2,3,4,5]. Furthermore, the nonequilibrium dynamics of closed interacting quantum systems is now extensively studied in 1D by experiments and theories [6,7,8]. In many 1D quantum interacting systems quantum fluctuations may play a key role and often lead to subtle nontrivial effects. We thus expect that fundamental many-body properties such as the quasi-Bose-Einstein condensation (BEC) should play a key role in the nontrivial quantum dynamics such as quantum dark-solitons. We shall define it shortly with the Penrose-Onsager criterion. Let us introduce a theoretical model for the 1D system of interacting bosons with repulsive short-range potentials. Here we call it the 1D Bose gas. For simplicity we assume that the interactions are given by the delta-function potentials, since they give nontrivial effects in the 1D case although they are simple. For instance, the scattering length depends on the strength of the delta-function potential in 1D systems. We thus have the Lieb-Liniger model (LL model) as the system of the 1D Bose gas. The Hamiltonian of the LL model is given by [9,10] Here N denotes the number of bosons, and we assume the periodic boundary conditions of the system size L on the wave-functions. We employ a system of units with 2m = = 1, where m denotes the mass of the particle. We recall that the coupling constant c is positive. It is an exactly solvable model of the 1D quantum manybody system. It is known that all the eigenvectors are constructed by the Betheansatz method [11]. Furthermore, the Gross-Pitaevskii (GP) equation appears as the Heisenberg equation of motion for the second-quantized Hamiltonian of the LL model. It is expressed in terms of the classical complex scalar field ψ as follows [12]. 
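For the units adopted here (2m = ħ = 1), the mean-field equation for the classical field ψ reads i ∂ψ/∂t = -∂²ψ/∂x² + 2c|ψ|²ψ, with the interaction normalization implied by the Hamiltonian above. A standard way to explore its dark-soliton solutions numerically is the split-step Fourier scheme sketched below with periodic boundary conditions; the scheme and the approximate double-notch initial profile are our illustrative choices and are not taken from the paper.

# A standard split-step Fourier integrator for the mean-field (GP) equation in
# the units used here (2m = hbar = 1): i dpsi/dt = -d^2 psi/dx^2 + 2c|psi|^2 psi,
# with periodic boundary conditions. This numerical sketch is ours; the initial
# condition is only an approximate double-notch seed, not the exact elliptic
# double dark-soliton referred to in the text.
import numpy as np

def evolve_gp(psi, L, c, dt, steps):
    n_grid = psi.size
    k = 2.0 * np.pi * np.fft.fftfreq(n_grid, d=L / n_grid)
    kinetic_half = np.exp(-1j * k**2 * dt / 2.0)
    for _ in range(steps):
        psi = np.fft.ifft(kinetic_half * np.fft.fft(psi))
        psi = psi * np.exp(-1j * 2.0 * c * np.abs(psi)**2 * dt)
        psi = np.fft.ifft(kinetic_half * np.fft.fft(psi))
    return psi

L, N, c = 20.0, 20, 0.05
x = np.linspace(0.0, L, 400, endpoint=False)
n0 = N / L
# Approximate double-notch seed at X1 = L/4 and X2 = 3L/4
psi = np.sqrt(n0) * np.tanh(np.sqrt(c * n0) * (x - L / 4)) \
                  * np.tanh(np.sqrt(c * n0) * (x - 3 * L / 4))
psi_t = evolve_gp(psi.astype(complex), L, c, dt=1e-3, steps=2000)
print(np.abs(psi_t[::50])**2)  # density profile samples after t = 2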
We expect that the GP equation should play a central role in the long-distance meanfield behavior of the 1D Bose gas in some quantum state if the quasi-BEC occurs in the quantum state of the LL model especially in the weak-coupling regime. If it is the case, the solution of the GP equation should correspond to the macroscopic wave-function of the quasi-BEC state, and describe the quantum state well at least approximately. We define the quasi-BEC by the criterion due to Penrose and Onsager [13,14] (see also Section 4.2). Suppose that particle number N is very large but finite. The density matrix at zero temperature is given by the ground state |λ of the system asρ = |λ λ|. Then, we define the one-particle reduced density matrix by its partial trace with respect to all but one degree of freedom:ρ 1 = N tr 23···Nρ . Let N 0 denote the largest eigenvalue of the one-particle reduced density matrixρ 1 . If it is of order N , i.e., the ratio n 0 = N 0 /N is nonzero and finite for large N , then we say that the system exhibits the quasi-BEC, and we call n 0 the condensate fraction. If the quasi-BEC occurs in some quantum states of the LL model, we expect that the GP equation should play a central role for characterizing the quantum state, although it is only a partial differential equation for a complex scalar variable. In the present research, we assume that the quasi-BEC should occur if the coupling constant is small enough with respect to the system size or the number of bosons, and hence some solutions of the GP equation such as multiple dark-solitons can be compared with the density profiles of some quantum states in the quasi BEC of the 1D Bose gas. In fact, we shall show a finite-size scaling of the quasi BEC in the present research. It should be emphasized that such quantum states whose density profiles coincide with those of single dark-solitons of the GP equation have been constructed explicitly in the form of superposition of the yrast states in the Lieb-Liniger model [15]. The construction resolved a long standing problem suggested by Ishikawa and Takayama almost forty years ago [16]. Here we remark that it was shown through the strong coupling limit [17,18] that the yrast states and the mean-field solitons are closely related to each other with respect to quantum numbers. Furthermore, several significant properties in the non-equilibrium dynamics of a quantum single dark-soliton have been exactly investigated [19] and the generic and the ideal Gaussian weights have been introduced [20,21]. Moreover, the density and phase profiles of quantum states of double dark-solitons have been explicitly constructed [22], and the phase shift has numerically been estimated in the scattering of two quantum dark-solitons [23]. There is another aspect of quantum dark-soliton states. Successive measurements of particle positions in the Lieb-Liniger model also leads to observing quantum darksolitons numerically [24,25]. There is a question of how the density profile of a superposition of yrast states is related to the successive measurements of particle positions. When the coupling constant c is equal to zero it was analytically shown that the construction of the quantum dark-soliton state with the Gaussian weight [21] is related to the particle position method [24] as shown in Ref. [21]. When the coupling constant is small and nonzero: c > 0, an ansatz was proposed to bridge between the calculation of single-particle density and the particle position method [26]. 
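The Penrose-Onsager criterion just stated is straightforward to apply numerically once the one-particle reduced density matrix ρ1(x, y) is available on a grid: the largest eigenvalue of the discretized operator gives N0 and hence the condensate fraction n0 = N0/N. The sketch below illustrates this diagnostic on a toy fully condensed state; it is not the procedure used to obtain the scaling results quoted in Section 4.

# Penrose-Onsager condensate fraction from a sampled one-particle reduced
# density matrix rho1[i, j] = rho_1(x_i, x_j) on an equally spaced grid.
# The discretized operator acts as (rho1 @ f) * dx, so occupations of the
# continuum operator are dx times the matrix eigenvalues.
import numpy as np

def condensate_fraction(rho1: np.ndarray, dx: float, n_particles: int) -> float:
    rho1 = 0.5 * (rho1 + rho1.conj().T)        # enforce Hermiticity against noise
    occupations = np.linalg.eigvalsh(rho1) * dx
    n0_occupation = occupations[-1]            # largest occupation number N_0
    return float(n0_occupation / n_particles)

# Toy check: a fully condensed state rho_1(x, y) = N * phi(x) phi*(y)
L, N, M = 20.0, 20, 200
x = np.linspace(0.0, L, M, endpoint=False)
dx = L / M
phi = np.ones(M) / np.sqrt(L)                  # normalized constant orbital
rho1 = N * np.outer(phi, phi.conj())
print(condensate_fraction(rho1, dx, N))        # -> 1.0 up to discretization error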
In the present paper we show various novel aspects in the exact non-equilibrium dynamics of quantum double dark-solitons, which give pairs of notches in the density profiles, by explicitly constructing corresponding quantum states in the Lieb-Liniger model of the 1D Bose gas with the repulsive interactions. For instance, we exhibit the time evolution of the density profile of the double dark-soliton whose two notches are located at the same position, and that of the phase profiles of the quantum double dark-solitons. In particular, we give an example where the winding number of the phase profile changes during the scattering process of two notches. Furthermore, we also show an exact finite-size scaling of the fraction of the BEC for the ground state. It should characterize the quasi-BEC which we assume to occur in quantum double dark-soliton states in the weak coupling regime. We show that if the coupling constant decreases as a power of the system size, condensate fraction does not vanish and remains constant when we send the system size to a very large value with fixed density. We recall that if the condensate fraction is nonzero for a large particle number N , we call it the quasi-BEC by employing the Penrose-Onsager criterion. It follows from it that the quasi-BEC occurs only if the coupling constant is very small with respect to the system size. Therefore quantum states of dark-solitons may appear particularly in the weak coupling regime. Based on the definition of the quasi-BEC we derive a kind of macroscopic quantum wave-function by exactly deriving the amplitude and phase profiles of the matrix element of the bosonic field operator, by making use of Slavnov's formula of form factors [27]. Here we recall that the bosonic field operator is defined in the second-quantized Hamiltonian of the Lieb-Liniger model [28]. Let us briefly summarize the finite-size scaling of the quasi-BEC for the ground state, which we shall show in detail in Section 4. The scaling behavior of the quasi-BEC in the 1D Bose gas is fundamental when we send particle number N or system size L to very large values. We define the interaction parameter γ by γ = c/n with coupling constant c in the delta-function potentials and density n = N/L. We show that if γ is given by a negative power of N , i.e. γ = A/N η , condensate fraction n 0 is nonzero and constant for any large value of L or N . We also show that exponent η and amplitude A are independent of density n, and evaluate them as functions of n 0 . Thus, the condensate fraction n 0 for the ground state is given by a scaling function of variable γN η , which corresponds to amplitude A. If the condensate fraction of a given quantum state with large N is nonzero in the 1D Bose gas, we suggest that the classical mean-field approximation such as the GP equation should be valid for the state [15]. Furthermore, we show that the 1D Bose gas of a finite particle number may have the same condensate fraction for any large L in the case of the ground state. Finally, we mention some potentially relevant results in the following. For strong and intermediate interaction strengths, the Lieb-Liniger Gross-Pitaevski equation is introduced, which is an extension of the GP equation [29]. 
Associated with the quantum states of dark solitons, bound states of dark solitons have been studied numerically by solving the GP equation [30], as have the dynamics of a bright soliton in the quasi-BEC with a time-dependent atomic scattering length in a repulsive parabolic potential [31], quantized quasi-two-dimensional Bose-Einstein condensates with spatially modulated nonlinearity [32], matter rogue waves in Bose-Einstein condensates with attractive atomic interaction [33], and exact soliton solutions and nonlinear modulation instability in spinor Bose-Einstein condensates [34]. The contents of the paper are as follows. In Section 2 we explain the Bethe ansatz and useful formulas for evaluating the form factors of the field operator. We also define the winding number for solutions of the GP equation under the periodic boundary conditions. In Section 3 we show the time evolution of the quantum double dark-soliton state constructed with equal weight for the following two cases: (i) the soliton positions X_1 and X_2 are different, X_1 = L/4 and X_2 = 3L/4; (ii) the soliton positions are the same, X_1 = X_2 = 0. We also show the time evolution of the quantum double dark-soliton state constructed with the Gaussian weights. Here, the two notches have different speeds thanks to the Gaussian weights, and we evaluate the phase shift in the collision of the two dark solitons. We remark that the two notches have mostly the same speed if the quantum double dark-soliton state is constructed with equal weight. In Section 4 we show the finite-size scaling behavior of the condensate fraction in the ground state for the 1D Bose gas with repulsive interactions at zero temperature. According to it, we estimate that the fraction of the quasi-BEC condensate should be equal to 0.99 for the quantum double dark-soliton state with N = L = 20 and c = 0.05 studied in the present research. Bethe ansatz equations In the LL model, the Bethe ansatz offers an exact eigenstate with an exact energy eigenvalue for a given set of quasi-momenta k_1, k_2, ..., k_N satisfying the Bethe ansatz equations (BAE) for j = 1, 2, ..., N:

k_j L + 2 \sum_{\ell \neq j} \arctan\left( \frac{k_j - k_\ell}{c} \right) = 2\pi I_j .   (3)

Here the I_j's are integers for odd N and half-odd integers for even N. We call them the Bethe quantum numbers. The total momentum P and the energy eigenvalue E are expressed in terms of the quasi-momenta as

P = \sum_{j=1}^{N} k_j , \qquad E = \sum_{j=1}^{N} k_j^2 .   (4)

If we specify a set of Bethe quantum numbers I_1 < ... < I_N, the BAE in Equation (3) have a unique real solution k_1 < ... < k_N [28,11]. In particular, the sequence of the Bethe quantum numbers of the ground state is given by

I_j = -\frac{N+1}{2} + j , \qquad j = 1, 2, ..., N .   (5)

The Bethe quantum numbers for low-lying excitations are systematically derived by putting holes or particles in this perfectly regular ground-state sequence. Coupling constant In the thermodynamic limit several physical quantities of the LL model are characterized by the single parameter γ = c/n, where n = N/L is the particle density for N particles. We often fix the particle-number density as n = 1 throughout the present paper, and change the coupling constant c so that we obtain different values of γ. Quantum double dark-soliton state A quantum state that has two notches in both the density profile and the square-amplitude profile of the matrix element of the field operator was proposed in [22]. We call it the quantum double dark-soliton state; it is given by a superposition of "two-hole" excitation states, with a normalization factor M_N for N particles.
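Equation (3) is a set of N coupled transcendental equations, but for a given set of Bethe quantum numbers it has a unique real solution and is easy to obtain iteratively. The sketch below solves the logarithmic BAE for the ground-state quantum numbers of Equation (5) and evaluates P and E through Equation (4); it is a minimal numerical illustration, not the code used for the results reported here.

# Solve the logarithmic Bethe ansatz equations (3) for given Bethe quantum
# numbers I_j, then evaluate P and E via Equation (4). Minimal illustration.
import numpy as np
from scipy.optimize import fsolve

def ground_state_quantum_numbers(N: int) -> np.ndarray:
    # Equation (5): I_j = -(N + 1)/2 + j, j = 1, ..., N
    return np.array([-(N + 1) / 2.0 + j for j in range(1, N + 1)])

def solve_bae(I: np.ndarray, L: float, c: float) -> np.ndarray:
    def residual(k):
        # k_j L + 2 sum_l arctan((k_j - k_l)/c) - 2 pi I_j  (the l = j term vanishes)
        diff = k[:, None] - k[None, :]
        return k * L + 2.0 * np.arctan(diff / c).sum(axis=1) - 2.0 * np.pi * I
    k0 = 2.0 * np.pi * I / L              # free-particle initial guess
    return fsolve(residual, k0)

N, L, c = 20, 20.0, 0.05
I = ground_state_quantum_numbers(N)
k = solve_bae(I, L, c)
P, E = k.sum(), (k**2).sum()
print(f"P = {P:.3e}, E = {E:.5f}")        # the ground state has P = 0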
The quantum state |p 1 , p 2 , N is characterized by a configuration of Bethe quantum numbers that has two vacancies located at p 1 and p 2 in the series of the Bethe quantum numbers, which is illustrated in Figure 1 (a). This configuration represents the Bethe quantum numbers of the ground state of N particles along with those of additional two particles. In Equation (6) Figure 1 some configurations with two holes p 1 and p 2 are exhibited. In the third configuration, two holes p 1 and p 2 are located in its middle part of the series which corresponds to the ground state of N particles. Here we remark that in order for two notches have positive velocities we derive two hole excitations derived from the configuration constructed by adding two particles to the right of the "Fermi momentum" as shown in Figure 1 (a). If we add the two particles to the right and left of the "Fermi momentum" symmetrically, then the sum of the momenta vanishes. The density profile of this state X 1 , X 2 , N |ψ † (x)ψ(x)|X 1 , X 2 , N shows the two density notches at the positions x = X 1 , X 2 , which coincides with the squared amplitude of the elliptic soliton [22]. Here, by the determinant formula for the norms of Bethe eigenstates [35,36] we can effectively evaluate the matrix element × e −i(p 1 X 1 +p 2 X 2 ) p 1 , p 2 , N |ψ † (0)ψ(0)|p 1 , p 2 , N . Here, P and P in an exponential term denote the total momentum of the state |p 1 , p 2 , N and |p 1 , p 2 , N calculated through Equation (4), respectively. The sum in the above equation is taken over all pairs of p = {p 1 , p 2 } and p = {p 1 , p 2 } that belong to the set P N . The matrix element of the form factors of the density operator [27,37,38] is given by where the quasimomenta {k 1 , · · · , k N } and {k 1 , · · · , k N } give the eigenstates |p 1 , p 2 , N and |p 1 , p 2 , N , respectively. We use the abbreviations k j, := k j − k and k j, := k j − k . The kernelK(k) is defined byK(k) = 2c/(k 2 + c 2 ). The matrix G(k) is called the Gaudin matrix, whose (j, ) th element is given by The matrix elements of the (N − 1) by (N − 1) matrix U (k, k ) are given by We have also considered the matrix element of the single field operator where P and P denote the total momenta of the state |p 1 , p 2 , N and |p 1 , p 2 , N − 1 , respectively. The determinant formula is given by [35,36,27,39,37,38] where the quasi-momenta {k 1 , · · · , k N } and {k 1 , · · · , k N −1 } give the eigenstates |p 1 , p 2 , N and |p 1 , p 2 , N − 1 , respectively. We recall that the matrix G(k) denotes the Gaudin matrix, whose (j, )th element is given in Equation (9). The matrix elements of the (N − 1) by (N − 1) matrix U (k, k ) are given by 2.4. One-particle reduced density matrix The matrix element of the one-particle reduced density matrix, ρ 1 (x, y) := x|ρ 1 |y , for a quantum system is expressed as a correlation function in the ground state |λ : In the LL model we can numerically evaluate the correlation function by the form factor expansion. Inserting the complete system of eigenstates, µ |µ µ|, we have where P µ denotes the momentum eigenvalues of eigenstates |µ . Each form factor in the sum (15) is expressed as a product of determinants by making use of the determinant formula for the norms of Bethe eigenstates [35] and that for the form factors of the field operator [27,38,37]: where the quasi-momenta {k 1 , · · · , k N } and {k 1 , · · · , k N −1 } give the eigenstates |λ and |µ , respectively. 
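The Gaudin matrix G(k) and the kernel K(k) = 2c/(k² + c²) introduced above are the basic building blocks of these determinant formulas. The construction below uses the conventional Gaudin form G_jl = δ_jl [L + Σ_m K(k_j - k_m)] - K(k_j - k_l); since Equation (9) is not reproduced here, that form should be read as an assumption about the convention rather than a verbatim copy, and the prefactors of the full Slavnov formulas are omitted.

# Construct the Gaudin matrix G(k) for a set of Bethe roots k_1, ..., k_N,
# using the conventional definition with kernel K(k) = 2c/(k^2 + c^2).
# det G enters the norm and form-factor determinant formulas; the full
# prefactors of those formulas are not reproduced in this sketch.
import numpy as np

def kernel(k: np.ndarray, c: float) -> np.ndarray:
    return 2.0 * c / (k**2 + c**2)

def gaudin_matrix(k: np.ndarray, L: float, c: float) -> np.ndarray:
    diff = k[:, None] - k[None, :]
    K = kernel(diff, c)                   # K[j, l] = K(k_j - k_l), K[j, j] = 2/c
    G = -K.copy()
    idx = np.arange(k.size)
    # diagonal: L + sum over l != j of K(k_j - k_l); subtract the self term K(0) = 2/c
    G[idx, idx] = L + K.sum(axis=1) - 2.0 / c
    return G

# Example: any real root set works, e.g. the output of the BAE solver above
k = np.linspace(-1.0, 1.0, 6)             # placeholder roots for illustration
G = gaudin_matrix(k, L=20.0, c=0.05)
print(np.linalg.slogdet(G))               # sign and log|det G|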
Here we have employed the abbreviated symbols k j, := k j − k and k j, := k j − k . The matrix G(k) is the Gaudin matrix, whose (j, )th element is given . The matrix elements of the (N − 1) by (N − 1) matrix U (k, k ) are given by [37,27,38,35] For the ground state |λ we have shown that the sum of the form factor expansion is almost saturated for the one-particle and one-hole (1p1h) excitations together with twoparticles and two-holes (2p2h) excitations. The saturation rate is explicitly presented in Table 1 of Section 4.3. However, for excited states the saturation rate has not been evaluated. It should be technically nontrivial to evaluate it for excited states. For the quantum states of double dark-solitons, we suggest that the saturation rate should be close to one in the weak coupling case in the form factor expansion up to some excitations with relatively small numbers of particles and holes. It is based on the observation that the density profiles of quantum double dark-soliton states are similar to those of the double dark-solitons of the GP equation, as we shall show in Section 3. Winding number We introduce the winding number J associated with solutions of the GP equation under the periodic boundary conditions. Let us assume that a solution of the GP equation φ(x) = ρ(z) exp[iϕ(x)] satisfies the periodic boundary conditions: where J is an arbitrary integer. The integer J is called the winding number [17,18]. In the previous study, we constructed the quantum single dark-soliton with a nonzerowinding number. Time evolution of quantum double dark-soliton state constructed with equal weight By making use of the time dependent field operatorψ(x, t), the local density and the matrix element of the quantum state at a given time t are expressed as follows. where E is the energy of the state |p 1 , p 2 , N , andρ(x, t) =ψ † (x, t)ψ(x, t) denotes the local density operator. We have obtained the exact expressions of the time evolution in Equations (19) and (20) since the Bethe ansatz method gives the exact energies for the quantum state |X 1 , X 2 , N . 3.1.1. Quantum dark-soliton located at X 1 = L/4 and X 2 = 3L/4 initially Figure 2 shows the time evolution of the density profile , i.e., the graph of ρ Q (x, t) versus x at a given time t, for the quantum double dark-soliton state with initial soliton positions X 1 = L 4 and X 2 = 3L 4 under the periodic boundary conditions. We call the plot in the left panel of Figure 2 the two-dimensional (2D) density plot of the local density. Here, the value of the local density ρ Q (x, t) at position x and time t is expressed by the brightness of the point at (x, t) in the space-time diagram, where the horizontal axis corresponds to the x coordinate, while the vertical axis to time t. In the right panels of Figure 2 snapshots of the density profile of ρ Q (x, t) at t = 0, 2, 4, and 11 are plotted. We note that the density profile shown in panel (a) of Figure 2 is identical to the upper-left panel of Figure 9 for c = 0.05 in Ref. [22]. In the latter panel it was shown that the density profile of the quantum double dark-soliton state completely coincides with the density profile of the elliptic double dark-soliton solution of the GP equation. Thus, at t = 0, the density profile of the quantum double dark-soliton state coincides with that of the elliptic soliton solution of the GP equation. The positions of notches are expressed by the areas of the darker color in the 2D density plot at the left panel of Figure 2. 
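For readers who want to reproduce the truncated form-factor expansion of Eq. (15), the bookkeeping reduces to enumerating sets of Bethe quantum numbers obtained from the ground-state block by a fixed number of particle-hole moves. The sketch below is one way to generate such configurations; the cutoff on how far a "particle" may sit outside the block is an arbitrary truncation parameter, not a value taken from the paper.

```python
import numpy as np
from itertools import combinations

def ground_state_quantum_numbers(N):
    """I_j = -(N-1)/2, ..., (N-1)/2 (integers or half-odd integers)."""
    return np.arange(N) - (N - 1) / 2.0

def particle_hole_configs(N, n_ph=1, cutoff=10):
    """Enumerate n-particle/n-hole configurations of Bethe quantum numbers
       relative to the N-particle ground state, as used to truncate the
       form-factor expansion of Eq. (15).  'cutoff' bounds how far outside
       the ground-state block a particle may be placed."""
    gs = ground_state_quantum_numbers(N)
    # candidate slots outside the ground-state block, on both sides
    outside = [gs[0] - m for m in range(1, cutoff + 1)] + \
              [gs[-1] + m for m in range(1, cutoff + 1)]
    configs = []
    for holes in combinations(range(N), n_ph):        # which I_j to remove
        for particles in combinations(outside, n_ph): # where to put them
            I = list(gs)
            for h, p in zip(holes, particles):
                I[h] = p
            configs.append(sorted(I))
    return configs

# e.g. all 1p1h excited configurations of the N = 20 ground state
configs_1p1h = particle_hole_configs(20, n_ph=1, cutoff=40)
```

Each configuration is then fed back into the BAE solver to obtain the quasi-momenta of the excited state |μ⟩ entering the determinant formulas.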
The trajectories of the positions of the two notches in the density profile are given by two parallel linearly elongated regions in the diagram of time t and coordinate x, as shown in the left panel of Figure 2. Thus, the two notches moves at the same velocity in the positive x direction. In the snapshots of the density profiles, the soliton notches are gradually filled, i.e., they become shallower in time evolution, as shown in panels (a), (b), (c), and (d) of Figure 2. That is, the distance between the bottoms of the notches is kept constant through the time evolution, while the depths of the notches become smaller. Here we have defined the depth of a notch by the difference between the largest and smallest values in the density profile. For example, at t = 11, the notches are located at x 1 = 1.9115 and x 2 = 11.9115, and the distance between the two notches is given by ∆x = x 1 − x 2 = 10 = L/2, which is equal to that of t = 0. It was reported in Ref. [40] that quantum double dark-solitons with notches of almost the same depths can appear again after their depths of notches become much smaller over a time scale of 1/c. However, the quantum double dark-soliton states constructed in the present research do not show this reappearing or recurrent behavior in time evolution. Once the soliton notches in the density profile are completely filled, i.e., their depths vanish, the density profile remains flat and uniform in time evolution, as illustrated in Figure 2. We note that the construction of the quantum soliton in Ref. [40] is different from that of the present research, and also that the number of particles is equal to N = 8 in Ref. [40], which is smaller than N = 20 for the system in Figure 2. The notches in the density profile of ρ Q (x, t) and those in the profile of the square amplitude |ψ Q (x, t)| 2 of matrix element ψ Q (x, t) exhibit different decaying behaviors in time evolution. Figure 3 shows the time evolution of the square amplitude profile of matrix element ψ Q (x, t) with initial soliton positions X 1 = L 4 and X 2 = 3L 4 under the periodic boundary conditions. The average density is decreasing in the time evolution of the profile of the square amplitude |ψ Q (x, t)| 2 in Figure 3, while the notches in Figure 2 are filled gradually. In the density profile, the average density is kept constant as time t increases, since the density is conserved as a whole for any time t: On the other hand, we suggest that the amplitude of the matrix element between the two different quantum states of double dark-soliton should gradually decrease and finally vanish in time evolution, since they have different energies and particle numbers. In the 2D density plot at the left panel of Figure 3, the trajectories of notches in the space-time diagram are depicted by linearly elongated parallel regions with darker color . The values at the bottoms of the notches are almost equal to zero constantly in time evolution in panels (a), (b), (c), and (d) of Figure 3. Consequently, Figure 3 shows the trajectories of the notches more clearly than Figure 2, as depicted in the 2D density plot at the left panel. The snapshots of the phase profile at different times in time evolution are shown in Figure 4. Here we remark that the phase is given by the argument of the matrix element of Equation (20) as a complex number. In Figure 4 the abrupt jumps of the phase profile are located at the positions of the notches in Figure 3. 
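The text defines the depth of a notch as the difference between the largest and smallest values of the density profile, and tracks the notch positions over time to obtain the trajectories seen in the 2D density plots. A minimal sketch of both quantities, assuming the profile ρ_Q(x, t) has already been computed on a uniform periodic grid, is:

```python
import numpy as np

def notch_depth(rho):
    """Depth of a notch as defined in the text: largest minus smallest
       value of the density profile at a fixed time."""
    return rho.max() - rho.min()

def notch_positions(rho, x, n_notches=2):
    """Locate the n deepest local minima of a periodic density profile."""
    left, right = np.roll(rho, 1), np.roll(rho, -1)
    minima = np.where((rho < left) & (rho < right))[0]
    order = np.argsort(rho[minima])               # deepest first
    return np.sort(x[minima[order][:n_notches]])

# rho_xt: array of shape (n_times, n_x) sampled from rho_Q(x, t)
# depths = [notch_depth(rho_xt[i]) for i in range(rho_xt.shape[0])]
```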
The abrupt jumps of the phase profile move with the same constant velocity as the notches in the square amplitude profile. Furthermore, the whole phase profile is gradually shifted toward the negative direction in time evolution. Moreover, the shape of the phase profile as a whole remains the same at least up to t = 40. At the initial time t = 0, the profiles of the square amplitude and the phase of the matrix element ψ Q (x, t) shown in panels (a) of Figure 3 and Figure 4 are identical to those of Figures 10 and 11 for c = 0.05 in Ref. [22], respectively. Panel (a) of Figure 3, the square amplitude profile of the matrix element, corresponds to the panel of c = 0.05 in Figure 10 of Ref. [22], where it was shown that the square amplitude profile of the classical and quantum double dark-soliton overlap completely. Panel (a) of Figure 4, the phase profile of the matrix element, corresponds to the panel of c = 0.05 in Figure 11 of Ref. [22], where the phase profiles of the classical and quantum double dark-solitons overlap completely. However, the time evolution of the phase profile in the quantum double darksoliton state is different from that of the elliptic dark-soliton solution, which is given by the travelling wave solution of the GP equation. We recall that the phase profile in the quantum double dark-soliton is gradually shifted toward the negative direction in time evolution in Figure 4, while the phase profile of the travelling wave solution is not shifted. Thus, the time evolution of the quantum dark-solitons that we have constructed is slightly different from that of the classical elliptic soliton solution. We remark that two notches have mostly the same velocity as shown in Figures 2 and 3 for the quantum double dark-soliton constructed with equal weight. In Section 3.2 we shall show that two notches have different velocities for the quantum double dark-soliton state constructed with the Gaussian weights. 3.1.2. Quantum dark-soliton positions located at X 1 = X 2 = 0 initially By placing the positions of the notches for the quantum dark-solitons X 1 and X 2 at the same point , the profiles of the density and square amplitude derived in time evolution are plotted in Figures 5 and 6, respectively. In both profiles of the density and the square amplitude it seems as if the two notches repel each other in time evolution. The quantum double dark-soliton state with overlapping positions of two notches has different properties in the profiles of the density and the square amplitude from the quantum double dark-soliton state in Equation (6) with different initial positions of two notches as X 1 = L 4 and X 2 = 3L 4 . In the density profile of Figure 5 the notches are much deeper than those in Figure 2, similarly as the notches in Figure 6 of the square amplitude profile. In the profile of the square amplitude , the values at the bottoms of the notches increase in time evolution: The values of the square amplitude at the bottoms of notches are not close to zero at t = 11 in Figure 6. Thus, for the quantum double dark-soliton with two overlapping positions of notches the difference between the density profile and the square amplitude profile is smaller than in Figure 2 and Figure 3. The snapshots of the phase profile in time evolution are exhibited in Figure 7 for . At t = 0, the winding number was given by J = 2, while it suddenly changed to J = 1 at t = 0.05. After the change of the winding number, the phase profile became smoother in shape gradually in time evolution. 
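The winding number J of Eq. (18) can be extracted numerically from the phase of the matrix element by unwrapping the 2π jumps and comparing the endpoints, which is how abrupt changes such as J = 2 → 1 can be detected automatically. A minimal sketch, assuming the complex profile ψ_Q(x, t) is sampled from x = 0 to x = L inclusive at a fixed time:

```python
import numpy as np

def winding_number(psi):
    """Winding number J of the phase profile arg(psi(x)) over one period,
       J = (phi(L) - phi(0)) / (2*pi), obtained after unwrapping 2*pi jumps."""
    phase = np.unwrap(np.angle(psi))
    return int(np.rint((phase[-1] - phase[0]) / (2.0 * np.pi)))

# psi_x: complex samples of the matrix element psi_Q(x, t) at fixed t,
# including both endpoints x = 0 and x = L
# J = winding_number(psi_x)
```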
Furthermore, we observe in Figure 7 that the whole phase profile was shifted toward the negative direction step-by-step in time evolution. It is also the case in Figure 4: The whole phase profile was shifted in the negative direction for the quantum double dark-soliton state with initial positions of notches placed at X 1 = L 4 and X 2 = 3L 4 . The abrupt change of the winding number may occur in time evolution for the phase profile associated with the quantum states, i.e., the phase profile of the matrix element of the field operator between the quantum double dark-soliton states in Equation (20). The boundary condition of the phase is given by the form of Equation (18) for solutions of the GP equations and also for the phase profile associated with the quantum states in Equation (20). However, the quantum states do not depend on the boundary conditions of classical solutions. It is sufficient if the phase profile associated with the quantum states satisfies one of the boundary conditions of Equation (18) specified by an integer J, which we have called the winding number. Thus, the winding number J may change abruptly in time evolution in the phase profile associated with the quantum double dark-soliton states in Equation (20). Time evolution of quantum double dark-soliton state with the ideal Gaussian weights Let us consider the Gaussian weighted superposition of the excited states consisting of two particle-hole excitations which are determined by a pair of holes p = {p 1 , p 2 } in the set P : Here, N is a normalization factor and the set P is the same as given in Section 2.3. The Gaussian function is given by with two Gaussian parameters (P, σ) [21]. The parameters P and σ are determined by the target soliton depth d and the density n = N/L: Here we have defined the soliton depth d by the smallest value in the density profile of a single dark-soliton. It is different from the "depth of a notch" defined in Section 3.1.1. The target soliton depth d is expressed with the dark soliton solution to the GP equation moving with velocity v in the thermodynamic limit φ ∞ P (x) [21]: Here |φ ∞ P (x = 0)| denotes the square root of the local density at the origin, which is the position of the notch in the thermodynamic limit, and v c,∞ is called the critical velocity of the infinite system. When the system size L is finite, the largest velocity of the elliptic dark-soliton solution of the GP equation is denoted by the critical velocity v c [22]. It approaches the critical value v c,∞ in the limit of sending the system size L to ∞. The exact profiles in time evolution are numerically derived for the local density ρ Q (x, t) and the square amplitude of the matrix element ψ Q (x, t) of the field operator by calculating the time-dependent matrix elements of the field operator between the Gaussian weighted quantum states of Equation (21), similarly as we have demonstrated in Equation (19) and Equation (20) of Section 3.1 for the quantum double dark-soliton state constructed with equal weight. For the Gaussian weighted quantum double darksoliton state, by assigning a pair of proper values of the target soliton depth d to the two notches of a given superposition of quantum states of Equation (21), we can construct a quantum double dark-soliton state such that its density profile has two distinct notches with different depths. We have constructed several quantum double dark-soliton states in which the density profile has two distinct notches with different depths. 
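The explicit Gaussian function of Eq. (22) did not survive extraction, so the sketch below simply assumes an unnormalized normal weight exp(−(p − P)²/(2σ²)) per hole, one Gaussian per notch, and normalizes the resulting superposition coefficients; the parameter pairs are the values quoted in Section 3.2 for target depths d = 0.6 and d = 0.0. This illustrates the bookkeeping only and is not a verbatim reconstruction of Eqs. (21)-(22).

```python
import numpy as np
from itertools import combinations

def gaussian_weight(p, P0, sigma):
    """Assumed Gaussian weight G(p; P, sigma) ~ exp(-(p - P)^2 / (2 sigma^2))."""
    return np.exp(-(p - P0) ** 2 / (2.0 * sigma ** 2))

def two_hole_weights(hole_momenta, params1, params2):
    """Coefficients of the superposition over pairs of holes p = {p1, p2},
       one Gaussian per notch, normalised assuming orthonormal basis states."""
    pairs = list(combinations(hole_momenta, 2))
    w = np.array([gaussian_weight(p1, *params1) * gaussian_weight(p2, *params2)
                  for p1, p2 in pairs])
    return pairs, w / np.linalg.norm(w)

# Gaussian parameters quoted in the text for target depths d = 0.6 and d = 0.0
params_d06 = (0.124027 * np.pi, 0.106667)
params_d00 = (np.pi, 0.421637)
```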
In Figures 8, 9, 10, 11, 12 and 13 we set the target soliton depths as d = 0.6 and d = 0.0 to the two notches, respectively, and we generated the Gaussian weights by making use of Equation (22). Here, the corresponding Gaussian parameters are given by (P 0 , σ) = (0.124027π, 0.106667) and (P 0 , σ ) = (π, 0.421637), respectively, which are derived by making use of Equations (23) and (24). We have thus obtained the quantum double dark-soliton state of distinct narrow notches with different depths. Here we recall that single dark-solitons with different depths have different speeds in the same direction for the GP equation. We observe the scattering of two notches in the density and phase profiles of the quantum double dark-soliton state. It exhibits the phase shift which is a characteristic property in soliton-soliton collisions [41,23], as shown in the density profile. We remark that the 2D density plot of the local density in the space-time diagram and the snapshots of the density profile at different times are presented in Figure 8 for the quantum double dark-soliton state constructed with the Gaussian weights for c = 0.05. As the two notches of the double dark-soliton approached each other, they moved along approximately straight and linear trajectories with different constant velocities. The collision occurred around at a time interval including t = 11 (see panel (c), which corresponds to the pink dotted line in the left panel of Figure 8). After the collision, each of the dark solitons travelled at the same velocity before the collision. Furthermore, we confirm that the phase shift occurred after the collision in the left panel of Figure 8. Let us investigate the phase shift explicitly. By applying the Galilean transformation, that is, in Figure 9 we observe the scattering process in the inertial frame of reference moving with the left-hand-side notch of the quantum double darksoliton in Figure 8. We clearly confirm the phase shift after the collision as shown in Figure 10 shows the time evolution of the square amplitude profile of the matrix element of the field operator for the quantum double dark-soliton states constructed with the Gaussian weights for c = 0.05. The quantum state is the same as that of Figure 8. We have constructed the double dark-solitons of distinct narrow notches with different depths not only in the density profile but also in the square amplitude profile, i.e., the graph of |ψ Q (x, t)| 2 versus x. We observe the scattering of two notches in the quantum double dark-soliton states. As the two notches of the double dark-soliton states approached each other, they moved along approximately straight and linear trajectories with different constant velocities, as shown in Figure 10. The collision occurred in the time interval including t = 11 (see panel (c), which corresponds to the pink dotted line in the left panel of Figure 10). After the collision, each of the dark solitons travelled with the same velocity before the collision. We observe at least approximately the same phase shift as shown in Figure 8. We remark that panel (a) of Figure 10, the square amplitude profile of the matrix element for the quantum states constructed with the Gaussian weights, corresponds to the panel of c = 0.05 in Figure 13 of Ref. [22]. We now demonstrate that the winding number changed during the scattering process in the time evolution of the Gaussian weighted quantum double dark-soliton states. Figure 11 shows the time evolution of the phase profile. 
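The phase shift is read off most easily in the inertial frame co-moving with one notch. Numerically, the Galilean transformation amounts to re-sampling each time slice of the space-time density array at shifted, periodically wrapped positions; the sketch below assumes a uniform spatial grid and linear interpolation, which is an implementation choice rather than anything prescribed in the paper.

```python
import numpy as np

def to_comoving_frame(rho_xt, x, t, v):
    """Re-sample a space-time array rho_xt[i, j] = rho(x_j, t_i) in the frame
       moving with velocity v (a discrete stand-in for x -> x - v*t), assuming
       a uniform grid without the duplicated endpoint and periodic boundaries."""
    L = x[-1] - x[0] + (x[1] - x[0])          # period of the spatial grid
    out = np.empty_like(rho_xt)
    for i, ti in enumerate(t):
        x_shifted = (x + v * ti - x[0]) % L + x[0]
        out[i] = np.interp(x_shifted, x, rho_xt[i], period=L)
    return out
```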
In each panel, the phase profile satisfies the boundary condition: with a winding number J. At the initial time t = 0, the two notches of the quantum dark-soliton were located at the most distant points from each other such as X 1 = L/4 and X 2 = 3L/4, and the winding number is given by J = 1. When the two notches of the quantum dark-soliton states became very close in space, the winding number was suddenly changed to J = 0, in the time interval including t = 11, as shown in panel (c) of Figure 11. After the collision, the winding number was recovered: The winding number at t = 21 was given by J = 1, as shown in panel (d) of Figure 11. We remark that panel (a) of Figure 11, the phase profile of the matrix element between the Gaussian weighted quantum double dark-soliton states, corresponds to the panel of c = 0.05 in Figure 14 of Ref. [22]. We explicitly evaluate the phase shift due to the scattering of two notches in the quantum double dark-soliton state. The left panel of Figure 12 shows the square amplitude profile of the matrix element ψ Q (x, t) in time evolution observed in the inertial frame of reference moving together with the deeper notch of the quantum double darksoliton. The abrupt increase (or decrease) in the phase profile, which we call a phase jump, was located at the position of the deeper notch of the double dark-soliton, as shown in panels (a), (b), and (d) of Figure 12: It was located at x = 5 in panels (a) and (b), and at x = 2 in panel (d). Thus, the position of the deeper notch in the double dark-soliton was shifted after the collision in the inertial frame of reference. It corresponds to the phase shift due to the scattering of the two notches. Let us investigate the changes of the winding number in time evolution in detail. The winding number J was equal to zero when the two notches of the quantum double dark-soliton were close to each other in space, as shown in panel (c) of Figure 12. Figure 13 exhibits that the abrupt changes of the winding number J from 1 to 0 and from 0 We recall that it is not necessary for the winding number in the phase profile of a quantum state to be conserved during the time evolution of the quantum system. The winding number is defined for the corresponding classical system, i.e., the GP equation, or for the phase profile of the quantum system. The dynamics of the quantum system can be much more complex than the solutions of the GP equation. When the two notches are far from each other in space, the phase profile of the quantum system is similar to that of the classical solution, while it is not the case when they collide with each other since they are very close in space. In summary, the Gaussian weighted superposition of the two-hole excited states has lead to the quantum double dark-soliton states in which two notches have different depths [22]. It follows that the notches of the quantum double dark-soliton state have different velocities, and hence we have observed the scattering of two notches in the quantum double dark-soliton state exactly. We have also shown that the winding number of a quantum double dark-soliton state changed when the two notches approach each other, explicitly for the Gaussian weighted quantum double dark-soliton states. We remark that one can make the quantum single dark-soliton black by making use of the Gaussian weights, as shown in Ref. [21]. 
However, for the quantum double dark-soliton, it seems that it is difficult to construct the double black-soliton only by applying the Gaussian weights to the superposition of a set of two-hole excitations. Motivation to study the quasi-BEC in 1D for the ground state In 1D systems quantum fluctuations play a key role and often give subtle and nontrivial effects. It is known that BEC occurs even for bosons with repulsive interactions due to the quantum statistical effect among identical particles [13]. In fact, the existence of BEC has been proven rigorously for interacting bosons confined in dimensions greater than one [42]. In 1D case there is no BEC for bosons with repulsive interactions due to strong quantum fluctuations if we assume the standard thermodynamic limit with fixed coupling constant [43]. On the other hand, if the coupling constant is very weak, we may expect that even the 1D bosons with a large but finite number of particles undergo a quasi-condensation in which "a macroscopic number of particles occupy a single one-particle state" [13]. We call it a quasi-BEC by following the Penrose and Onsager criterion. However, it has not been shown explicitly how such a quasi-condensation occurs in interacting bosons in one dimension. Furthermore, it is nontrivial to expect it for the 1D Bose gas that is solvable by the Bethe ansatz. No pair of particles can have the same quasi-momentum in common for a Bethe-ansatz solution. Here we recall that we call the 1D system of bosons interacting with repulsive delta-function potentials the 1D Bose gas. For the impenetrable 1D Bose gas where the coupling constant is taken to infinity, condensate fractions are analytically and numerically studied [44], while in the weak coupling case it is nontrivial to evaluate the fractions in the 1D Bose gas. We thus study in section 4 how the condensation fraction n 0 , i.e., the degree of the quasi-BEC, explicitly depends on the system size L, the number of particles N and the coupling constant c in the ground state of the LL model and particularly in the weak coupling case. It will be an illustrative example. Onsager-Penrose criterion of BEC Let us review the definition of BEC through the one-particle reduced density matrix for a quantum system [13,14]. We assume that the number of particles N is very large but finite. At zero temperature, the density matrix is given byρ = |λ λ|, where |λ denotes the ground state of the quantum system. We define the one-particle reduced density matrix by the partial trace of the density matrix with respect to other degrees of freedom:ρ 1 = N tr 23···Nρ . This matrix is positive definite and hence it is diagonalized asρ Here we put eigenvalues N j in descending order: N 0 ≥ N 1 ≥ N 2 ≥ · · · > 0. The sum of all the eigenvalues is given by the number of particles: j N j = N . Here we recall tr 1ρ1 = N due to the normalization: tr 123···Nρ = 1. Let us denote by n 0 the ratio of the largest eigenvalue N 0 to particle number N : The criterion of BEC due to Penrose and Onsager [14] is given as follows: If the largest eigenvalue N 0 is of order N , i.e., the ratio n 0 is nonzero and finite for large N , then we say that the system exhibits BEC, and we call n 0 the condensate fraction. Here we also define fractions n j by n j = N j /N for j = 1, 2, . . .. Table 1. 
Fraction n sat of the reduced density operator at the origin, ρ 1 (0, 0), to the density n, evaluated by taking the sum over a large number of eigenstates |µ with one particle and one hole (1p1h) or with two particles and two holes (2p2h) for N = L = 50 (n = 1): Numerically we calculate correlation function in Equation (15) by taking the sum over a large number of eigenstates with one particle and one hole (1p1h) and those with two particles and two holes (2p2h). In order to confirm the validity of the restricted sum, we have estimated the ratio of the one-particle reduced density operator at the origin to density n, ρ 1 (0, 0)/n, through the form factor expansion in Equation (15) for the excitations with 1p1h or 2p2h. We express it by n sat . The estimates of n sat are listed in Table 1. The graph of n sat approaches 1 for small coupling constant c, while it is larger than 0.98 for any value of c in the case of N = 50. 4.4. Evaluation of the one-particle reduced density matrix of the ground state For the LL model, the eigenfunctions of the one-particle reduced density matrix are given by plane waves for any nonzero and finite value of c. It is a consequence of the translational invariance of the Hamiltonian of the LL model. We thus have The eigenvalues of the one-particle reduced density matrix, N j , are expressed in terms of the form factor expansion. We consider the sum over all the form factors between the ground state, |λ , and such eigenstates, |µ , that have a given momentum P j as In the LL model we have P j := (2π/L)j. Solving the Bethe ansatz equations for a large number of eigenstates we observe numerically that eigenvalues N j are given in decreasing order with respect to integer j: N 0 > N 1 > N 2 > · · ·. It thus follows that condensate fraction which corresponds to the largest eigenvalue of the one-particle reduced density matrixρ 1 is indeed given by n 0 = N 0 /N , where N 0 has been defined by the sum of Equation (29) over all eigenstates with zero momentum. Condensate fraction in the weak coupling regime The estimates of condensate fraction n 0 are plotted against coupling constant c in the upper panel of Figure 14 over a wide range of c such as from c = 10 −3 to c = 10 3 for different values of particle number N such as N = 4, 10, . . . , 400. For each N , condensate fraction n 0 becomes 1.0 for small c such as c < 0.01, while it decreases with respect to c and approaches an asymptotic value in the large c region such as c > 100 or 1000. The asymptotic values depend on particle number N for N = 4, 10, . . . , 400, and they are consistent with the numerical estimates of occupation numbers for the impenetrable 1D Bose gas (see Equation (56) of Ref. [44]). In the lower panel of Figure 14, we plot fractions n j for j = 0, 1 and 2 against coupling constant c from c = 10 −3 to c = 10 3 with N = 20. The asymptotic values of n j for large c (i.e. c = 1000) are consistent with the numerical estimates for the impenetrable 1D Bose gas (for n 1 and n 2 , see Equations (57) and (58) of Ref. [44], respectively). We observe that condensate fraction n 0 decreases as particle number N increases where density n = N/L is fixed. It is the case for c < 0.1 in the upper panel of Figure 14. Condensate fraction n 0 decreases as N increases even for small c such as c = 0.01, as shown in Figure 15. Thus, it is necessary for coupling constant c to decrease with respect to N so that condensate fraction n 0 remains constant as N increases with fixed density n. 
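Because the system is translation invariant, the natural orbitals are plane waves and the occupation numbers N_j can be read off as Fourier coefficients of the two-point function at the momenta P_j = 2πj/L. The sketch below assumes g1(x) = ρ_1(x, 0) has already been obtained on a uniform grid, for example from the truncated form-factor sum described above, and then extracts N_j and the Penrose-Onsager condensate fraction n_0 = N_0/N; the grid size, toy g1 and momentum cutoff are illustrative choices, not values from the paper.

```python
import numpy as np

def occupation_numbers(g1, L, j_max=10):
    """Occupation numbers N_j of the plane-wave natural orbitals of a
       translation-invariant system: Fourier coefficients of the two-point
       function g1(x) = rho_1(x, 0) at momenta P_j = 2*pi*j/L.
       g1 is sampled on the uniform grid x_k = k*L/M, k = 0, ..., M-1."""
    M = len(g1)
    x = np.arange(M) * L / M
    dx = L / M
    return {j: float(np.real(np.sum(g1 * np.exp(-1j * 2 * np.pi * j * x / L)) * dx))
            for j in range(-j_max, j_max + 1)}

# toy check: for the ideal (c -> 0) gas g1(x) = n, so N_0 = n*L = N and all
# other N_j vanish, giving condensate fraction n_0 = 1
N_part, L_box = 20, 20.0
g1 = np.full(400, N_part / L_box)
Nj = occupation_numbers(g1, L_box)
print(round(Nj[0] / N_part, 3))   # -> 1.0
```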
Exact finite-size scaling We now show the finite-size scaling of condensate fraction n 0 . In Figure 16 each contour line gives the graph of interaction parameter γ as a function of the inverse of particle number N for a fixed value of condensate fraction n 0 . They are plotted for various values of n 0 from n 0 = 0.6 to 0.99, and are obtained by solving the Bethe-ansatz equations numerically. For different values of density such as n = 1, 2 and 5, we have plotted contour lines with fixed values of condensate fraction n 0 in the plane of interaction parameter γ versus inverse particle number 1/N . We have observed that the contours with the same condensate fraction n 0 but for the different densities coincided with each other in the γ Thus, condensate fraction n 0 is constant as particle number N becomes very large if interaction parameter γ is given by the power of particle number N as in Equation (30). Applying the finite-size scaling arguments, we suggest from Equation (30) that condensation fraction n 0 is given by a scaling function φ(·) of a single variable γN η : n 0 = φ(γN η ). Here we recall the coincidence of contours for the different values of density n in Figure 16. We thus observe that exponent η and amplitude A of Equation (30) are determined only by condensate fraction n 0 and are independent of density n. Let us consider amplitude A as a function of n 0 . We denote it by A = f (n 0 ). Then, the scaling function φ(·) is given by the inverse function: n 0 = f −1 (A). In Figure 17, exponent η increases with respect to n 0 , and amplitude A decreases monotonically with respect to n 0 . Quasi-BEC according to the Onsager-Penrose criterion It follows from (30) that BEC does not occur in the 1D Bose gas if we fix parameter γ and density n as system size L goes to infinity. However, if γ is small enough so that it satisfies Equation (30) for a given value of condensate fraction n 0 , the 1D Bose gas shows the quasi-BEC from the viewpoint of the Penrose and Onsager criterion. We suggest that if condensate fraction n 0 of a quantum state is nonzero and finite for large N , the mean-field approximation is valid for the quantum state. For instance, there exist such quantum states that correspond to classical dark-solitons of the GP equation [15], if parameter γ is small enough so that it satisfies Equation (30). Various limiting procedures With the scaling behavior expressed in Equation (30) we derive various ways of the thermodynamic limit such that condensate fraction n 0 is constant. For instance, we consider the case of a finite particle number, N = N f . Choosing a value of n 0 , we determine γ by Equation (30) as γ = A(n 0 )/N η(n 0 ) f . Then, the 1D Bose gas with N = N f has the same condensate fraction n 0 for any large value of L if coupling constant c is given by c = A(n 0 )N 1−η f /L. Let us set η = 1 and N f = 10, for simplicity. We have n 0 = 0.97 in Figure 17, and γ = 0.3 at 1/N = 0.1 in the contour of n 0 = 0.97 in Figure 16. By assuming n = 1, it corresponds to the case of L = 10 and c = 0.3, and we have A = cL = 3, which is consistent with Figure 17. Therefore, the 1D Bose gas with N f = 10 has n 0 = 0.97 for any large L if c is given by c = 0.3/L. Moreover, we may consider other types of thermodynamic limits. When density n is proportional to a power of L as L α , condensate fraction n 0 is constant as L goes to infinity if we set c ∝ L (1−η)(1+α)−1 . 
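Each contour of constant n_0 in Figure 16 is, according to Eq. (30), a straight line in log γ versus log N, so the exponent η(n_0) and amplitude A(n_0) of Figure 17 can be recovered by ordinary least squares on log-transformed contour data. A minimal sketch follows; the worked numbers in the comment are the ones quoted in the text for n_0 = 0.97.

```python
import numpy as np

def fit_scaling_law(N_values, gamma_values):
    """Fit the finite-size scaling form gamma = A / N**eta (Eq. (30)) to a
       contour of constant condensate fraction n0: a straight line in
       log-log coordinates with slope -eta and intercept log(A)."""
    slope, intercept = np.polyfit(np.log(N_values), np.log(gamma_values), 1)
    return -slope, np.exp(intercept)   # eta, A

# worked check using the numbers quoted in the text for n0 = 0.97:
# eta = 1 and gamma = 0.3 at N = 10 imply A = gamma * N**eta = 3,
# consistent with A = c * L = 3 for n = 1, L = 10, c = 0.3
```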
The scaling law in Equation (30) and the estimates of the condensate fraction in the present paper should be useful for estimating conditions in experiments on trapped cold atomic gases in one dimension [45]. For instance, we suggest from Figure 14 that BEC may appear in 1D systems with a small number of bosons, such as N = 20 or 40, for c = 1 or 10. Concluding remarks In the first part, we have shown that the density profile and the square amplitude profile evolve differently in time, in particular for the equal-weight case: in the former the notches are filled progressively, while in the latter the overall amplitude decreases gradually. Furthermore, the Gaussian weights lead to different depths for the two notches of the quantum double dark-soliton [22]. This gives the two notches different speeds, and we observed the scattering of the two notches in the quantum double dark-soliton state exactly. Interestingly, the winding number of the quantum double dark-soliton state changed when the two notches approached each other. Here we recall that it is not necessary for the winding number to be conserved in the time evolution of the quantum system, since it is defined for the corresponding classical system. In the second part, we exactly calculated the condensate fraction of the 1D Bose gas with repulsive interactions by the form factor expansion for the ground state. We have shown the finite-size scaling behavior in which the condensate fraction n_0 is given by a scaling function of the interaction parameter γ times a power of the particle number N: n_0 = φ(γN^η). Consequently, if the parameter γ decreases as γ = A/N^η, the condensate fraction n_0 remains nonzero and constant as the particle number N becomes very large. By modifying the thermodynamic limit in this way, the 1D Bose gas shows quasi-BEC from the viewpoint of the Penrose-Onsager criterion. Acknowledgements The present research is partially supported by Grant-in-Aid for Scientific Research No. 21K03398. K. K. is supported by the Japan Science and Technology Agency (CREST Grant Number JPMJCR19T4).
2022-07-04T06:40:54.870Z
2022-07-01T00:00:00.000
{ "year": 2022, "sha1": "e0ddba087193f6ce183ba316a781216310a0040d", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1751-8121/acc496/pdf", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "8148ca358b8f7ac5f552aec7a773aa2dba3ff24a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
221740361
pes2o/s2orc
v3-fos-license
The Relationship Between Glycemic Control and Concomitant Hypertension on Arterial Stiffness in Type II Diabetes Purpose The impact of glycemic control on macrovascular complications and arterial stiffness in type II diabetes (T2D), as well as the extent of additive effect of hypertension, is unclear. The aims of this study were to investigate the impact of glycemic control on the cardio-ankle vascular index (CAVI), an indicator of arterial stiffness, and to determine the relative risk of concomitant diabetes and hypertension with arterial stiffness. Methods One hundred and nine participants were enrolled and classified as non-diabetes (n= 37) and diabetes (n=72); the diabetic group was further identified as controllable and uncontrollable T2D depending on their hemoglobin A1c (HbA1c) levels. Univariate and multiple regression analyses were used to assess the association between CAVI and glycemic control status and hypertension. Relative risk analysis for abnormal CAVI with exposure to diabetes and hypertension was investigated. Results In all participants, age, systolic blood pressure, body mass index, and fasting blood sugar were independent predictors of CAVI. In diabetic participants, glycemic control status or HbA1c levels did not significantly correlate with CAVI. Systolic blood pressure was an independent predictor for CAVI with β = 0.26. In addition, the coexistence of diabetes together with hypertension was significantly associated with a 2.4-fold increase in the risk of abnormal CAVI (95% CI, 1.410–4.184; p <0.001). Conclusion This study demonstrates that HbA1c as well as fasting blood sugar levels in diabetic participants do not correlate with arterial stiffness. Concomitant diabetes and hypertension significantly increase the risk of arterial stiffness. Introduction Type 2 diabetes (T2D) is a major health problem worldwide with a substantially increasing incidence. 1 T2D increases risk of cardiovascular disease by two to four times, and its impact is as equivalent to that of coronary heart disease. [2][3][4] The major cause of death in T2D patients is cardiovascular disorders, ie coronary heart disease and stroke, which are related to macrovascular dysfunction, a crucial complication of diabetes. 5,6 The assessment of vascular function in diabetic patients is recommended as class IIa, for monitoring vascular complications and predicting cardiovascular events. 7 In the past, brachial-ankle pulse wave velocity (baPWV) was widely used as a gold standard for assess atherosclerosis and arterial stiffness. an equivalent to baPWV for the assessment of arterial stiffness, with blood pressure-independent characteristics. 8 Recent studies have demonstrated that CAVI is associated with plasma glucose levels. It is higher in diabetic when compared with non-diabetic subjects. 9,10 According to the joint statement between the American Heart Association and the American College of Cardiology on the prevention of cardiovascular disease in diabetes, glycated hemoglobin (HbA1c) has been added as a criterion for diagnosis and monitoring of diabetes. 11 The goal of diabetes care is, generally, to keep HbA1c < 7%. However, HbA1c can be kept at < 6.5% or < 8.0% depending on the characteristics of the patient. 11 Despite the fact that hyperglycemia is associated with cardiovascular diseases, studies have shown that the link between HbA1c and macrovascular complications is weaker than that of microvascular complications, and lowering HbA1c has little or no effect on cardiovascular risk. 
7,12 Likewise, the impact of blood sugar control on arterial stiffness is controversial. Ibata et al reported the improvement of CAVI after two weeks of hospitalized hyperglycemia control. 10 Elias et al also demonstrated a higher PWV in T2D patients and found that the risk of arterial stiffness was over nine times higher in uncontrolled T2D compared to non-diabetic patients. 13 On the other hand, Chang et al did not observe any differences in CAVI between controlled and uncontrolled diabetes patients, when HbA1c at 7.5 was used as a cut-off value. 14 Similarly, Tian et al found that age, not HbA1c levels, is an independent predictor for CAVI in T2D. 15 One major confounder influencing vascular function in T2D is hypertension. The coexistence of hypertension and T2D presents in 30-80% of patients and increases the risk of cardiovascular disease. 16 There is evidence showing a correlation between HbA1c levels and arterial stiffness in people with resistance hypertension. 17 The extent of the association between glucose levels, blood pressure, and arterial stiffness parameters is unclear. Tedesco et al found the highest carotid-femoral PWV in the concomitant group compared to the diabetes and hypertension alone groups. The multivariate regression analysis showed that mean arterial blood pressure affected arterial stiffness less than the blood glucose level did. 18 The additive effect of hypertension has been shown in another study showing a weaker effect of diabetes. 19 Therefore, the objectives of this study were to investigate the impact of blood sugar control, determined by HbA1c, and hypertension on arterial stiffness in T2D participants. Study Design and Participants This study was a cross-sectional study approved by the Naresuan University Institutional Review Board (COA No. 360/2016). The study was performed in agreement with the principles of the Declaration of Helsinki. Data were collected at Wang I-Thok Health Promoting Hospital, which is a community hospital located in Wang I-Thok sub-district, Bang Rakam district, Phitsanulok province, Thailand between September and December 2016. One hundred and nine participants aged over 18 years old were recruited. Written informed consent was obtained from all participants. The participants who had renal failure, arrhythmia, alcohol or drug addiction, cerebrovascular disease or peripheral vascular disease were excluded. Participants who had no history of diabetes and blood glucose levels were < 126 mg/dL were classified as non-diabetic participants. The diabetic group comprised of the participants who were diagnosed as T2D by physicians. Type 2 diabetic participants then were divided into two groups including 1) controllable diabetes (HbA1c < 6.5%) and 2) uncontrollable diabetes (HbA1c ≥ 6.5%). 20 Clinical Variables Medical history and medications used were obtained by interviewing. Body weight, percentage of body fat, and percentage of visceral fat were measured using a body composition monitor (Omron Karada Scan Body Composition Monitor HBF-214, Japan). Body mass index (BMI) was calculated as body weight in kilograms divided by the square of height in meters. Waist circumference was measured at the approximate midpoint between the lower margin of the last rib and the top of the iliac crest and hip circumference was measured at the widest point of the buttocks in the standing position. 21 Blood pressure and heart rate were measured twice using an automatic brachial sphygmomanometer (HEM-7130, Omron, Japan). 
The participants were seated and relaxed for 5 minutes before the measurement. Blood tests were performed after a 12-hour fast including the lipid profile, creatinine, fasting blood sugar, and HbA1c. Estimated glomerular filtration rate was calculated from creatinine using the CKD-EPI creatinine equation. The blood test was performed according to the manufacturer's protocols (Human Diagnostics Worldwide, Germany). CAVI Measurements CAVI was measured using a vascular screening device (VaSera1500, Fukuda Denshi Co. Ltd, Tokyo). The measurement was performed with the participants lying in the supine position by applying four blood pressure cuffs to the bilateral upper arms and ankles, placing two electrocardiogram electrodes on both wrists and a microphone on the sternum between the second ribs to detect heart sounds. The examination was performed after 15 minutes of resting in a quiet and temperature-controlled room (25 ± 1°C). CAVI was automatically calculated using Equation 1 based on the stiffness parameter β and Bramwell-Hill equation. 8 CAVI < 8 is classified as normal, 8 ≤ CAVI < 9 is classified as borderline arterial stiffness, and CAVI ≥ 9 is considered as suspected arterial stiffness. In this study, an abnormal CAVI was defined as CAVI ≥ 8, which included borderline and suspected arterial stiffness. where ρ is the blood density of 1.05 g/mL; Ps and Pd are systolic and diastolic blood pressure, respectively; PWV is pulse wave velocity; and a and b are constants. Statistical Analysis The normal distribution of continuous data was tested by the Kolmogorov-Smirnov test. Continuous data are expressed as mean ± standard deviation (SD) for normally distributed data and median (interquartile range, IQR) for non-normally distributed data. In order to compare two independent means, Student's t-test and the Mann-Whitney U-test were performed for normally distributed data and non-normally distributed data, respectively. Categorical data are expressed as numbers and percentages and the differences between the two groups were analyzed using the chi-squared test. Pearson correlation analysis was performed to evaluate the association between CAVI and other clinical variables. Significant potential variables in the Pearson correlation analysis were further analyzed with a stepwise multiple regression analysis to identify independent variables associated with CAVI. To demonstrate the association between risk factors and abnormal CAVI, relative risk with 95% confidence intervals is presented. All data were analyzed using SPSS 23.0. A p-value <0.05 was considered statistically significant. Impact of Hyperglycemia on CAVI Out of 109 participants enrolled in this study, 37 were nondiabetic and 72 were diabetic participants. Table 1 shows the demographic characteristics of all participants. There was no significant difference in the gender balance between the two groups. Average age, systolic blood pressure, and heart rate of the diabetic subjects were significantly higher than those of the non-diabetic subjects. Regarding body composition, the body fat percentage and waist-hip ratio of the diabetic subjects were significantly higher than those of the non-diabetic subjects. The DovePress mean CAVI of the diabetic subjects (8.99 ± 1.23) was significantly higher than that of the non-diabetic subjects (7.89 ± 0.87) at p <0.001. To investigate the factors affecting CAVI, the participants were classified as normal CAVI or abnormal CAVI with a cut-off value of 8 and their basic characteristics were compared. 
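The body of Equation 1 is missing from the extracted text; the sketch below uses the commonly cited VaSera relation CAVI = a[(2ρ/(Ps − Pd)) ln(Ps/Pd) PWV²] + b, which is consistent with the variable list given after the equation. The scale constants a and b are device-specific and not published, so they are left as placeholders, and the example blood-pressure and PWV values are purely illustrative.

```python
import math

MMHG_TO_PA = 133.322        # pressure unit conversion
RHO_BLOOD = 1050.0          # blood density, kg/m^3 (1.05 g/mL)

def cavi(ps_mmhg, pd_mmhg, pwv_m_per_s, a=1.0, b=0.0):
    """Sketch of the CAVI formula referred to as Equation 1 in the text:
       CAVI = a * [(2*rho / (Ps - Pd)) * ln(Ps / Pd) * PWV^2] + b.
       a and b are unpublished device constants; defaults are placeholders."""
    dp = (ps_mmhg - pd_mmhg) * MMHG_TO_PA
    beta_like = (2.0 * RHO_BLOOD / dp) * math.log(ps_mmhg / pd_mmhg) * pwv_m_per_s ** 2
    return a * beta_like + b

# e.g. Ps = 130 mmHg, Pd = 80 mmHg, heart-to-ankle PWV = 8 m/s
print(round(cavi(130, 80, 8.0), 2))
```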
It was found that age, systolic blood pressure, diabetes, and hypertension were significantly different between the two groups. Further analysis by univariate and multiple regression analysis demonstrated that age, systolic blood pressure, BMI, and fasting blood sugar were independent predictors of CAVI. The details are presented in Tables 2 and 3. Impact of Glycemic Control on CAVI The comparison of demographic characteristics and clinical variables between controllable diabetes and uncontrollable diabetes showed that only fat-related parameters (BMI and percentage of body fat) were significantly different (Table 4). We further assessed the factors involved in arterial stiffness in T2D participants and found that CAVI was significantly positively correlated with age, systolic blood pressure, and hypertension and significantly negatively correlated with BMI, body fat, and visceral fat. The stepwise multiple linear regression analysis showed that systolic blood pressure (β = 0.262, p = 0.013) and BMI (β = −0.443, p < 0.001) were independent predictors of CAVI with adjusted R 2 = 0.247. The details are presented in Table 5. Additional Effects of Hypertension and Diabetes on Abnormal CAVI The relative risk analysis was performed according to diabetes and hypertension exposure. It was shown that diabetes was significantly associated with a 2.27-fold increase in the risk of abnormal CAVI (95% CI, 1.293-3.984; p =0.002). Furthermore, diabetes and hypertension were significantly associated with a 2.43-fold increase in the risk of abnormal CAVI (95% CI, 1.410-4.184; p <0.001) ( Table 6). Discussion The main findings of our study were that having diabetes or hypertension was associated with arterial stiffness; however, HbA1c as well as fasting blood sugar levels did not correlate to CAVI in T2D participants. Being hypertensive increased the risk of abnormal CAVI in T2D participants with a relative risk of 2.43-fold compared to healthy participants. In this study, we used CAVI to assess arterial stiffness. It is a non-invasive, simple, reproducible measurement, independent of blood pressure, which is thought to be superior to baPWV. 22 CAVI determines the majority of arteries, ie all of the arterial segments from the heart to ankle, and shows a significant correlation with cardiovascular disease. CAVI was positively correlated with intima media thickness and 10year atherosclerotic cardiovascular disease risk in T2D patients. 23 Thus, CAVI is a simple but effective marker of arterial stiffness and cardiovascular disease. In this study, an abnormal CAVI was defined as CAVI ≥ 8, which included borderline and suspected arterial stiffness. As defined by the manufacturer, CAVI < 8 is classified as normal, 8 ≤ CAVI < 9 is classified as borderline arterial stiffness, and CAVI ≥ 9 is 24 For Thais, Yingchoncharoen and Sritara reported that the CAVI value of 8 was an optimal cutoff to predict coronary artery disease with sensitivity 92%, specificity 63%, and accuracy 70% compared to the presence of coronary artery disease as assessed by 64-slice coronary computed tomography angiography from a large cross-sectional study in Thais (n = 1,391 patients). 25 In the present study, the results show that the mean CAVI and systolic blood pressure of the diabetic participants were significantly higher than those of the nondiabetic participants. Furthermore, correlation analysis demonstrated that age, systolic blood pressure, and fasting blood sugar were independent variables associated with CAVI. 
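The relative risks in Table 6 follow from standard 2×2-table arithmetic; a minimal sketch with a Wald confidence interval on log(RR) is shown below. The event counts in the example are hypothetical, since the underlying contingency table is not reproduced in the text; only the reported RR of 2.43 (95% CI, 1.410-4.184) for concomitant diabetes and hypertension is taken from the paper.

```python
import math

def relative_risk(exposed_events, exposed_total, unexposed_events, unexposed_total, z=1.96):
    """Relative risk of an abnormal CAVI (>= 8) for an exposure group versus
       the unexposed group, with a Wald 95% confidence interval on log(RR)."""
    rr = (exposed_events / exposed_total) / (unexposed_events / unexposed_total)
    se = math.sqrt(1 / exposed_events - 1 / exposed_total
                   + 1 / unexposed_events - 1 / unexposed_total)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, (lo, hi)

# hypothetical counts for illustration only
print(relative_risk(30, 40, 10, 32))
```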
Similarly, Namekata et al reported that systolic and diastolic blood pressure and mean CAVI of diabetes and prediabetic subjects were significantly higher than those of non-diabetic subjects. The prevalence of abnormally high CAVI (mean CAVI + one standard deviation) was significantly higher in a group of ≥40-year-old participants with diabetes and had an increasing trend with age in all participants. It was also found that abnormally high CAVI was significantly associated with diabetes (odds ratios of 2.41 times in men and 2.52 times in women) compared to participants with normal CAVI scores. 9 In the present study, we found that BMI as an independent variable was inversely correlated with CAVI, which supports previous reports. 26,27 The goals of glycemic control in T2D patients are to reduce microvascular complications and improve cardiovascular outcomes. 12,28 The impact of glycemic control on macrovascular complications is still controversial. 12,29 The present study demonstrates that HbA1c levels, or controlled vs uncontrolled DM, are not related to arterial stiffness in T2D patients. These data support previous studies that found no differences in PWV between controlled and uncontrolled DM. 14,15 Our results are also similar to the findings of large clinical trials, ie The Action to Control Cardiovascular Risk in Diabetes (ACCORD) and The Action in Diabetes and Vascular Disease: Preterax and Diamicron MR Controlled Evaluation (ADVANCE), which found that intensified DovePress glycemic control did not significantly reduce macrovascular events. 30,31 However, there is some evidence claiming an effect of glycemic control on arterial stiffness. 10,13 This discrepancy may be due to the differences in the duration and the level of glycemic control and other potential parameters such as obesity, hyperlipidemia, hypertension, and hyperinsulinemia. Particularly, insulin level, apart from hyperglycemia, has been found to affect arterial stiffness. 32,33 The coexistence of diabetes and hypertension is common since they share some pathophysiological aspects, ie obesity and insulin resistance. Our study shows that being hypertensive is an independent predictor of CAVI in all participants and T2D patients. The coexistence of diabetes and hypertension gave a relative risk of 2.43-fold increase in the risk of abnormal CVAI when compared to that of healthy participants, which was higher than that of diabetic or hypertensive alone participants. The result is related to the studies of Tedesco et al and de Oliveira Alvim et al, who used PWV as an indicator of arterial stiffness and also found the worst arterial stiffness in diabetic participants with hypertension. 18,19 The negative influence of diabetes and hypertension on arterial stiffness may occur through structural and functional changes to the vascular wall via both separate mechanisms and shared mechanisms. In diabetes, the main mechanism of increased arterial stiffness is the enhanced generation and accumulation of advanced glycation end products (AGEs) in the vascular wall, causing excessive crosslinking between AGEs and collagen molecules of the extracellular matrix (ECM) and resulting in intimal medial thickening and stiffening of arterial walls. 34 AGEs and their receptors also affect the arterial wall stiffness via a receptor-mediated endothelial dysfunction and inflammation process. 35,36 Additionally, oxidative stress is closely related to increased arterial stiffness. 
Chronic hyperglycemia can increase free radicals through glucose autooxidation, protein glycation, and activation of polyol pathway resulting in lipid peroxidation and protein oxidation of cellular structures. 37 These adverse events lead to progressive endothelial cell injury and it is implicated in accelerated arterial stiffening. There was evidence indicated the upregulation of metalloproteinases (MMPs) in diabetes, especially MMP-9 which mediated elastin fragmentation as well as medial arterial calcification. 38 MMP-12 is also associated with greater arterial stiffness. 39 Another mechanism is related to endothelial dysfunction in diabetes through alterations to many vasoactive substances including reduced nitric oxide (NO) bioavailability and activation of vascular reninangiotensin-aldosterone-system. 40 In hypertension, arterial stiffening occurs as a result of increased intraluminal pressure causes augment pulsatile stress resulting in elastin degradation and subsequent stimulation of collagen production. [41][42][43] Moreover, it has been found a possible link between increased levels of MMP-9 and arterial stiffness in hypertension. 44 Levels of MMP-9 associate with aortic PWV in hypertensive patients. Calcification and low-grade inflammation in the arterial wall also cause increased arterial stiffness in hypertension. 45,46 Furthermore, the elevation of angiotensin II and aldosterone levels in hypertension have been shown to be associated with collagen turnover and arterial wall fibrosis and subsequent vascular damage along with increased arterial 47 On the other hand, increased arterial stiffness has been shown to precede the development of hypertension. 48 Our limitation is that this cross-sectional study design did not allow us to follow the development of arterial stiffness and other vascular complications. Therefore, a more large-scale longitudinal study should be carried out to determine the complex cause-effect relationship between diabetes and hypertension on arterial stiffness. In this study, we did not measure HbA1c levels in healthy subjects. Therefore, only fasting blood sugar was used to study the relationship between glycemic status and CAVI in all participants. However, the close relationship between HbA1c and fasting blood sugar may imply the association between HbA1c and CAVI. 49 Further studies are needed to analyze some clinical aspects, especially drug therapy, disease duration, healthrelated quality of life, postprandial glucose tests, and insulin levels. These data might explain how some hypoglycemic agents and antihypertensive drugs differentially affect vascular function and arterial stiffness. In the past decade, the management of diabetes has been studied and updated extensively. It is recommended that the treatment of comorbidities such as hyperlipidemia and hypertension is necessary and might be more effective than lowering blood sugar alone in the prevention/reduction of cardiovascular events. 6 The dissociation between glycemic control and arterial stiffness and the additive effect of hypertension found in our study may be useful for clinical considerations in the monitoring and management of T2D patients, especially in individualized glycemic control. However, issues about intensified glucose control and the effects of insulin resistance should be clarified for better management and the search for novel anti-diabetic agents. 
Conclusion This study demonstrated that HbA1c as well as fasting blood sugar levels in diabetic participants do not correlate with arterial stiffness. Concomitant diabetes and hypertension significantly increase the risk of arterial stiffness.
2020-09-17T05:06:50.835Z
2020-08-25T00:00:00.000
{ "year": 2020, "sha1": "3bba561734e4f30cb96d646136fca75c60db7e95", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "3bba561734e4f30cb96d646136fca75c60db7e95", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248290981
pes2o/s2orc
v3-fos-license
Iron Bioavailability from Ferrous Ammonium Phosphate, Ferrous Sulfate, and Ferric Pyrophosphate in an Instant Milk Drink—A Stable Isotope Study in Children Ferrous ammonium phosphate (FAP) is an iron salt that has been developed for the fortification of food matrices sensitive to color and flavor changes. The objective of the study was to measure iron absorption from FAP in young children and compare it to a previous evaluation of FAP in young women. A double-blind randomized crossover study with two parallel arms was used to evaluate the iron absorption from FAP added to reconstituted milk powder in comparison to that from ferrous sulfate (FeSO4) and ferric pyrophosphate (FePP). Iron absorption was measured in 39 children aged 3- to 6-years-old using erythrocyte incorporation of stable Fe isotopes (57Fe, 58Fe). The geometric mean iron absorption in iron replete children from FAP, FeSO4 and FePP from milk was 8.3%, 7.6% and 2.1%, respectively. Iron absorption from FAP and FeSO4 fortified milk was not significantly different (p = 0.199); however, it was significantly higher than from FePP fortified milk (p < 0.001). Iron bioavailability from FAP and FePP relative to FeSO4 (relative bioavailability (RBV)) was 110% and 33%, respectively. The RBV of FAP (110%) in iron replete children was higher than previously reported RBV (71%) in mainly iron deficient women. The difference in iron status between the children and women in the respective studies may explain the different RBV values and is discussed. Introduction Anemia affects a third of the world's population [1]. Based on 2011 global estimates, 43% of preschool children and 33% of nonpregnant women were anemic, with the highest burden in Africa and South Asia [2]. The etiology of anemia is varied and complex with iron deficiency (ID), inflammation, hemoglobinopathies, and hookworm being important causes [1,3]. Although ID is the main driver in high-income countries, in low-and middle-income counties, especially those in sub-Saharan Africa with widespread infections, inflammation may be the major cause of anemia. Recent estimates suggest that only 25% and 37% of anemia in, respectively, preschool children and women of reproductive age living in countries with widespread infections and inflammation is associated with ID [4]. Iron is essential for hemoglobin synthesis, for a range of key enzymes essential for normal brain development in the fetus and the child, for optimum immune defense, and for efficient energy production [5]. Food fortification with iron is generally regarded as the most cost effective and sustainable long-term approach for decreasing the prevalence of ID [6], and iron compounds used for food fortification must be carefully selected with respect to bioavailability and their potential to cause unacceptable sensory changes to the food product. For a given food product, the iron compound chosen is that which has the highest bioavailability without provoking unacceptable color, flavor, odor, or texture changes in the food during the production, storage, or preparation for consumption. In relation to milk powders, this includes changes after reconstitution with water. It is generally accepted that more water-soluble iron salts have a higher bioavailability, but also more potential to cause unacceptable sensory changes in the product. On the other hand, the more insoluble compounds create fewer sensory problems but are often less well absorbed. 
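The RBV figures quoted in the abstract above are ratios of geometric mean fractional absorptions, the usual summary statistic for iron-absorption data because individual values are roughly log-normally distributed. A minimal sketch of that calculation follows; the per-subject absorption values are made up for illustration, whereas the study's actual geometric means are 8.3% (FAP), 7.6% (FeSO4) and 2.1% (FePP).

```python
import numpy as np

def geometric_mean(fractions):
    """Geometric mean of individual fractional-absorption values (%)."""
    return float(np.exp(np.mean(np.log(np.asarray(fractions)))))

def relative_bioavailability(test_fractions, ref_fractions):
    """RBV (%) of a test compound versus the FeSO4 reference."""
    return 100.0 * geometric_mean(test_fractions) / geometric_mean(ref_fractions)

# illustrative per-subject values only
fap, feso4 = [6.1, 9.2, 10.1, 7.8], [5.9, 8.0, 9.5, 7.1]
print(round(relative_bioavailability(fap, feso4), 1))
```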
Water soluble iron compounds, such as ferrous sulphate (FeSO 4 ), the reference salt for iron absorption [7], frequently cause the most adverse sensory changes in sensitive foods. This is because free, solubilized iron has a distinct metallic taste; it can form unacceptable colored complexes with polyphenol compounds in fruits and vegetables, and can oxidize fats in lipid-containing foods such as wheat flour, whole milk or full cream milk powders [6]. Water insoluble compounds, on the other hand, cause no or few sensory changes but they may be less well absorbed as they may not dissolve completely in the dilute acid of gastric juice during digestion. Whole milk powders are commonly used as a fortified food source to provide additional iron to children older than 3 years. Ferric pyrophosphate (FePP) is widely used to fortify milk powders targeted at older children. It is insoluble in water and causes few if any sensory changes to foods. After processing and reconstitution with water, the FePP remains suspended in the milk drink attached to the other milk constituents. However, due to its poor solubility at gastric pH, iron absorption from FePP is low [8][9][10] and reported to be about 30% compared to the well absorbed FeSO 4 in milk [11]. Another iron compound widely used to fortify foods sensitive to sensory changes [6, [12][13][14][15] is ferrous fumarate. This compound is poorly soluble in water and highly soluble in gastric juice, as indicated by reports showing that ferrous fumarate has a similar fractional iron absorption to FeSO 4 in iron replete infants, school children and women [12][13][14]. However, it has a deep red color and is not a suitable fortificant for whole milk powder [16]. Ferrous ammonium phosphate (FAP) is light green and was developed for the fortification of milk powder. It has a molecular weight of 168.85 g/mol, contains approximately 30% iron (w/w), and its approximate cost per mg iron relative to FeSO 4 and FePP is 6.0 and 1.7, respectively [11]. It is much more soluble in dilute acid than FePP e.g., 57.8% vs. 11.6% at pH 1.7 [11], but somewhat less soluble than ferrous fumarate (i.e., 100% at pH 2) [17]. It has been classed as generally regarded as safe (GRAS) for the general population aged over 3 years by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) [18] and by the European Food Safety Authority [19]. The relative absorption of FAP was first measured in young women consuming a full cream instant milk drink fortified with FAP and compared to that of FePP and FeSO 4 [11]. The authors reported significantly lower iron absorption from both FAP and FePP than from FeSO 4 (7.4% and 3.3%, respectively, vs. 10.4%, p = 0.001, corresponding to a relative iron bioavailability to FeSO 4 (RBV) of 71 and 33%); yet iron absorption from FAP was significantly better than from FePP (p < 0.0001). As the target consumer group for fortified full cream milk powder are children, the study has been repeated in this population. The objective of this trial was to compare the fractional iron absorption in children aged 3-6 years from FAP added to a reconstituted whole milk powder with that of FeSO 4 and FePP. Iron absorption was measured using the erythrocyte incorporation of stable isotopic labels 14 days after the consumption of the isotopically labeled test drinks. Subjects A total of 40 apparently healthy children were recruited at the Kindergarten (Bicutan Elementary School) in Metro Manila, Philippines by the Food and Nutrition Research Institute (FNRI). 
Forty children fulfilling the inclusion criteria, i.e., age 3-6 years (inclusive), a normal BMI for age according to the WHO standard, non-anemic according to the WHO cutoff, apparently healthy, no intake of medication or vitamin or mineral supplements 2 weeks before the start of the study and during the study, no participation in another clinical trial, and the ability to comply with the study protocol, were included. The study was conducted at the FNRI in November and December 2009 according to the guidelines laid down in the Declaration of Helsinki. The study protocol was approved by the FNRI Institutional Ethics Review Committee, Manila, Philippines and the ETH Ethics Committee, Zurich, Switzerland, and written informed consent was obtained from all parents of the participating children. The study was conducted according to GCP guidelines. Study Design A double-blind randomized crossover study with two parallel arms was applied to limit the duration of the study. Within each arm, the subjects crossed over on two treatments, i.e., FeSO 4 vs. FAP (A vs. B) in the first arm, and FeSO 4 vs. FePP (A vs. C) in the second one over 4 days. In arm 1, 20 subjects were randomly assigned to start the drink consumption by the sequence ABAB or BABA. In the second arm, 10 subjects were randomly assigned to the sequence ACAC, and 10 others to CACA. The sample size was established assuming the comparable effect size and variability as observed in adults [11], with two-sided alpha = 5% and power = 80%. Venous blood samples were drawn after an overnight fast for the determination of the iron status parameters, i.e., hemoglobin (Hb), ferritin, and C-reactive protein (CRP) as an inflammation marker, in the week preceding the test meal administration. The body weight and height were measured. On day 1, the first labeled test meal was administered after an overnight fast. The following day (day 2), the second test meal was administered according to the same procedure, as well as on days 3 and 4. There was no washout period between the 4 days of labelled test drink consumption to be close to the pattern of milk consumption in this age group, i.e., one serving of milk per day. A second venous blood sample was drawn 14 days after the consumption of the last test drink (day 18). The study design is provided in Supplementary Material Figure S1. Isotopic Labels Isotopically labeled FeSO 4 ( 58 Fe) and FAP ( 57 Fe) and FePP ( 57 Fe) were prepared by Dr. Paul Lohmann GmbH (Emmerthal, Germany) from isotopically enriched elemental iron (Chemgas, Boulogne, France) using a lab scale procedure that follows closely the procedures employed for the production of commercially available products. The iron content of the compounds was assessed via isotope dilution mass spectrometry using a gravimetric standard prepared from a certified iron isotopic reference material (IRM-014, EU JRC Institute of Reference Material and Measurements, Geel, Belgium). The isotopic enrichment of 58 Fe as FeSO 4 was 99.5%, 57 Fe as FAP was 97.5% and 57 Fe as FePP was 97.9%. The isotopically labeled compounds were weighed into vials at the Human Nutrition Laboratory (HNL), ETH Zurich, transported to the study site, and shortly before consumption the respective vial was emptied into the test meal. The exact amount of the compound was determined by weighing the vial before and after emptying. A straw was used to mix the isotopically labeled compound with the test drink and the straw was used to consume the test drink. 
The emptied glasses were rinsed twice with 20 mL water and the washings were consumed to ensure the complete intake of the isotopically labeled compounds and test drink. Test Drink The test drinks consisted of full cream milk powder produced at Nestlé Product Technology Center, Konolfingen, Switzerland and reconstituted with purified water. The milk powder was produced according to the specifications for a commercial milk powder but without added iron. The milk powder (26 g) was weighed into a plastic glass; then, 180 g water was added and mixed with a plastic spoon. The milk drink contained 6.5 g protein, 7.4 g total fat, 9.7 g carbohydrates, and 22.6 mg ascorbic acid. Isotopically labeled iron compounds were added, as described above, at the level of 2 mg iron per test drink, thus corresponding to a fortification level of 10 mg iron per liter of prepared milk. Test Drink Administration The subjects were reminded to refrain from the intake of foods (after 8 pm) and drinks (after midnight) until the administration of the test drinks on the following morning. On day 1, the first labeled test drink was administered after an overnight fast and the following day (day 2), the second test meal was administered according to the same procedure. This was repeated on days 3 and 4. The test drinks were fed under strictly standardized conditions under close supervision of the investigators. No intake of food and fluids was allowed 3 h after the feeding and the children were under supervision at FNRI during this time. Compliance with the protocol was monitored by questioning the subjects on the following visit. Blood Sampling and Analysis Blood samples were drawn by experienced medical technologists using EDTA coated vacutainers. The first blood sample was drawn in the week preceding the test drink administration to determine the iron status and a second blood sample 14 days after administration of the last test drink for isotopic analysis. Hb was measured in whole blood on the day of collection using the cyanmethemoglobin method. Plasma was separated for ferritin and CRP analysis using a radioimmuno and immuno-turbidimetric assay, respectively. These measurements were performed at FNRI and a service laboratory in Manila. Anemia was defined as Hb < 110 g/L for children <60 months and Hb < 115 g/L for children 60-72 months. Iron deficiency was defined as ferritin <15 µg/L [6]. The normal range for CRP was 0.1-2.8 mg/L. Isotopically enriched blood samples were analyzed in duplicate for iron isotopic composition using chemical blank monitoring. Whole blood samples were mineralized by microwave assisted digestion (MLS Ethos, MLS, Leutkirch, Germany) using a mixture of HNO 3 and H 2 O 2 , which was followed by the separation of the sample iron from the matrix by anion-exchange chromatography and a solventsolvent extraction step into diethylether [20]. All isotopic analyses were performed at HNL by negative thermal ionization mass spectrometry (NTI-MS) using a magnetic sector field mass spectrometer (MAT 262, Thermo-Finnigan, Bremen, Germany) equipped with a multi-collector system for the simultaneous detection of generated FeF 4 − ions [20]. 
Calculation of Iron Absorption Based on the shift of the iron isotope ratios in the blood samples and the amount of iron circulating in the body, the amounts of 57 Fe and 58 Fe isotopic label present in the blood 14 days after the last test meal administrations was calculated based on the principles of isotope dilution and considering that the iron isotopic labels are not monoisotopic [21]. The circulating iron was calculated based on blood volume and Hb concentration [22]. The blood volume calculations were based on the body weight and height [23]. For the calculations of fractional absorption, 90% incorporation of the absorbed iron into red blood cells was assumed. Food Analysis The iron and calcium content of the milk powder were determined in triplicate analysis, respectively, by graphite furnace and flame atomic absorption spectrometry (AA240, Agilent Technologies, Santa Clara, CA, USA), after microwave-assisted mineralization in a mixture of HNO 3 and H 2 O 2 (MLS Ethos). Statistical Analysis The iron absorption was log10-transformed to achieve normality. Log10 iron absorption was analyzed by a mixed model that contained the sequence and treatment (FePP, FAP, FeSO 4 ) as fixed effects and the subject as a random effect. The mixed model modeled the two crossover arms jointly. The treatment differences for FeSO 4 vs. FAP and FeSO 4 vs. FePP were estimated within the subjects, and the treatment difference for FAP vs. FePP was estimated between the subjects. The two treatment differences were estimated from the mixed model via appropriate contrasts. Additionally, the p values were adjusted for multiple tests. Thus, the experiment-wise false-positive rate was controlled at a 5% level. The model-based estimates of the log10(x) mean absorption rates were back-transformed and called geometric means. An analysis was performed with R (Version 2.6.1, R Foundation for Statistical Computing, Vienna, Austria) using the libraries "nlme" and "multcomp". Subjects Twenty children were included in each arm; one subject from arm 2 was excluded after the second visit due to increased body temperature. Thus, 20 and 19 subjects completed the study in arm 1 and 2, respectively. The subjects' characteristics and iron status are presented in Table 1. There was no significant difference between the two groups for any of the indicators. None of the subjects was anemic or iron deficient. Elevated CRP values were detected in 8 children of the 40 selected for the study during the screening (i.e., CRP > 2.8 mg/L; 3 subjects in arm 1 and 5 subjects in arm 2), and they were not retested on the days of feeding. However, none of the children (except one excluded at the 2nd visit) had signs of infections as determined by body temperature on the days of feeding. Tested Drink The native iron and calcium content of the test drinks were 56.2 ± 0.5 µg and 245 ± 7 mg per 200 mL serving, respectively. The average amount of added labelled iron as FeSO 4 , FAP and FePP was 1.9 ± 0.24 mg per 200 mL serving. The calculated molar ratio of ascorbic acid to iron was 3.5:1. Iron Absorption The results of iron absorption (geometric mean (−SD; +SD) from the fortified milk are shown in Table 2. The iron absorption from FAP was 8.3% (4.36, 15.84) and slightly, but not significantly higher than from the milk fortified with FeSO 4 , which was 7.6% (3.93, 14.68) (p = 0.199). 
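The absorption calculation outlined above (circulating iron from blood volume and Hb, and 90% incorporation of the absorbed label into red blood cells) can be sketched as follows. This is an illustration only: the numeric values, the simple per-kilogram blood-volume estimate and the 3.47 mg iron per g haemoglobin constant are assumptions rather than study data, the study itself derived blood volume from body weight and height [23], and the isotope-dilution step that converts the measured ratio shifts into milligrams of label is omitted.

```python
FE_PER_G_HB_MG = 3.47  # mg iron per g haemoglobin; a commonly used constant, assumed here

def circulating_iron_mg(blood_volume_l, hb_g_per_l):
    """Iron circulating in red cells = blood volume (L) x Hb (g/L) x mg Fe per g Hb."""
    return blood_volume_l * hb_g_per_l * FE_PER_G_HB_MG

def fractional_absorption(label_in_rbc_mg, label_fed_mg, rbc_incorporation=0.90):
    """Back-calculate fractional absorption assuming 90% of absorbed label ends up in red cells."""
    return (label_in_rbc_mg / rbc_incorporation) / label_fed_mg

# Illustrative numbers only (not study data):
blood_volume = 1.5      # L, rough estimate for a ~20 kg child; the study used a
                        # height-and-weight-based formula [23] instead
hb = 125.0              # g/L
body_iron = circulating_iron_mg(blood_volume, hb)   # ~651 mg circulating iron
label_in_rbc = 0.28     # mg of 57Fe label found in circulation by isotope dilution
label_fed = 3.8         # mg of 57Fe label fed in total (two labelled test drinks)
print(body_iron, fractional_absorption(label_in_rbc, label_fed))  # ~651 mg, ~0.082 (8.2%)
```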
In arm 2, the geometric mean iron absorption from the milk fortified with FePP was low at 2.1% (1.08, 4.05), while the absorption from the same drink fortified with FeSO 4 was significantly higher at 6.24% (3.25, 11.96) (p < 0.001). The relative iron absorption to FeSO 4 (RBV) from FAP and FePP were 110% and 33%, respectively. A comparison between the arms showed no difference for FeSO 4 (p = 0.358); however, the iron absorption from FAP was significantly higher than that from FePP (p < 0.001). Table 2. Iron absorption in children, descriptive statistics (Geometric mean % −1SD; +1SD, and log10 (Mean ± SD)) per treatment, and comparison in each study arm, as well as the estimated effect between the study arms. The effect of plasma ferritin on iron absorption was investigated using mixed models. The slopes were found to be not significantly different from 0 and were not influenced by the salt type, neither in arm 1 (p = 0.911) nor in arm 2 (p = 0.838). Discussion This study is the first study to evaluate iron absorption from FAP in children. It demonstrated that the fractional iron absorption by young children from FAP added to reconstituted whole milk powder is similar to that of FeSO 4 , and 3-fold higher than FePP. The result, however, was somewhat unexpected as, in the earlier study, iron absorption from FAP in young women was only 71% of that from FeSO 4 (p < 0.0002), as showed in Table 3. One possible reason for this difference in the RBV between the two studies is that the subjects in the previous FAP study [11] were women with low iron status (mean serum ferritin 18 µg/L, 7 subjects out of 19 with iron deficiency), while the children in the present FAP study were all iron replete (mean serum ferritin 59 µg/L, no iron deficiency). Evidence to support this hypothesis comes from reports that other poorly water-soluble iron compounds, such as ferrous fumarate and FePP, give much lower RBV values in subjects with ID than they do in iron replete subjects. For instance, in infants, young children (2-5 yo), and adult women consuming a milk-maize drink, iron from ferrous fumarate was absorbed to the same extent as from FeSO 4 (the RBV was close to 100% in all population groups) [12]. This contrasts with results from studies made in Mexico [24] and Bangladesh [25], which indeed showed that iron absorption from ferrous fumarate, in mostly iron deficient young children, was only 30% of that from FeSO 4 (RBV = 30). Differences in iron status have also been reported to influence the RBV of FePP in adults [7,26,27], with RBVs in single subjects calculated to be as low as 15% in a women with ID (serum ferritin < 5 µg /L) and approaching 100% for women with adequate iron status (serum ferritin >35 µg /L ) [7]. The explanation given for this observation with FePP, was that iron deficient subjects upregulated iron absorption more efficiently from FeSO 4 , a compound that readily dissolves in the gastric juice during digestion, than from FePP, a compound that only poorly dissolves in the gastric juice during digestion [26,27]. It is not clear whether the same explanation could explain the low RBV values for ferrous fumarate reported in iron deficient children, as ferrous fumarate would be expected to be close to a complete dissolution during digestion, in both iron replete and iron deficient subjects, but the iron would dissolve at a much slower rate. 
It is possible, therefore, that the rate of dissolution of the iron compound in the gastric juice also influences the efficiency of upregulating iron absorption in ID. Another explanation could be the efficiency of gastric acid dissolution of ferrous fumarate was somehow diminished in the malnourished, young, iron deficient children. When the body upregulates iron absorption because of iron deficiency, the lower iron concentrations in the intestinal cells from these less soluble iron compounds may not be able to match the rate of iron transfer from the enterocyte into the plasma that is achieved when FeSO 4 is used for iron fortification. In subjects with normal iron status, when iron absorption is not upregulated, the iron concentration in the enterocytes originating from ferrous fumarate is sufficient to be transferred into the plasma at the same rate as FeSO 4 . FAP is more soluble in dilute acid than FePP, but less soluble than ferrous fumarate, so would also be expected to give lower RBV values in iron deficient subjects, explaining the lower RBV of FAP reported earlier in the women with ID [11]. Consequently, it is probable that the absorption of iron from FAP in iron deficient children would also be significantly lower than that of FeSO 4 . It should be noted that the RBV values reported are obtained from a single meal absorption study and that, over long-term consumption of a fortified food, the children would gradually recover from iron deficiency. At this stage, the absorption value of FAP (and ferrous fumarate) would likely be similar to that of FeSO 4 . Milk powders and milk products are frequently used as vehicles for iron fortification, as well as vehicles for the provision of vitamins and other trace elements to populations at risk of micronutrient deficiencies. In relation to iron fortification, commercially fortified whole milk powder targeted at children >3 years is usually fortified with FePP, so as to obtain the optimum sensory properties, and ascorbic acid is added to improve iron absorption. Milk contains two components that can impair iron absorption. These are calcium [28] and proteins (casein and whey when evaluated from a semi-synthetic liquid meal) [29]. Their impact on iron absorption is reported to be much less strong than phytic acid or polyphenols and their modest inhibition in milk products is adequately overcome by the addition of ascorbic acid [30,31]. The WHO recommends that ascorbic acid be added with iron at a 2:1 molar ratio in milk matrices [6]. Ratios lower than this have resulted in very low fractional iron absorption from FePP added to milk-based products, even in mildly anemic children [10]. Our study indicates that FAP would be a much better iron compound than FePP for the fortification of whole dried milk powders targeted at children >3 years. Iron absorption from FAP by iron replete children was about 8% and, with an iron concentration of 10 mg per liter, a serving of 200 mL fortified milk would provide about 160 µg of absorbed iron. The daily iron requirement for absorbed iron in children aged 4-6 years is 0.5 mg per day [32], and a serving of FAP fortified milk co-fortified with ascorbic acid would cover about 30% of this requirement. Conversely, taking 2.1% as the iron absorption from FePP, a serving of FePP fortified milk would only cover about 8% of the -requirement for absorbed iron in this age group. 
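The relative bioavailability values and the per-serving estimates quoted above follow from simple arithmetic on the reported absorption figures. The small discrepancies (for example, 109% here versus the reported 110%) arise because the paper's RBVs come from within-subject mixed-model contrasts rather than from naive ratios of geometric means.

```python
# Back-of-the-envelope check of the RBV and per-serving figures quoted above.
rbv_fap  = 8.3 / 7.6  * 100      # arm 1: FAP vs FeSO4  -> ~109%, reported as 110%
rbv_fepp = 2.1 / 6.24 * 100      # arm 2: FePP vs FeSO4 -> ~34%,  reported as 33%

fe_per_serving_mg = 10 * 200 / 1000                     # 10 mg/L x 200 mL = 2 mg iron per serving
absorbed_fap_ug   = fe_per_serving_mg * 0.083 * 1000    # ~166 ug, quoted as ~160 ug
absorbed_fepp_ug  = fe_per_serving_mg * 0.021 * 1000    # ~42 ug

requirement_ug = 500             # 0.5 mg absorbed iron per day for 4-6-year-olds [32]
print(round(rbv_fap), round(rbv_fepp))
print(absorbed_fap_ug / requirement_ug,    # ~0.33 -> "about 30%" of the daily requirement
      absorbed_fepp_ug / requirement_ug)   # ~0.08 -> "about 8%"
```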
Such a low contribution may be part of the explanation for the significant but still marginal effect of the fortified milk and cereals reported in children >5 years old in decreasing iron deficiency anemia when FePP is used [33]. A way to compensate for the low FePP bioavailability would be to increase the level of fortification as advised by the WHO [6,34]. Although iron deficient children would likely upregulate iron absorption from FAP to a lesser extent than FeSO 4 , fractional iron absorption from FAP fortified reconstituted milk powder is likely be at least similar, if not higher, than that reported in the present study. Regular consumption would, thus, gradually correct iron deficiency, and then the iron fortified milk would help to maintain an adequate iron status. An alternative compound that could be used to fortify whole milk powders is ferrous bisglycinate. This iron chelate is reported to not dissociate, or to partially dissociate, in dilute acid. Until now, ferrous bisglycinate (without the addition of ascorbic acid) has been the only iron compound used to fortify liquid milk. This chelate overcomes, to some extent, the inhibitors of iron absorption in milk and is better absorbed than FeSO 4 (in the absence of ascorbic acid). It is reported to be used in the National Costa Rican fortification program for liquid milk and powdered milk (i.e., a level of 1.4 mg iron/serving) [35,36]. Its wider use, however, is prevented by its high cost and potential to cause sensory changes at higher dose and/or in some sensitive foods [37]. Another promising recent approach is the development of a soluble casein-iron-phosphate complex, which can be added to milk with no unacceptable sensory changes. This compound, described as ferric phosphate clusters stabilized in solution by casein molecules, has been reported to be as well absorbed as FeSO 4 from milk both in vitro and in isotopic studies in human subjects [38,39]. As with FeSO 4 , the casein-iron-phosphate complex would require the addition of ascorbic acid to ensure an optimal absorption. Our study has several strengths: it is the first study evaluating iron absorption from FAP in milk in children, a target group that consumes fortified milk; and the use of stable isotopes and intrinsic labelling has allowed sensitive and specific iron measurement. A limitation of the study is that the RBVs of FAP and FePP to FeSO 4 were measured in two separate study arms to keep the same design as in the previous study in women, while the absorption of the three iron salts could have been measured in a same group using 54 Fe. Nevertheless, as the iron status of the subjects in the two arms was not significantly different, the comparison of the iron absorption from FAP and FePP was possible without a normalization of the data for this parameter. In conclusion, the iron from milk fortified with FAP and ascorbic acid was well absorbed. The fractional iron absorption from FAP in iron replete children was similar to that from the soluble FeSO 4 reference compound. The RBV of FAP (110%) in children of good iron status was higher than the RBV (71%) previously reported in women, with iron deficiency. The difference in iron status between the children and the women in the respective studies is suggested to be the reason for the different RBV values. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10 .3390/nu14081640/s1, Figure S1: Study design. 
Stable isotopes were administered on 4 consecutive days, alternating between treatments, with the test drink (milk), i.e., A: ferrous sulfate labelled with 58Fe; B: ferrous ammonium phosphate labelled with 57Fe; C: ferric pyrophosphate labelled with 57Fe, after the subjects had fasted overnight. Author Contributions: The authors' contributions were as follows: R.F.H., T.P.T., A.C.M., R.S.S., P.K., Q.L. and I.E. designed the study; I.E., T.P.T., A.C.M., R.S.S., J.T.F. and C.Z. conducted the study; I.E., C.Z., P.K., A.R., M.S. and R.F.H. analyzed the data and performed the statistical analyses. All authors participated in the data interpretation; R.F.H., I.E., A.R. and M.S. wrote the first draft of the manuscript, and all authors edited, read, and approved the final manuscript. All authors have read and agreed to the published version of the manuscript. Informed Consent Statement: Written informed consent was obtained from all parents of the participating children. The study was conducted according to GCP guidelines. Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy restrictions.
How the permutation of edges of a metric graph affects the number of points moving along the edges

We consider a dynamical system on a metric graph that corresponds to a semiclassical solution of a time-dependent Schrödinger equation. We omit all details concerning mathematical physics and work with a purely discrete problem. We find a weak inequality representation for the number of points coming out of the vertex of an arbitrary tree graph. We apply this construction to an "H-junction" graph. We calculate the difference between the numbers of moving points corresponding to a permutation of edges. Then we find the symmetric difference of the number of points moving along the edges of a metric graph.

Introduction

Let us consider a finite metric graph (the edges of this graph are regular smooth curves with finite length, e.g., [1]) and the following dynamical system on it. One point moves along the graph at the initial time. In the interior vertices of the graph, it can be divided as follows: if k points come to a vertex of valence v at the same time, then v points are released, i.e. one point will correspond to one edge. Reflection occurs at vertices of valence one. The time for passing each individual edge (travel or propagation time) is fixed. It is assumed that there are no turning points on the edges. The problem is to analyze the asymptotic behavior of the number of such points on the graph as time increases. This discrete formulation is a simplification of the problem that arises in the analysis of semiclassical solutions of the Schrödinger equation and, in particular, in the study of the behavior of Gaussian packets on a metric graph (e.g., article [5] and references therein). Differential equations and analysis on metric graphs continue to attract great interest among mathematicians and physicists. Books [1], [2] and the references in them can be recommended to interested readers. A number of experts are now engaged in quantum mechanics on graphs, for instance, articles [3], [4]. All necessary definitions related to the study of the statistics of Gaussian packets on a metric graph can be found in article [5]. It was shown in [5] that the leading coefficient of the asymptotics for the number of moving points on a finite compact metric graph with increasing time is, for almost all incommensurate propagation times, determined only by the number of edges, the number of vertices of the graph, and the sum and the product of the propagation times of all edges. The next question arises naturally: what characteristics determine the subsequent terms of the asymptotic expansion? It is impossible to obtain an explicit formula in the general case, but for almost all edge propagation times it can be done, if we construct an asymptotic expansion for the number of lattice points in an expanding simplex of dimension greater than two. An overview of the results associated with this well-known problem can be found in [6]. In the present article, we turn to the discrete formulation and show how to reduce the problem of finding the number of moving points for a finite tree graph to a number-theoretic problem. To do this, we write an exact formula for the number of points by expressing it in terms of the number of nonnegative integer solutions of weak inequalities, which from a geometric point of view correspond to simplices. In the calculations we use a function which can be linked with the Sprague-Grundy function (e.g., [8]) for some games.
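For readers who prefer a concrete picture of the dynamics just described, the counting of moving points can be reproduced with a small event-driven simulation. This is only an illustrative sketch, not the paper's method: the initial condition (points emitted from the starting vertex along all of its incident edges at time zero), the example graph and the horizon T are assumptions, and event times are kept as integer coefficient vectors over the propagation times so that simultaneous arrivals merge exactly even when the times are irrational.

```python
import heapq
from itertools import count

def count_moving_points(edges, times, start, T):
    """Event-driven sketch of the dynamics described above.

    edges : list of (u, v) vertex pairs (a tree in the paper's setting)
    times : list of propagation times, times[i] > 0 for edge edges[i]
    start : vertex emitting points along all of its edges at time zero
    T     : observation time
    Returns the number of points that are on some edge at time T.
    """
    E = len(edges)
    incident = {}
    for i, (u, v) in enumerate(edges):
        incident.setdefault(u, []).append((i, v))
        incident.setdefault(v, []).append((i, u))

    def numeric(coeffs):
        return sum(n * times[i] for i, n in enumerate(coeffs))

    tie = count()                                  # unique tie-breaker for the heap
    heap = [(0.0, next(tie), (0,) * E, start)]     # (time, tie, coefficient vector, vertex)
    processed = set()                              # (coefficients, vertex) already handled
    moving = 0
    while heap:
        t_num, _, coeffs, vtx = heapq.heappop(heap)
        if t_num > T or (coeffs, vtx) in processed:
            continue
        processed.add((coeffs, vtx))               # all simultaneous arrivals merge here
        for edge_idx, other in incident[vtx]:      # one point leaves along every incident edge
            new = list(coeffs)
            new[edge_idx] += 1
            new = tuple(new)
            arrival = numeric(new)
            if t_num <= T < arrival:               # this point is still on its edge at time T
                moving += 1
            heapq.heappush(heap, (arrival, next(tie), new, other))
    return moving

# Illustrative H-junction, with the propagation times used in the experiment quoted later:
edges = [("A", "a1"), ("A", "a2"), ("A", "B"), ("B", "b1"), ("B", "b2")]
times = [1.0, 2 ** 0.5, 3 ** 0.5, 5 ** 0.5, 7 ** 0.5]
print(count_moving_points(edges, times, "A", T=20.0))
```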
Taking into account the fact that the smaller terms of the asymptotic expansion of the number of lattice points in a simplex are symmetrically included in the asymptotic expansion of the number of moving points, we can consider the difference between the number of points for the two graphs that are identical from topological point of view, but have different propagation times. This means that two graphs have the same number of vertices and set of edge propagation times, but have different order of edges. We apply this approach for a H-junction (see [9]), i.e. a tree graph Γ H , with five edges and six vertices, two of which are inner vertices and four have valence one. We consider two variants of composition of Γ H from the same set of edges. It is shown that the difference of the number of moving points is of the order T 3 and the leading coefficient of the difference between the number of points is explicitly expressed in terms of propagation times of all edges except a "jumper". It turns out that the second term of the asymptotic number of points depends on where the initial data is taken, therefore we can consider the symmetric difference of the number of moving points over all possible pairs of internal vertices. It turns out that it is of the order T 2 and the leading coefficient is explicitly written out. Computer experiments have been conducted together with O. V. Sobolev, where expressions obtained in section 3 were calculated directly. The results are in accordance with those obtained analytically. The transition to a set of weak inequalities We introduce the following notation. Let E(Γ) = {e i } E i=1 be a set of edges of the graph Γ. Propagation times of the point along the edges E(Γ) are, respectively, Suppose that there is a tree Γ with the root A. Let A be a starting point. Let us recall how dynamics on a graph is constructed. One point moves along the graph at the initial time. In the interior vertices of the graph, it can be divided as follows: if k points came to the vertex of valence v at the same time, then v points will start to move over all the edges (one point on an edge). In vertices of valence one reflection occurs. Edge propagation time for each individual edge is fixed. It is assumed that there are no turning points on the edges. We want to find the number of points moving along the graph Γ, at the time T . We will find times in which new points have been formed, and then sum the number of new points over such times. Each birth time corresponds to a subtree with root in A. The subtree consists of the edges {e i 1 , . . . , e i L }, which the point has passed before returning to A. A set of times of the form {2t i 1 n i 1 + . . . + 2t i L n i L } corresponds to each subtree (the point has passed 2n i times along the edge e i ). If we want to find the number of times that are less than T, it is necessary to find the number of integer points in the simplex {2t i 1 n i 1 + . . . + 2t i L n i L ≤ T, n i > 0}. For instance, for the graph Γ H from section 3 there will be 10 subtrees with one root and 10 subtrees with another root. Thus, the number of points may be represented as a linear combination of numbers of integer points in 20 expanding simplices. However, if we consider slightly larger simplices and allow coordinates to be zero, then there will be fewer simplices: 11 instead of 20. In this section, we will present a formula for the number of points born at the vertex A, where all simplices will be given via weak inequalities. 
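Since the birth times reduce to weak inequalities of the form 2t_{i1} n_{i1} + ... + 2t_{iL} n_{iL} <= T, the corresponding counts are numbers of lattice points in simplices. A naive enumeration, given here purely for illustration (the paper is concerned with exact formulas and asymptotics, not brute-force counting), looks as follows.

```python
def count_lattice_points(coeffs, T, positive=False):
    """Number of integer solutions of c_1*n_1 + ... + c_L*n_L <= T with n_i >= 0
    (or n_i >= 1 when positive=True).

    coeffs   : positive reals; here they are 2*t_i, twice the propagation times, because a
               point returning to the root has crossed each used edge an even number of times
    positive : True matches the simplices attached to individual subtrees, False the slightly
               enlarged simplices with zero coordinates allowed, which reduce the number of
               simplices needed, as described in the text
    """
    if not coeffs:
        return 1 if T >= 0 else 0
    c, rest = coeffs[0], coeffs[1:]
    total, n = 0, (1 if positive else 0)
    while c * n <= T:
        total += count_lattice_points(rest, T - c * n, positive)
        n += 1
    return total

# Example: birth times of the form 2*t1*n1 + 2*t2*n2 <= T with n1, n2 >= 1
t = [1.0, 2 ** 0.5]
print(count_lattice_points([2 * ti for ti in t], 30.0, positive=True))
```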
In the following text n i ∈ N ∪ {0}. Let us define #[system of inequalities on n i ] as the number of solutions of the system. Statement 2.1 The following relation holds: Proof is elementary, by induction. Let N(Γ, A, X, T ) be the number of new points born at the vertex X ∈ Γ to the moment T , with the condition that the point have started from the vertex A ∈ Γ at the initial time. Let the vertex A be incident to edges Theorem 2.1 The following relation holds for the quantity N(Γ, A, A, T ) : where c G is defined as follows. We enumerate edges incident to A, so that G intersect only E(Γ e 1 ), . . . , E(Γ e k ) (deg(A) = n ≥ k). Then where σ j are elementary symmetric polynomials, z i = z i (Γ, A). Proof. Let us consider a tree Γ e 1 such that only one edge e 1 is incident to the root. We find F (Γ e 1 , A, T ), being the number of times less than T , at which a point returns to the root A. A tree corresponds to such time. The tree consists of edges that were engaged in the route. A subtree Γ ′ ⊂ Γ e 1 with edges e 1 , e i 1 , . . . e i k−1 corresponds to a set of times Using the statement 2.1, we obtain: If we interchange the order of summation the right-hand side will have the form of a sum over all subsets of edges G ⊂ {e 1 , . . . e E(Γe 1 ) }: Suppose now that the vertex A is incident to two edges e 1 , e 2 . Γ = Γ e 1 ∪ Γ e 2 . Then, in the similar way: We divide the G = G 1 ∪ G 2 , G 1 = E(Γ e 1 ) ∩ G, G 2 = E(Γ e 2 ) ∩ G, then the last formula will have the form: Similarly, we obtain the proof in the general case. Note that the function z(Γ, ∅, A) can be calculated recursively. Consider that is, we add a term corresponding to the Γ ′ , consisting of a single vertex. Then 1) If the trees Γ 1 and Γ 2 intersect only in a single vertex A, then 2) If only one edge (A, B) is incident the vertex A of the graph Γ 1 and Γ 2 = Γ 1 \ (A, B), then Note that the function z 0 A can be interpreted as a denial of the Sprague-Grundy function (e.g., [8]) for some game, as follows. Consider two players moving down the tree from the root, who make moves in turn. The aim of the game is to make the last move (to get to the vertex of the valence one). Winning strategy: move to the vertices labeled 1. The first player wins the game, using the right strategy, when only when the root is labeled 0. From vertices labeled 1, only vertices labeled 0 can be reached. From vertices labeled 0 there always a possibility to move to 1, therefore, initiative always belongs to the player in the vertex labeled 0, and he can make the last move (i.e., get to the vertex of the valence one). Results for the graph Γ H Consider a H − junction, i.e. a graph Γ H consisting of edges e 1 , e 2 , e 3 , e 4 , e 5 , in which propagation times are t 1 , t 2 , t 3 , t 4 , t 5 respectively. There are only two vertices with the degree 3 in the graph: A, incident to e 1 , e 2 , e 3 , and B incident to e 3 , e 4 , e 5 . Here we assume that all t i are linearly independent over Q. Integers n i are non-negative in the following inequalities, unless otherwise stated. Statement 3.1 The following relation holds: Sum of the times from 1 and 2 gives N(Γ, B, B, (T − t 3 )) (expression derived from N(Γ, A, A, T ) by replacing e 1 by e 4 , e 2 by e 5 , T by T − t 3 ). If we sum these functions we obtain N(Γ, A, B, T ). Assumption 1 Numbers {t 1 , t 2 , t 3 , t 4 , t 5 } are linearly independent over Q. Suppose that for all 1 ≤ m ≤ 4 and all m-element subsets {s i 1 , . . . 
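The verbal description of the game above suggests the following recursive computation of the 0/1 labelling. Since the displayed formulas for z did not survive extraction here, this sketch is an interpretation of the text (leaves labelled 1; an internal vertex labelled 1 exactly when all of its children are labelled 0), not a transcription of the paper's definition, and the example tree is illustrative.

```python
def label(tree, v):
    """0/1 labelling interpreted above as the negation of the Sprague-Grundy function.

    tree : dict mapping each vertex of a rooted tree to the list of its children
    A vertex is labelled 1 when the player who must move from it loses with best play
    (leaves, and vertices all of whose children are labelled 0); otherwise it is labelled 0.
    The first player, starting at the root, wins exactly when the root is labelled 0.
    """
    children = tree.get(v, [])
    if not children:                      # vertex of valence one: no move left
        return 1
    child_labels = [label(tree, c) for c in children]
    return 1 if all(cl == 0 for cl in child_labels) else 0

# Small example tree rooted at 'A' (structure is illustrative, not taken from the paper):
tree = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(label(tree, "A"))   # 0: the first player can move to 'C' (labelled 1) and make the last move
```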
, s im } ⊂ {t 1 , t 2 , t 3 , t 4 , t 5 } the following holds: From the work [7] it follows that the Assumption 1 holds for almost all {t 1 , t 2 , t 3 , t 4 , t 5 }, but there is no rigorous proof of this statement yet. Statement 3.3 For times {t 1 , t 2 , t 3 , t 4 , t 5 }, satisfying the Assumption 1, the following holds: Thus, we can get the leading coefficient for the difference between the number of points on the graphs, obtained from each other by the permutation of edges. For example, if Γ ′ H is obtained from Γ H by the permutation of e 1 and e 5 , then the following statement holds. Statement 3.4 For times {t 1 , t 2 , t 3 , t 4 , t 5 }, satisfying the Assumption 1, it holds that: One can note that the difference of the number of moving points is of the order T 3 and the leading coefficient of the difference of the number of points is explicitly expressed in terms of propagation times of all edges except a "jumper", i.e. t 3 . Suppose that for all 1 ≤ m ≤ 4 and all m-element subsets {s i 1 , . . . , s im } ⊂ {t 1 , t 2 , t 3 , t 4 , t 5 } the following holds: Let us denote internal vertices of one variant of the composition of Γ H by A and B, and of the other variant by X and Y . Since the second term of the asymptotic number of points depends on the initial position of the point, we can consider different variants of differences of the number of points depending on which two vertices are fixed. To eliminate the arbitrary choice of pairs, we can take the sum of the differences of the number of points over all possible permutations. It turns out that such symmetric difference is of the order T 2 , not T 3 . More precisely, the following proposition holds. Statement 3.5 If we denote the difference between the number of points issued from the vertex A of the first graph and the number of points issued from the vertex X of the second graph as d(A, X), than for times {t 1 , t 2 , t 3 , t 4 , t 5 }, satisfying the Assumption 2, it holds that: 1 The proof is analogous to that of the previous statements and comes down to the consideration of a system of weak inequalities. A computerbased experiment has been carried out, in which expressions st out in the Statements 3.4 and 3.5, were obtained by direct calculation. The graph Γ H was taken, with edge propagations times: t 1 = 1, t 2 = √ 2, t 3 = √ 3, t 4 = √ 5, t 5 = √ 7. Points were coming out of one of the two internal vertices at the initial moment. The results of the experiment are in accordance with the analytical statements. The experiment was carried out in cooperation with O. V. Sobolev.
Revealing the control of migratory fueling : An integrated approach combining laboratory and field studies in northern wheatears Oenanthe oenanthe Migratory birds rely on fueling prior to migratory flights. Fueling in migrants is controlled by intrinsic as well as extrinsic factors. From captive studies we have started understanding the internal mechanisms controlling bird migration. Field studies have demonstrated the effects of external factors, such as food availability, weather, competitors, parasites or diseases, on the stopover behavior of migrants. However, an integrated approach is still missing to study coherently how the innate migration program interacts with the varying environmental cues and to estimate the contribution of the innate migration program and the environment to realized migration. The northern wheatear Oenanthe oenanthe offers a unique opportunity for integrated studies. It breeds across almost the whole Holarctic with just a “gap” between eastern Canada and Alaska. All breeding populations overwinter in sub-Saharan Africa which makes the northern wheatear one of the most long-distant migratory songbirds with extraordinary long non-stop flights across oceans. It is a nocturnal migrant which travels without parental or social aid/guidance. Thus, young birds rely entirely on endogenous mechanisms of timing, route selection and fueling on their first outbound migration. By establishing indoor housing under controlled conditions the endogenous control mechanisms of northern wheatear migration could be revealed. At the same time, environmental factors controlling fueling could be investigated in the field. On migration wheatears occur in a variety of habitats with sparse vegetation where their stopover behavior could be quantitatively studied in the light of “optimal migration” theory by the use of remote balances, radio-tagging and even experimentally manipulated food availability. The present paper summarizes our approach to understand the control of migration in northern wheatears by combining field and laboratory studies at various spatial and temporal scales, and linking various sub-disciplines [Current Zoology 59 (3): Introduction Since the pioneering work by Eberhard Gwinner and Peter Berthold we have learned that migratory songbirds dispose of innate dispositions for migratory restlessness and migratory fueling (for reviews see Gwinner, 1986;Bairlein and Gwinner, 1994;Berthold, 1996;Gwinner, 1996).Hand-reared naïve captive birds reveal nocturnal restlessness as well as migratory body mass gain roughly at the same time as their wild conspecifics despite living in an artificial environment without external cues.In addition, between-species and within-species variation of the amount of migratory restlessness reflect overall migration distance.In line with this, short-distance migratory songbirds accumulate smaller amounts of migratory fuel than long-distance migrants.Migratory fueling is mainly achieved by increasing food intake, increasing assimilation efficiency of ingested food, selecting particular diets and nutrients, and through metabolic and physiological adjustments (Bairlein, 1990;Ramenofsky, 1990;Bairlein, 2002;Bairlein, 2003;Jenni-Eiermann and Jenni, 2003;Ramenofsky et al., 2003;McWilliams et al., 2004;Ramenofsky and Wingfield, 2006;Lyons et al., 2008). 
Despite these fundamental findings and the optimal migration theory (Alerstam and Lindström, 1990), comparatively little is known about external factors such as food availability, weather, competitors, parasites or diseases that might influence migration in particular with respect to migratory fueling in free-ranging birds (e.g.Moore and Kerlinger, 1987;Moore and Yong, 1991;Moore et al., 2003).Moreover, this has not yet been resolved in an integrated approach linking captive studies revealing the innate migration program with field studies demonstrating the realized migration behavior.Understanding the interplay between genes (intrinsic factors) and environment (extrinsic factors) is crucial for understanding the adaptability of migrating birds in a rapidly changing world. The Wheatear Model As the model species for an integrative approach to study migratory fueling we selected a small songbird, the northern wheatear Oenanthe oenanthe (thereafter wheatear).The species has a nearly circumpolar distribution and presents a fascinating migration system, as all breeding wheatears spend the northern winter in northern sub-Saharan Africa.It was speculated for a long time (Conder, 1989) but only recently revealed by light-level geolocation (Bairlein et al., 2012) that even the Canadian and Alaskan breeding birds spend the non-breeding season in Africa.The latter show a migration distance of annually about 30,000 km, the longest of any songbird reported so far.The wheatear is a typical nocturnal migrant when flying over land, but needs to perform long non-stop flights when flying over water. On migration it occurs in a variety of habitats including meadows, arable land, beaches and other habitats with sparse vegetation (Glutz von Blotzheim and Bauer, 1988;Cramp, 1988).In its western breeding range, three subspecies are distinguished.The subspecies seebohmi is confined to the Atlas Mountains of Morocco while the nominate oenanthe wheatear (thereafter oenanthe wheatear) breeds in Great Britain and in an area ranging from continental Europe via Siberia as far east as Alaska (Cramp, 1988).The 'Greenland Wheatear' O. o. leucorhoa (thereafter leucorhoa wheatear) breeds on Iceland, Greenland and in eastern Canada.It is one of the few passerine migrants regularly covering distances of more than 1,000 km over sea. During both fall and spring migration, the two northern subspecies occur together at stopover sites in northern and western Europe including the small German island of Helgoland in the North Sea.There, oenanthe wheatears of Scandinavian origin mingle with leucorhoa wheatears breeding in Greenland and Iceland (Dierschke and Delingat, 2001).Whereas Scandinavian birds face sea crossings of 50-500 km when flying towards the East and North, much longer flights are necessary for leucorhoa wheatears to reach stopover sites in Scotland (c.800 km) or the breeding areas (up to 4,000 km) (Schmaljohann et al., 2011;Schmaljohann and Naef-Daenzer, 2011;Bairlein et al., 2012). Field studies on wheatears are facilitated by it being a species of open landscapes and comparatively easy to catch using baited spring traps.Once color-banded, they are easy to observe at stopover sites owing to their habitat choice and visibility (Dierschke and Delingat, 2001).Moreover, they can be attracted to remote-controlled baited balances placed in their habitats so that data on refueling can be gathered without re-trapping (Fig. 
1; Schmaljohann and Dierschke, 2005).Wheatears can also be easily kept in captivity under controlled conditions to study the endogenous basis of their migratory behavior (Maggini and Bairlein, 2010).Furthermore, captive breeding allows estimating the heritability of migratory traits.These circumstances and the general habits of wheatears provide an unique opportunity for taking a comparative approach for examining intrinsic disposition and extrinsic factors that control stopover behavior and decisions of a long-distance migrant species. Innate Migration Program The migration behavior of wheatears is governed by innate mechanisms.Migratory activity, as revealed by nocturnal migratory restlessness as well as migratory fueling, are under endogenous control (Maggini and Bairlein, 2010;Bulte and Bairlein, 2013).Hand-reared naïve birds, taken from wild nests and kept individually in controlled indoor conditions at a constant photoperiod of 12 hrs light and 12 hrs dark, constant tempera-ture and constant food, revealed seasonal body mass variation corresponding to the time of migration and fueling in wild birds (Fig. 2; Maggini and Bairlein, 2010). Fig. 2 Seasonal variation of fuel load of first year handraised captive Icelandic northern wheatears Oenanthe oenanthe leucorhoa [after Maggini and Bairlein (2010)] Interval P1 starts at the age of 60 days when the birds were transferred from LD14:10 to LD 12:12. In addition, the pattern and amount of migratory body mass gain is population-specific.Keeping wheatears from different populations with different migration routines in identical captive condition, so-called "common garden" experiments, revealed fueling which reflects their population-specific differences in migration routes and strategies.Icelandic birds showed a greater increase of their body mass in fall than Norwegian or Moroccan birds (Maggini and Bairlein, 2010).This indicates preparation for the initial ecological barrier crossing in Icelandic birds which is absent in the two other populations (Maggini and Bairlein, 2010).In all three populations, body mass increased to a greater extent in fall than in spring, whereas nocturnal activity was higher in spring than in fall (Maggini and Bairlein, 2010).This suggests that the endogenous program responds to specific seasonal needs, with more time invested in storing fuel for the journey in fall than in spring and more time invested in flying to reach the migratory goal faster in spring than in fall.Contrary to expectations, the timing of onset of body mass increase and nocturnal restlessness in spring did not differ between populations (Maggini and Bairlein, 2010).This might be explained by the lack of external cues, most likely photoperiod, which are responsible for the fine tuning of the expression of migratory behavior (Gwinner, 1986).When we kept the birds under a simulated photoperiod that reflected the one they would have experienced during migration in the wild, differences in seasonal onset of body mass gain and migratory restlessness between the populations became more evident than under constant photoperiod (Maggini, 2009).These observations confirm that there is a strong population-specific endogenous control of the events relating to migration in wheatears which does not depend on a changing photoperiod, though photoperiod may finetune migration in the natural world (Gwinner, 1986).They also gave evidence that overall migration distance is not the only factor driving selection on the evolution of endogenous population-specific 
differences of migratory traits (Gwinner, 1986), but the geographic components such as the presence of an ecological barrier plays a role too (Maggini and Bairlein, 2010). Wheatears breeding in Alaska travel for 14,500 km across Asia and the Arabian Peninsula to winter in eastern Africa (Bairlein et al., 2012).When kept in captivity indoors in the same setting as the other populations, Alaskan birds also revealed spontaneous seasonal patterns of migratory fueling and nocturnal migratory restlessness (Bulte and Bairlein, 2013).However, as compared to the other populations, their amount of fall migratory restlessness was significantly higher.In comparison with birds from Iceland, Alaskan birds showed a higher peak value and a longer lasting period of migratory restlessness.Hence, the amount of migratory restlessness is positively correlated with the length of the migration route of the corresponding wild populations.These results are in agreement with the findings in other migrants (for review see Berthold, 1996).However, the amount of migratory fueling was much smaller than expected for the extremely long migration distance in Alaskan wheatears.These birds showed just half the amount of fuel load observed in Icelandic birds (Fig. 3; Bulte and Bairlein, 2013).However, the patterns of migratory fueling during fall differed between both populations.While Icelandic wheatears exhibited a steep fueling increase and high levels of fueling early in the season, Alaskan birds started with low fueling rates and reached their highest fuel load later in the season (Bulte and Bairlein, 2013).This relates most likely to differences in their migratory challenges.The Icelandic wheatears have to cross a large part of the North Atlantic during early migration, while the Alaskan birds migrate mostly over benign land that offers feeding opportunities en route.At the end of fall migration both populations have to cross the Sahara desert.This appears to be also reflected in their similar fueling pattern towards the end of their fall migrations.Thus, while the endogenous pattern of migratory restlessness corresponds to the migratory distance, migratory fueling appears to reflect the environmental conditions the populations are facing during their journeys. In many migratory bird species, males arrive at the breeding grounds before females, and wheatears are no exception.Male wheatears migrate earlier in spring (Spina et al., 1994, Dierschke et al. 2005) and arrive at the breeding grounds earlier than females (Currie et al., 2000;Pärt, 2001).The evolutionary causes of protandry have been debated quite rigorously (Morbey and Ydenberg, 2001;Coppack and Pulido, 2009), but it remained open whether protandry has an endogenous component as well.We were able to show that captive male wheatears kept under constant conditions for their first year of life started their spring migratory activity and migratory fueling significantly earlier than females, even in the absence of environmental cues (Maggini and Bairlein, 2012).This indicates that protandry in the wheatear has an endogenous basis. 
Cost of migration Migrants spend less time and energy during flight than during stopover.The ratios were theoretically predicted to be close to 1:7 (time) and 1:2 (energy), see (Hedenström and Alerstam, 1997).Extrapolation from a field study using doubly labeled water indicated that the energy expenditure during flight represented approximately 30% of the total energy expenditure during the entire migration (Wikelski et al., 2003).Estimating the time and energy costs of the entire migration requires a high spatiotemporal resolution of migration and information about the meteorological conditions encountered en route.Such analyses have been impossible for small birds until the recent miniaturization of light-level geolocators.As movements with strong latitudinal components are less convenient to investigate than longitudinal movements (Hill 1994) East-West migration offers a better opportunity to locate migratory routes and stopover sites on a fairly accurate spatiotemporal scale (Bairlein et al., 2012;Schmaljohann et al., 2012b).For the Alaskan wheatears, which have a strong longitudinal component in their migration, we provided the very first estimates of the time and energy devoted to the flight and stopover stages on the entire migration.To do so, we modeled the total time and energy costs of migration for flying and resting by considering different physiological and aerodynamic approaches and the daily environmental conditions en route (Schmaljohann et al., 2012b).The ratio of time in migratory flight (on average 306 h) to time on the ground (1954 h) in fall was 1:6.35 (Schmaljohann et al., 2012b), close to the theoretical predictions (Hedenström and Alerstam, 1997).In spring, this ratio was 1:3.25 (Schmaljohann et al., 2012b). Calculating the energy costs for flying for the entire migration depends very much on the model chosen.Energy models and aerodynamic models revealed total flying costs of 2,000 to 5,500 kJ.Using a body mass model, the birds lost on average 115 g during their entire migratory flights, equivalent to 2,570 kJ on a dietary protein/fat ratio of 10:90 and 3,199 kJ on a protein/fat ratio of 5:95.For the entire time on the ground (stopover), the total energy costs were 5,085 kJ, resembling a total loss of 306 g (Schmaljohann et al., 2012b).The total energy cost for the entire fall migration appears to be divided between flying and stopping over at a ratio of approximately 1:2 (Schmaljohann et al., 2012b), which is close to theoretical considerations (Hedenström and Alerstam, 1997) and extrapolation of a field study (Wikelski et al., 2003). The total energy costs (flight and stopover combined) relative to the distance covered were significantly lower in spring than in fall.In spring, the bias towards energy and time costs during stopover diminished, indicating that the time for stopover was minimized, leading to an overall faster and energetically more economic migration with lower energy costs per migration unit in spring than in fall (Schmaljohann et al., 2012b). Optimal migration strategies? 
From an evolutionary point of view migratory birds should minimize either the time spent on migration or their total energy expenditure, with predation risk as a further criterion to be considered (Alerstam and Lindström, 1990;Alerstam, 2011).The higher the fuel deposition rate the faster birds obtain the necessary fuel load for their next migratory stage.A major determinant of the overall migration speed is, hence, the fuel deposition rate of the bird.A high fuel deposition rate reduces the total time spent for stopover, which in turn minimizes the overall time of migration (Alerstam and Lindström, 1990;Lindström and Alerstam, 1992).Time-minimizers experiencing a high fuel deposition rate are expected to exploit the stopover site and depart with high fuel loads. If, however, their fuel deposition rate is low they resume migration regardless of fuel load.Consequently, departure fuel load is positively correlated with fuel deposition rate (Alerstam and Lindström, 1990;Lindström and Alerstam, 1992).In contrast, birds that minimize the overall energy costs of transport should depart from a stopover site independently of fuel deposition rate and stopover duration, but just carrying as much fuel as required for the next flight stage (Hedenström and Alerstam, 1997).Thus, the correlation between departure fuel load and fuel deposition rate may reveal the basic strategy a migrant is following on its journey.We found indications that wheatears may differ in their migration strategy.In spring male leucorhoa wheatears behaved on Helgoland as expected for time-minimizers, whereas in leucorhoa females fuel deposition rate and departure fuel load did not correlate significantly indicating in general an energy saving strategy (Fig. 4; Dierschke et al., 2005;Delingat et al., 2006).However, departure fuel loads of leucorhoa females were higher than predicted for the minimization of overall energy costs of transport.Though sample size is small for oenanthe wheatears, spring data support predictions for energy minimization (Fig. 4).During fall migration first year wheatears of both subspecies behaved accordingly to the time minimization strategy both when leaving Iceland (Delingat et al., 2008) and Helgoland (Schmaljohann and Dierschke, 2005).In contrast, first outbound Alaskan wheatears behaved as expected for energy minimizers (minimization of the total energy cost of migration; Schmaljohann et al., 2013).Although departure fuel load was independent of fuel deposition rate and hence, in general accordance with an energy-minimization strategy, the juvenile Alaskan wheatears in fall and leucorhoa females in spring carried all considerable surplus fuel load at departure which was several times higher than would be While departure fuel load did not correlate significantly with daily fuel deposition rate in oenanthe and leucorhoa females respectively, reflecting a mostly energy minimizing migration strategy, the significant correlation in leucorhoa males reveals their time-minimizing strategy. 
A similar phenomenon was observed at Fair Isle, where leucorhoa wheatears heading to their breeding areas in spring carried a higher fuel load than necessary for the upcoming Atlantic crossing (Delingat et al., 2008). This speaks against the minimization of overall energy costs of transport, as carrying a surplus of fuel load is energetically costly during flight (Hedenström and Alerstam, 1997). Surplus fuel load enables migrants to by-pass future stopover sites, which is typical for time-minimizers (Alerstam and Lindström, 1990; Gudmundsson et al., 1991; Weber et al., 1994; Hedenström and Alerstam, 1997; Dierschke et al., 2005). It should be considered that differences between time- and energy-minimizing strategies may be only small if search and settling times/costs are low (Alerstam and Lindström, 1990; Hedenström and Alerstam, 1997). These data may suggest that exploration times at new stopover sites are relatively short in wheatears (Delingat et al., 2006; Schmaljohann et al., 2012b).

The case of the wheatear demonstrates that even individuals within the same species do not necessarily behave according to the same optimal migration strategy. The differences in strategy may be related to season, sex and subspecies, i.e., migration route or the type of barriers to be crossed.

Fueling and stopover

Birds spend up to about 85% of the entire migration period at stopovers in order to store or to replenish fuel for the next flight (Hedenström and Alerstam, 1997; Schmaljohann et al., 2012b). Consequently, understanding stopover, and how birds adjust stopover decisions with respect to their migration strategy, is crucial to an understanding of how migrating birds organize their journey. The rate of fuel deposition and the departure fuel load are the two major determinants affecting departure decisions (Alerstam and Lindström, 1990).

Mean fuel loads of wheatears at various stopover sites in western Europe were found to be rather low as long as no significant ecological barrier is encountered. Flight range estimates suggest that these wheatears most likely refuel daily after each nocturnal flight, depositing enough fuel for some five to seven hours of flight, equivalent to a nocturnal flight range of 230 to 330 km (Delingat et al., 2006, 2008). Leucorhoa wheatears carried higher fuel loads than oenanthe wheatears, but differences were moderate, suggesting that leucorhoa wheatears also refuel after each nocturnal flight when crossing continental Europe. It appears that in wheatears selection acts on migratory behavior to favor a 'numerous-stops-and-flights' strategy over continental Europe (Delingat et al., 2006).

However, when facing an open sea crossing, migrants are faced with the need to refuel generously. On Helgoland, departure fuel loads of wheatears are significantly higher than arrival fuel loads, and individual leucorhoa birds exhibit fuel loads of more than 100% of lean body mass (Fig. 5; Dierschke et al., 2005; Delingat et al., 2006).
Moreover, oenanthe wheatears depart from Helgoland northbound with less fuel aboard, and they are less selective for weather conditions at departure than leucorhoa wheatears, possibly because the latter face longer migration distances and a more extensive sea crossing en route to their breeding areas (Dierschke and Delingat, 2001; Dierschke et al., 2005; Schmaljohann et al., 2011; Schmaljohann and Naef-Daenzer, 2011). In contrast, in Alaskan wheatears, which face only a short sea crossing across the Bering Strait to the Russian mainland in fall, the departure probability increased with evening fuel load (Schmaljohann et al., 2013). Although most outbound young Alaskan wheatears resting on the American side of the Bering Strait carried sufficient fuel load for the short sea crossing, they performed rather lengthy stopovers to put on even more fuel, likely as a safety margin for the subsequent Taiga crossing (cf. Schmaljohann et al., 2012b).

Oenanthe and leucorhoa wheatears also differ in the extent of stopover and stopover duration. During spring passage on Helgoland, only 9% of male and 14% of female oenanthe resided on the island for more than one day, while in leucorhoa 40% of males and 30% of females stayed for at least one day (Dierschke and Delingat, 2001). The difference between subspecies was significant for both sexes. However, among birds not departing on the day of ringing, the stopover duration did not differ significantly between subspecies. Still, most oenanthe wheatears stayed for only one day, while most long-stayers were leucorhoa wheatears (Dierschke and Delingat, 2001).

Fueling and predation risk

Predation risk affects bird migration behavior (Lank et al., 2003). According to optimal migration theory, predation risk is assumed to be mass-dependent (Lind et al., 1999; Kullberg et al., 2000), but see Dierschke (2003). Hence, the optimal departure fuel load might be affected by predation risk in such a way that birds depart with lower fuel loads than predicted for time-minimizers, as predicted by stochastic dynamic modeling (Weber et al., 1998). In wheatears, predation risk did not directly influence the birds' departure decision, as predation risk did not differ between days when birds stayed on the island and days when birds decided to resume migration (Dierschke and Delingat, 2001; Schmaljohann and Dierschke, 2005). However, predation risk affected their fuel deposition rate: higher predation risk was associated with a lower fuel deposition rate (Fig. 6; Schmaljohann and Dierschke, 2005). As wheatears behave as time-minimizers in fall, a reduced fuel deposition rate is supposed to increase the bird's departure probability (see above, Optimal migration strategies). Hence, predation risk may influence stopover decisions indirectly. However, departure fuel load was independent of the cost-benefit relation between predation risk and fuel deposition rate (Schmaljohann and Dierschke, 2005). Thus, it seems that predation risk does not necessarily modulate the optimal migration strategy adopted by a species at a stopover site.
Fueling and social rank In migrants defending territories during stopover, social status can determine stopover behavior and fueling (Moore et al., 2003, and references therein).In wheatears during spring stopover on the island of Helgoland, social status dictated territorial behavior but the consequences of social status for fueling depended on food availability (Dierschke and Delingat, 2001;Dierschke et al., 2005;Arizaga et al., 2011).During spring stopover passage dominant birds defended territories while subordinates revealed extended vagrancy.In springs with low food abundance subordinates tended to have lower foraging rates, fly less and stopover at the site for shorter periods and revealed smaller fuel loads (on average 11% of lean mass) than dominants (22%), irrespective of sex and subspecies (Arizaga et al., 2011).However, fuel deposition rate did not differ between territorial and non-territorial birds when food was not limited.In such years the non-territorial subordinates compensated for restricted access to food resources with a more efficient exploitation by taking more food per unit time leading to the same energy intake as that of dominant and territorial birds (Dierschke et al., 2005).Moreover, although male wheatears were more often territorial and on average attained a higher social rank than females, this did not result in higher refueling rates.Thus, there appears no evidence for competition between the sexes leading to differential timing of migration of male and female wheatears. Fueling and weather In the context of optimal migration strategies, the momentary individual decision to depart from a certain stopover site at the day-to-day level is modulated by environmental cues as well.Hence, migrants' short-term departure decisions might overrule species' general optimal migration strategies.In oenanthe wheatears, birds depart from Helgoland under more cloudy skies than leucorhoa wheatears in spring (Dierschke and Delingat, 2001).This is possibly a strategy adopted by leucorhoa wheatears to reduce the probability of encountering rain during the long sea crossing.An even more important variable is wind (Liechti and Bruderer, 1998).As wind speed is approximately the same order of magnitude or even greater than songbirds' air speed, the choice of favorable wind conditions for flying has a major effect on the birds' flight range (Liechti and Bruderer, 1998;Liechti, 2006).Departing at low wind speed was hypothesized to be a generally successful strategy (Liechti, 2006) because even if the wind direction changes with altitude, due to topographic modulation, the bird will only be flying with a slow headwind which is not unfavorable (Erni et al., 2002).Wheatears avoid strong headwinds, and they time their departure with favorable wind conditions when having generally low fuel loads indicating their capacity to take account of wind conditions (Delingat et al., 2008;Schmaljohann and Naef-Daenzer, 2011). 
Regarding fuel deposition rate, the temperature on the ground is an important factor influencing food abundance (airborne insects) for insectivorous birds, which in turn affects the fuel deposition rate (Schmaljohann and Dierschke, 2005). Additionally, energy costs on the ground rise with decreasing ambient temperature (Wikelski et al., 2003; Schmaljohann et al., 2012b). Hence, resuming migration at relatively low temperatures can be a reaction to unfavorable feeding conditions and/or to the increasing energy costs on the ground (Schmaljohann et al., 2012b). In Alaskan wheatears, the probability of departing from a stopover site increased significantly with decreasing surface temperature (Schmaljohann et al., 2013). Alternatively, a decrease in temperature may indicate a change in air pressure and wind conditions, which often coincide with departure decisions (Liechti, 2006).

Using data from Alaskan wheatears tracked with light-level geolocators, we characterized the meteorological conditions, surface temperature, surface wind speed and surface precipitation for each individual noon and midnight fix. In fall, the departure decision was significantly associated with lower surface temperature and lower surface wind speed, whereas stopover was preferred at higher surface temperatures and higher surface wind speeds. None of the variables considered played a significant role in spring (Schmaljohann et al., 2012b).

Fuel load and nocturnal departure

When nocturnal migrants leave for their migratory flights, and whether nocturnal departure times are organized with respect to body condition, environmental cues (wind), the length of the night and the remaining migration distance, is poorly known. This is, however, crucial in order to determine the potential nocturnal flight duration. Early nocturnal take-off and flight until sunrise maximize the migrants' nocturnal travel range which, as a seasonal average, defines the overall number of stopovers needed during migration. Because more time is spent on the ground than flying (Schmaljohann et al., 2012b), the total number of stopovers significantly contributes to the overall speed and costs of migration (Alerstam and Lindström, 1990; Hedenström and Alerstam, 1997). In general, we found that take-off occurred after the end of nautical twilight (Schmaljohann et al., 2011; Schmaljohann and Naef-Daenzer, 2011), when the skylight polarization pattern may be used to calibrate the birds' compass systems (Cochran et al., 2004; Muheim et al., 2006; Chernetsov et al., 2011; Schmaljohann et al., 2013).

In leucorhoa wheatears departing from Helgoland, fuel load and the northward component in the departure direction each explained 20% of the variation in the nocturnal take-off time (Schmaljohann and Naef-Daenzer, 2011). Lean birds might depart either early or late at night to aim for nearby stopover sites and possibly decide several times during the night whether departure conditions are sufficient to set off from Helgoland. Leucorhoa wheatears with high fuel loads, i.e., long potential flight vectors flying in the principal seasonal migration direction, may have a shorter time window for their departure decision, during the first half of the night only (Schmaljohann and Naef-Daenzer, 2011).
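Departure-decision analyses such as those discussed in this section typically treat each bird-day as a binary outcome (stay vs. depart) and relate it to the conditions experienced. The snippet below is a minimal, illustrative sketch of such a model in Python using the statsmodels package; the variable names and simulated data are placeholders, not the wheatear data or the models actually fitted in the studies cited above.

```python
# Minimal sketch: departure probability as a function of surface temperature
# and wind speed, modeled as a logistic regression (one row per bird-day).
# Data are simulated placeholders, not the wheatear data set.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "temperature": rng.normal(5, 4, n),   # surface temperature (deg C), assumed
    "wind_speed": rng.gamma(2, 2, n),     # surface wind speed (m/s), assumed
})
# Simulate a pattern like the one reported for fall: departure more likely
# at lower temperatures and lower wind speeds.
logit = 0.5 - 0.15 * df["temperature"] - 0.20 * df["wind_speed"]
df["departed"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit("departed ~ temperature + wind_speed", data=df).fit(disp=False)
print(model.summary())
```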
In contrast to European wheatears, which show a wide scatter of nocturnal departure times, Alaskan wheatears departed within a relatively small time window shortly after sunset, and at a relatively high sun elevation, from an Alaskan stopover site in fall (Schmaljohann et al., 2013). The general time window in which migrants can decide to depart is smaller when nights are shorter. Thus, a simple rule might be to take off early when nights are short. This overall pattern is modified by the fact that nocturnal departure time is also influenced by body condition (Schmaljohann and Naef-Daenzer, 2011).

Fuel load and departure direction

Migrants should depart from a stopover site in the seasonally appropriate migration direction (Åkesson et al., 2001; Åkesson et al., 2002; Schmaljohann et al., 2013). However, cage experiments have shown that lean birds orient away from a barrier, whereas physically fit migrants are less likely to detour from the principal migratory direction if a barrier is ahead (Sandberg, 1994; Sandberg and Moore, 1996; Sandberg, 2003; Deutschlander and Muheim, 2009). Such behavior could explain the frequently observed phenomenon of reverse migration, in which birds fly in seasonally inappropriate migratory directions (Lewis, 1939; Alerstam, 1978; Åkesson et al., 1996; Åkesson, 1999; Phillips, 2000; Williams et al., 2001; Zehnder et al., 2002; Komenda-Zehnder et al., 2002). We showed that free-flying leucorhoa wheatears departed with headings towards their breeding grounds only with sufficient fuel load aboard, while lighter birds did so only under favorable tail wind conditions (Fig. 7; Schmaljohann and Naef-Daenzer, 2011). With high fuel loads and favorable wind conditions, birds were likely to depart for a long non-stop flight across the sea. This choice represents a risky but direct (and thus fast) migratory route towards the breeding areas on Iceland, Greenland and in eastern Canada. In contrast, birds that set off under unfavorable conditions, i.e., low fuel load and bad weather, flew a safer route towards the nearby mainland within a 50-100 km range. As the minimum sea barrier to be crossed correlated with the physical condition of the birds, visual cues were likely used in the departure direction decision (Schmaljohann and Naef-Daenzer, 2011). Adaptive behavioral adjustments of migratory direction are critical for crossing ecological barriers (Alerstam, 2001; Henningsson and Alerstam, 2005). The leucorhoa wheatear's variation of departure direction in relation to fuel load and wind conditions reveals the capacity for such behavioral responses (Schmaljohann and Naef-Daenzer, 2011) and indicates that the relevant phenotypic trait was a behavioral response to both internal information (body condition) and external information (wind support). Furthermore, leucorhoa wheatears incorporate a physiological safety margin, in terms of fuel, when selecting a route for their next migration stage (Schmaljohann and Naef-Daenzer, 2011).
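The maximum flight ranges referred to above (and in Fig. 7) are typically derived from relative fuel load with a logarithmic flight-range equation and then adjusted for wind support. The sketch below illustrates the general form of such a calculation in Python; the range coefficient, airspeed and wind values are illustrative assumptions, not the parameters used in the wheatear studies.

```python
# Minimal sketch: potential flight range from relative fuel load, with a
# simple wind adjustment. Constants are illustrative, not from the studies.
import numpy as np

RANGE_COEFF_KM = 2000.0   # assumed scaling constant c in Y = c * ln(1 + f)
AIRSPEED_KMH = 47.0       # assumed wheatear airspeed

def still_air_range_km(fuel_load: float) -> float:
    """Flight range Y = c * ln(1 + f), with f = fuel load relative to lean body mass."""
    return RANGE_COEFF_KM * np.log(1.0 + fuel_load)

def wind_adjusted_range_km(fuel_load: float, wind_support_kmh: float) -> float:
    """Scale the still-air range by the ratio of ground speed to airspeed."""
    ground_speed = AIRSPEED_KMH + wind_support_kmh
    return still_air_range_km(fuel_load) * ground_speed / AIRSPEED_KMH

if __name__ == "__main__":
    for f in (0.1, 0.5, 1.0):   # 10%, 50%, 100% of lean body mass
        print(f"fuel load {f:.0%}: "
              f"{still_air_range_km(f):.0f} km still air, "
              f"{wind_adjusted_range_km(f, 10.0):.0f} km with 10 km/h wind support")
```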
Fueling and corticosterone

Several studies have found that exogenous corticosterone affects food intake and even fattening. However, whether endogenous corticosterone actually facilitates migratory fueling in wild birds is at present unclear, and results are contradictory (Eikenaar et al., 2013, and references therein). We therefore conducted a study with wheatears on Helgoland, measuring corticosterone levels in both oenanthe and leucorhoa birds during their simultaneous spring stopovers and relating them to their rate of fueling. If corticosterone promotes refueling, we expected that (a) leucorhoa wheatears should have higher corticosterone levels than oenanthe wheatears, because leucorhoa birds deposit more fuel, more rapidly, than oenanthe birds, and (b) fuel deposition rate should be positively correlated with corticosterone level. However, our results did not reveal a stimulating effect of corticosterone on migratory fueling in wheatears (Eikenaar et al., 2013). Corticosterone levels were lower in leucorhoa than in oenanthe wheatears, and the actual fuel deposition rate was negatively correlated with corticosterone level. We also observed a positive correlation between corticosterone level and fuel stores. These findings suggest that, rather than promoting migratory fueling, corticosterone may function as a readiness cue, with levels increasing towards departure from the stopover site (Eikenaar et al., 2013), as suggested previously by Landys-Ciannelli et al. (2002) and Lohmus et al. (2003).

Conclusions

Avian migratory behavior has been studied for centuries, either in the wild or in captivity (e.g. Berthold, 1996, 2001; Alerstam, 2008). However, in only a few species have both field and laboratory studies been integrated. On the one hand, there are various studies exploring endogenous migratory traits in small songbirds in captive settings. On the other hand, migratory behavior in the wild is mainly studied at the individual level in larger species, because they are more easily observed.

We aimed to overcome these challenges with the northern wheatear by combining tracking technologies, experimental stopover studies, common garden experiments and captive breeding. These approaches provide the opportunity to compare behavior in detail at various stopover sites along a species' migration route, indicating how birds manage their journeys under free-ranging natural conditions, confronted with various environmental challenges, within the framework of innate behavioral and physiological predispositions. By this, a significant contribution to a better understanding of the so-called "migratory syndrome" (Piersma et al., 2005) could be made.
A common feature of the syndrome is fueling to accommodate infrequent and often unpredictable opportunities to feed. As shown in the wheatear, it is a complex trait which is under internal control but influenced by environmental conditions. Understanding it requires a comparative, integrated approach that investigates the genetic and physiological architecture by linking field and captive studies. This is of biological importance with conservation applications, as many migratory species are in serious decline. However, fueling does not only shape and determine the migratory journey; it can also carry over, as breeding success depends to a large degree on body reserves obtained already on the wintering grounds or during stopover (e.g. Bairlein and Henneberg, 2000; Smith and Moore, 2003; Drent et al., 2006). Therefore, effective conservation of migratory species needs knowledge about when, where and to what extent fueling is required. Furthermore, understanding the interplay between internal and environmental control of migratory behavior may also have implications for a better understanding of the micro-evolutionary consequences of climate-driven changes in migratory birds (e.g. Bairlein and Hüppop, 2004; Pulido and Berthold, 2004, 2010).

Fig. 1 Color-ringed juvenile northern wheatear perched on a digital balance placed in the field in Wales, Alaska (H. Schmaljohann)

Fig. 3 Seasonal pattern of average fuel load (in relation to lean body mass) of Alaskan wheatears (filled squares) and Icelandic northern wheatears (open squares). Error bars: 95% confidence interval of the mean value. P1-P12: five-day periods since the start of the experiment in mid-August. The differences are significant (repeated measures ANOVA, F = 17.45, P < 0.001; after Bulte and Bairlein, 2013).

Fig. 4 While departure fuel load did not correlate significantly with daily fuel deposition rate in oenanthe and leucorhoa females, respectively, reflecting a mostly energy-minimizing migration strategy, the significant correlation in leucorhoa males reveals their time-minimizing strategy.

Fig. 5 Box plot of arrival fuel load (open boxes, AFL) and departure fuel load (grey boxes, DFL) of leucorhoa and oenanthe northern wheatears on Helgoland during spring passage. In both subspecies, departure fuel load was significantly higher than arrival fuel load (Wilcoxon rank sum tests: P < 0.0007). Arrival fuel load did not differ significantly between the subspecies (Wilcoxon rank sum test: W = 121, P = 0.34). Departure fuel load was significantly higher in the leucorhoa subspecies than in the oenanthe northern wheatears (Wilcoxon rank sum test: W = 12, P = 0.0002) [after Dierschke et al. (2005)]. Fuel load is given relative to the lean body mass of the birds.

Fig. 6 Difference in total fuel deposition rate between northern wheatears that perceived a low average predation risk (< 1 raptor flyover per hour) and those that perceived a high average predation risk (> 1 raptor flyover per hour) during their stopover on Helgoland (Wilcoxon rank sum test: W = 36, P = 0.007; sample sizes are given within the boxplot). Only northern wheatears staying more than 3 days beyond the day of arrival were considered (after Schmaljohann and Dierschke, 2005).

Fig. 7 Departure directions over the maximum flight range of Greenland/Iceland northern wheatears Oenanthe oenanthe leucorhoa departing from Helgoland in spring. Maximum flight range was estimated from the departure fuel load of the birds, considering their current wind profit towards Iceland (circular-linear correlation: n = 30, F(2, 30-3) = 7.88, R(c-l) = 0.23, P = 0.030). The shaded area indicates the 'fast and risky' direct way across the North Sea. The dashed line indicates the farthest distance across the North Sea towards Great Britain (800 km) and the dotted line the distance towards the nearest breeding areas on Greenland (2,500 km) (after Schmaljohann and Naef-Daenzer, 2011).
The Impact of Brand Relationships on Corporate Brand Identity and Reputation—An Integrative Model

The current literature focuses on the cocreation of brands in dynamic contexts, but the impact of the relationships among brands on branding is poorly documented. To address this gap, a concept is proposed concerning the relationships between brands and a model is developed, showing the influence of the latter on the identity and reputation of brands. Therefore, the goal of this study is to develop a brand relationships concept and to build a framework relating it with corporate brand identity and reputation, in a higher consumer involvement context like higher education. Structural equation modelling (SEM) was used for this purpose. In line with this, interviews, cooperatively developed by higher education lecturers and brand managers, were carried out with focus groups of higher education students, and questionnaires were conducted, with 216 complete surveys obtained. Data are analyzed using confirmatory factor analysis and structural equation modelling. Results demonstrate that the concept of brand relationships comprises three dimensions: trust, commitment, and motivation. The structural model reveals robustness regarding the selected fit indicators, demonstrating that the relationships between brands influence brand identity and reputation. This suggests that managers must choose and promote brand relationships that gel with the identity and reputation of the primary brand they manage, to develop an integrated, balanced product range.

Introduction

Consumer brand knowledge is multidimensional and needs to be understood and accounted for, in order to provide the right perspective and background for research on branding as it relates to consumers (Keller 2003). This research aims at developing a concept that defines the relationships among brands and analyzes their influences on brand identity and the reputation of corporate brands. The context under study is higher education. We propose a model for higher education institutions which integrates the particularities of brand relationships in the management of corporate brand identity and reputation.

Academics and professionals value reputation as a precious asset, as it reduces stakeholders' uncertainty about the future and increases the value of goods and services. Where branding is concerned, the strength of reputation lies in the corporate brand's promise; therefore, companies should keep it as a means of managing corporate reputation (Argenti and Druckenmiller 2004). The scientific community believes that brand reputation depends on brand identity, so a good brand reputation is the result of good management of that identity. While brand relationships are known to have impacts on brand identity, the literature on this subject is scarce. Relationships have been traditionally positioned in the theory of networks among companies (Ford et al. 2003; Hakansson and Ford 2002; Snehota 1989, 1995). Although previous studies may acknowledge the influence of brand relationships on the identity of organizations (Snehota 1989, 1995), no empirical studies have supported this.

In the current context, where the environment is increasingly dynamic and transformations are difficult to predict, the development of technology results in increasing interactions among corporate brands, as well as between corporate brands and their consumers. These end-users are now, more than ever, considered as cocreators of brands (Hatch and Schultz 2010; Madden et al. 2006; Payne et al.
2009; Prahalad and Ramaswamy 2004; Da Silveira et al. 2013). Similarly, we argue that the identity of a corporate brand is developed as it adapts to consumers' demands. It develops alongside other recognized brands to build an identity with a desirable reputation among all stakeholders, especially consumers.

In line with the development of the proposed model regarding branding, a number of researchers, Kapferer (1986, 2008), Fombrum (1996, 2006) and Vidaver-Cohen (2007), focus on reputation. Other recent studies on reputation in higher education (Priporas and Kamenidou 2011; Suomi 2014) were based on the reputation of researchers and consultants, so that the model could provide insights from academics with responsibilities in the field. The current study is intended to constitute policy advice to general managers and to those in positions of responsibility for higher education brands.

This study is distinctive because it:
- helps fill a gap in the literature by supplying a concept of brand relationships;
- introduces the concept of brand relationships in the management of brand identity and reputation in higher education;
- relates the concept of corporate brand reputation with the management of corporate brand identity;
- leads brand managers into new perspectives for building a new dynamic construct under a brand relationship approach.

This paper is organized as follows. Section 2 highlights the relevant literature and describes the structure of the proposed framework for managing corporate brand identity under a relational approach. Section 3 provides an explanation of the methods used to assess the concept of corporate brand relationships and the structural model, together with a brief description of the sample. The hypotheses and definitions of the measures used are provided in this section. Section 4 reports our findings and summarizes the model validity and applicability. Section 5 offers a brief discussion of the results and draws the conclusions together with recommendations for future research.

Literature Review

This review provides detailed information about the conceptualization of the constructs and measures used in the developed model, to manage corporate brand identity under a relational approach. The methodology used to assess the references was a search and analysis of the databases at our disposal, such as B-ON, Science Direct, JSTOR, ISI Web of Knowledge, Scopus, Springer Link, and others.

Brand Relationships

The concept of brand relationships needs clarification in order to investigate the influence of relationships on corporate brand identity, since relationships are vital for the interactions between consumers and brands. Consumer-brand interactions extend beyond mere utilitarian benefits (Aggarwal 2004). According to Fournier (1998), relationships constitute a series of repeated exchanges between two parties known to each other, who also evolve in response to these exchanges and to fluctuations in the contextual environment. Fournier (1998) and Muniz and O'Guinn (2001) argue that people form relationships with brands in the same way that they form relationships with each other in social contexts. We can extend this approach to the relationships between brands and state that brands tend to relate to each other in a social context and that this association can be used to attract specific members of the public.
This is not the same thing as a brand alliance, because such alliances involve all joint marketing activities in which two or more brands are simultaneously presented to consumers (Rao et al. 1999; Simonin and Ruth 1998). In this study, brand relationships are mutually oriented interactions among corporate brands whose target is education (universities and other higher education institutions) and other reputed brands which may attract students and create a commitment. The definition of relationships between companies (Hakansson and Snehota 1995) supports this perspective: a relationship is a mutually oriented interaction between two reciprocally committed parties (p. 25). The parties agree that the notion of a relationship is defined by concepts of mutual orientation and commitment over time, which are common in interactions between brands.

The specific characteristics of corporate brands make them different from other brands: their bases are brand promise, multidisciplinary roots, and medium- to long-term gestation. Their focus is external, but they are largely supported by internal stakeholders, who highly value communication and visual identity (Balmer and Gray 2003); these facts make it necessary to adapt the dimensions of brand relationships to these notions. This required that we review the literature on services focused on the theory of relational networks and branding, and search for characteristics that suited the concept of the relationships among brands connected to education services. Five different but related dimensions were used to assess the quality of relationships in the context of services in B2B markets: recognized quality of the service, trust, commitment, satisfaction, and service quality (Rauyruen and Miller 2007), but there is little empirical investigation on the subject. However, the empirical studies of Dwyer et al. (1987) and Moorman et al. (1992) concluded that the quality of relationships is characterized by three dimensions: trust, commitment, and satisfaction. Berry (1995) emphasizes the relationships that customers have with service companies. Beatty et al. (1988) are in favor of trust and commitment to explain the mechanisms underlying stable preferences. Other researchers examined the roles of trust and commitment in the relationships that customers develop with service companies (Garbarino and Johnson 1999; Sirieix and Dubois 1999). Chaudhuri and Holbrook (2001) and Kennedy et al. (2000) found a positive relationship between trust and commitment to consumer products. Most recently, Alkhawaldeh et al. (2020) assessed the effect of brand familiarity and perceived service quality on brand image and explored the role of brand image in students' satisfaction. The findings showed that familiarity with the brand and perceived quality of service had an important and beneficial connection with the image of the brand, and that there was an important and positive connection between brand image and students' satisfaction. Yet, these results were tested in the private sector, whereas our study focuses on public institutions.

Alongside trust, we consider commitment, recently described as an important major aspect of strategic partnerships (Søderberg et al. 2013). We followed the definition of Hardwick and Ford (1986) and Wilson (1995). Commitment influences or benefits internal and external stakeholders' perceptions of future value. Failing to find a scale characterizing the commitment among brands, we developed a scale procedure to select items for this dimension.
Motivation has to do with the internal and external variables stakeholders consider when choosing an educational institution. It is also based on the relationships that the university/institution is able to provide. The scale procedure that we followed had to be adapted, so we decided to develop a scale procedure to select items for this dimension as well, because we could not find a suitable scale in the literature.

Corporate Brand Identity

The past few years have witnessed a burgeoning interest, among both practitioners and academics, in consumers' "love" for brands (Batra and Bagozzi 2012). Brands are frequently represented in the minds of consumers as a set of humanlike characteristics (van der Lans et al. 2014). In this context, recognized higher education institutions tend to evoke feelings and emotions like "love" in students and prospective students. Most of the recognized faculties in the country in which this research was conducted behave like corporate brands by demonstrating specific characteristics that distinguish them from their peers. Legally, they are part of a university that aggregates them, but the brand images of faculties are so strong and distinct from one another that they can be considered as corporate brands. According to Muniz and O'Guinn (2001), there are brand communities of faculties. These authors define a community as a core construct in social thought, and a brand community as a specialized, non-geographically-bound community, based on a structured set of social relations among admirers. We readily become aware of these faculty brand communities when students choose one in which to study after finishing high school. Balmer et al. (2010) used business schools as a model to investigate corporate brand management and identification. In addition, according to Han et al. (2018), the establishment of good interpersonal relationships among community members will enable members to have a sense of belonging and social identity, thereby enhancing customer satisfaction within the community.

Kapferer (1986, 2008) refers to the prism of brand identity as consisting of an internal part (brand "culture," "personality," and "self-image"), as well as an external part ("physical dimension," "relation," and "reflected consumer"). He considers the external part of the identity prism highly important, especially in the case of corporate brands, since it is exposed to constant interactions with the public. "Reflected consumer" is an external and intangible dimension which reflects the way the consumer wishes to be regarded for "using" a certain brand (Kapferer 1986, 2008). This dimension is characterized by the following features: being better prepared for the labor market; being more capable of creating/innovating as successful professionals; and being professionals with high credibility. The relation dimension has tangible and intangible aspects. It defines the behavior that identifies the brand and the way it interacts with its consumers (Kapferer 1986, 2008). It is characterized by the following: friendliness, respect, trust, motherly and close. Finally, the "physical" dimension of brand identity is defined by Kapferer (2008) as an exterior dimension that communicates the physical traits, colors, forms, and qualities of the brand. This dimension has features such as: the physical traits of the university/institution; modernity, sophistication, functional, and adequate.
Brand Reputation Reputation is considered the most valuable asset of an organization, for the following reasons: its positive effects on reducing stakeholder uncertainty about future performance; the trust it creates in the public; the expectation of being rewarded for the excellence of goods and services. Fortune Magazine published a list of The World´s Most Admired Companies, which reveals that a 5 percent increase in reputation of an entity corresponds to a 3 percent increase in its market value. According to Fombrum (1996), such an organization attracts qualified employees and external investors; so, the defense of reputation is the cause of the growing interest in corporate brands. Vidaver-Cohen (2007) based her concept of reputation on the Rep Trak model (Fombrum 2006), which was successfully adapted to a business school. Suomi (2014) and Priporas and Kamenidou (2011) followed the same model in their studies of branding and reputation in higher education. The prime objectives of this study are: to measure and define the concept of brand relationships (relationships among brands) and demonstrate the validity and reliability of its dimensions; to integrate the concept of brand relationships in the management of corporate brand identity as an antecedent of the external part of identity; and to integrate the concept of brand reputation in the management of corporate brand identity, showing that it is a result of the management of the external part of identity under a relational approach. Methodology Service brands act in dynamic contexts, where brand building is developed with the help of consumers. In higher education, this is particularly visible, as students are consumers (they pay to attend university) and staff are part of the university´s identity. We thought it would be appropriate to interview a sample of engineering students, as engineering faculties are recognized for developing highly salient brand identities based on their societal interventions (e.g. building bridges and private infrastructure, developing innovative artifacts, processes, and technologies for industries that are frequently funded by national/international research centers). Research Stages We developed this research into the two stages explained below. 1. Exploratory research used a case study methodology developed in two engineering faculties to find items to characterize the dimensions proposed in the model; and 2. Confirmatory research was pursued by developing a questionnaire for higher education engineering students. A total of 216 complete surveys were obtained. In the first stage, we followed King (1991), Balmer (2001), and Aaker (2004), who stated that senior management members must be selected as informants because they are important in terms of corporate brand management. Further, informants who had day-to-day strategic management responsibilities were also selected. We conducted in-depth interviews with lecturers/researchers and focus groups with students. Interviews were developed for senior management and staff, and focus groups were created for students at undergraduate, master, and doctoral levels. Before the interviews were conducted, several preparatory procedures were undertaken. These included discussions with academics and practitioners national and internationally recognized in higher education (Barros et al. 2011). These discussions indicated the necessity of having a protocol in the interviews and focus groups. 
This initial study marshaled insights from thirteen in-depth interviews (seven in one faculty and six in the other), following a predesigned interview protocol. Each interview lasted for about two hours, and some informants were interviewed more than once. All interviews were recorded with the permission of interviewees. Four focus groups of students were created, two in each institution. Each focus group had six to eight students. To ensure the accuracy of interview data, we conducted member checks (Lincoln and Guba 1985). In addition to interviews, desk research was conducted by consulting faculties' websites and media news. Data were coded first by hand, because we thought this would bring us closer to the data. Both stages were coded separately. In accordance with the general protocol for a previously designed qualitative study, data collection, analysis, and interpretation were undertaken simultaneously, generating tables of synthesized data. Simultaneously, several long meetings were held between the authors to obtain an in-depth understanding of the phenomena under study. This exploratory research suggested that, in contexts of high consumer involvement, the relationships of a corporate brand with highly recognized brands have a definite impact on the identity and reputation of the corporate brand, by influencing the perceptions of the stakeholders and the educational services being offered.

This initial research suggested that corporate brand relationships with recognized brands have impacts on identity and reputation. To confirm this conclusion, a second stage was designed, in which the proposed model (with the dimensions and items selected in the first stage) was tested. See Figure 1. A questionnaire was developed for higher education engineering students; 216 complete surveys were obtained. The data permitted us to validate a new concept defining the relationships among brands from the students' point of view. The investigated relationships were the ones among corporate brands whose mission was education; these included universities and other higher education institutions and strategic partnerships with nationally reputed research centers or internationally reputed universities such as MIT, Harvard, and Oxford, with which these brands interact in the context of conjoint degrees, international mobility, or other forms of interaction. To define each dimension, we adopted a holistic perspective for reviewing the literature on several fields of study, including B2B marketing, psychology, and organizational studies. We developed a procedure to determine the pool of items to use in this research; these are shown in Table 1.
Table 1.
1 - Develop a theory: literature review and discussion with experts.
2 - Generate an initial pool of items for each dimension/scale: theory, secondary data, thirteen interviews with lecturers and university managers, and four focus groups of students (bachelor, master, and doctoral).
3 - Select a reduced set of items based on qualitative judgment: panel of ten experts (national and international, academics, and practitioners).
4 - Collect data from a large pretest sample: pretest on a sample of eighty higher education students.
5 - Perform statistical analysis: reliability; factor analysis.
6 - Purify the measures: analysis of the results of the pretest sample and discussion with experts.
7 - Collect data: survey of higher education students (216 complete surveys).
8 - Assess reliability and unidimensionality: Cronbach's alpha and factor analysis.
9 - Assess validity: construct validity (AVE and CR), discriminant validity (comparison between the square root of AVE and the simple correlations), and nomological validity (examination of significant simple correlations).
Sources: adapted from Churchill (1979) and Malhotra (1981, 2004). AVE - average variance extracted; CR - composite reliability.

Proposed Model and Testing

Regarding the first construct, brand relationships, we found that it is formed by three dimensions: trust, commitment, and motivation. Trust was adapted from existing scales in the literature, but motivation and commitment (although based on the concepts of Hardwick and Ford (1986) and Wilson (1995)) were developed in this research, using confirmatory factor analysis (CFA). The scales used to define the brand relationships construct were found to be valid and reliable. To test the structural model, we used corporate brand identity (external part), which was developed in a previous study. The items used to characterize the physical, relation, and reflected consumer dimensions were the result of previous research pursued by Barros et al. (2016). The authors used the external part of the brand identity prism to argue that the relationships among brands (brand relationships) influence the external part of corporate brand identity and reputation. The result of a well-managed corporate brand identity is a positive reputation. Therefore, brand reputation is the expected result of active corporate brand identity management under a relational approach. It is widely suggested in the literature that identity precedes reputation (Burmann et al. 2009; de Chernatony 1999; Kapferer 1986, 2008). Corporate brands should actively choose and select recognized brands with which to develop relationships, to bridge the gap between brand identity and reputation. The result of this management should be an increase in brand reputation. We also used the unidimensional reputation concept developed by Vidaver-Cohen (2007) to connect with this research.

Data were analyzed using CFA and structural equation modeling (SEM). A structural equation model was developed to test the brand relationships concept as an antecedent to corporate brand identity and reputation. According to Nachtigall et al. (2001), SEM represents the relationship between latent variables (brand relationships, corporate brand identity, and brand reputation in our model) and their manifest or observable indicators (the items that characterize the latent variables). The most prominent feature of SEM is the capability to deal with latent variables. These variables are connected to observable ones by a measurement model (Edwards and Bagozzi 2000).
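To make the modeling step concrete, the sketch below shows how a second-order measurement and structural model of this kind could be specified in Python. It is a minimal illustration assuming the open-source semopy package and hypothetical item names (t1-t3, c1-c3, m1-m3, id1-id3, rep1-rep3); it is not the authors' actual estimation script, and the specification would need to be extended to the full item set.

```python
# Minimal sketch (not the authors' script): a second-order SEM in which
# brand relationships (trust, commitment, motivation) predicts external
# corporate brand identity, which in turn predicts brand reputation.
# Item names t1..rep3 are hypothetical placeholders.
import pandas as pd
import semopy  # assumed available: pip install semopy

MODEL_DESC = """
trust      =~ t1 + t2 + t3
commitment =~ c1 + c2 + c3
motivation =~ m1 + m2 + m3
identity   =~ id1 + id2 + id3
reputation =~ rep1 + rep2 + rep3

brand_rel =~ trust + commitment + motivation

identity   ~ brand_rel
reputation ~ identity
"""

def fit_model(data: pd.DataFrame):
    """Fit the SEM on a DataFrame whose columns match the item names."""
    model = semopy.Model(MODEL_DESC)
    model.fit(data)
    print(model.inspect())           # parameter estimates
    print(semopy.calc_stats(model))  # fit indices such as CFI and RMSEA
    return model
```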
Research Hypotheses

Authors like de Chernatony (1999) and Kapferer (1986, 2008) state that brand identity precedes brand reputation. It is our aim to confirm this hypothesis, in order to be able to argue that the management of corporate brand identity is developed under a relational approach. It follows that the choice and selection of recognized brands with which to develop relationships should be carried out by the brand management team, taking into account the fact that brand identity develops and interacts with the external dynamic environment. We propose three research hypotheses:

Hypothesis 1 (H1). The constructs trust, commitment, and motivation are part of a higher dimension construct named brand relationships;

Hypothesis 2 (H2). The brand relationships construct influences the external part of corporate brand identity; and

Hypothesis 3 (H3). The external part of corporate brand identity influences brand reputation.

We conducted CFA with the three second-order constructs: brand relationships, corporate brand identity, and brand reputation, using a total of 34 measures, detailed as follows:
(a) A list of eighteen items was obtained from qualitative research to measure the constructs that define the brand relationships concept: trust (seven items), motivation (seven items), and commitment (four items);
(b) Eight trust items were considered before testing the validity of the measurement model. The guidelines followed in the literature regarding SEM suggested dropping item T4. In line with this, the trust dimension was characterized by seven items;
(c) A list of thirteen items was derived from previous research by Barros et al. (2016), regarding corporate brand identity (external part) and its measures: physical (four items); relation (five items); reflected consumer (four items);
(d) A list of four items was adapted from the brand reputation scale developed by Vidaver-Cohen (2007). Previously, ten items had been selected from the framework, but we found that this concept was bidimensional, so we selected the dimension that we considered to be more connected with this research. After analyzing the measurement model, we decided to maintain three of the four items.

We began by developing measures for the concepts we intended to connect: brand relationships, corporate brand identity (external part), and brand reputation. First, we tested construct reliability and unidimensionality for the proposed measures for brand relationships: trust, commitment, and motivation. The same procedure was followed for brand reputation. The measures that formed corporate brand identity had been analyzed previously, and the construct had been found to be reliable and unidimensional. Next, we developed the measurement model for the brand relationships concept (using CFA). The results regarding the selected fit indices were considered acceptable. After dropping one item from the trust dimension, we developed the second-order model. The results revealed robustness regarding the selected criteria. Finally, we tested the structural model, using brand relationships as the cause of the salience of the external part of corporate brand identity, and brand reputation as the result of the management of corporate brand identity (external part) under a relational approach.

Unidimensionality and Reliability of Scales for Measuring Brand Relationships, Reputation and Corporate Identity

The first-order model had three factors (trust, commitment, and motivation) and nineteen corresponding reflective indicators, as listed in Tables 2 and 3.
The goal of most research projects is not just to develop unidimensional and reliable measurement scales, but to build and test theory. To summarize the data in terms of a set of underlying constructs, a factor analysis was conducted. We measured the unidimensionality and reliability of the proposed scales. To measure unidimensionality, we applied principal component analysis with varimax rotation and Kaiser normalization to each scale. The scale items that did not show factorial stability were candidates for elimination. To measure reliability, we selected Cronbach's alpha.

Table 2 (excerpt of items and sources):
Trust - scale adapted from Morgan and Hunt (1994) and Gurviez and Korchia (2002).
Commitment - "Attending this university/institution allows me": C1 - to achieve (have access to) important relationship networks; C2 - to be able to play a major professional and social role; C3 - to be influential; C4 - to reach technical and scientific excellence. Concept based on Hardwick and Ford (1986) and Wilson (1995).
Reputation - adapted from Vidaver-Cohen (2007); items up to Rep2.10 - financial performance (fees and value-added programs). Items Rep2.5 to Rep2.10 were all deleted after analyzing the dimensionality of the construct, because SEM demands unidimensionality of the scales, as previously mentioned (see Table 3).
* Items measured on a five-point Likert scale, ranging from (1) strongly disagree to (5) strongly agree.

Next, we analyze the measures of the brand relationships construct. We start by analyzing trust, commitment and motivation. Then we define guidelines and criteria to assess a model for brand relationships.

Trust. This scale was adapted from Morgan and Hunt (1994) and Gurviez and Korchia (2002) and had eight reflective items. We measured the reliability of the scale defined by the selected items. Cronbach's alpha was 0.898 (higher than the 0.8 suggested by Nunnally (1978)). Dekovic et al. (1991) and Holden et al. (1991) characterized reliabilities of 0.60 or 0.70 as good or adequate. However, Ping (2004) stated that higher reliability measures tend to avoid low average variance extracted (AVE) when running the CFA. Regarding dimensionality, the scale was shown to be unidimensional, with an explained variance of 59.213 percent extracted by that component.

Commitment. This was a new scale proposed for this research and consisted of four reflective items. Regarding the reliability of the scale, Cronbach's alpha was high (0.819). We then analyzed the dimensionality of the scale and found that the scale was unidimensional, with an explained variance of 64.898 percent by that component.

Motivation. This was also a new scale proposed for this research and consisted of seven reflective items. Assessing the reliability, the Cronbach's alpha was high (0.886). Analyzing the dimensionality, we found that the scale was unidimensional, with an explained variance of 60.417 percent.

Results regarding the other constructs (external brand identity and brand reputation), the initial measures, the analysis of the dimensionality of reputation, and the final research measures are summarized in Tables 2 and 3. More information regarding the technical procedures can be provided on request. Following these guidelines, we applied the first-order measurement model to the brand relationships concept. A summary of the psychometric properties for the first-order constructs is provided in Table 4. Discriminant validity was tested, and after dropping item T4, no problems were reported, as can be seen in Table 5. Taking these results into account, we tested the second-order model for the brand relationships construct.
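For readers who want to reproduce this kind of scale screening, the following is a minimal sketch of how Cronbach's alpha and a single-component check of unidimensionality can be computed in Python; the item matrix and the simulated data are illustrative placeholders, not the study's own data or scripts, and the rotated solution reported in the paper is not replicated here.

```python
# Minimal sketch: reliability (Cronbach's alpha) and a unidimensionality
# check (share of variance captured by the first principal component).
# `items` is a hypothetical respondents x items matrix of Likert scores.
import numpy as np
from sklearn.decomposition import PCA

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

def first_component_share(items: np.ndarray) -> float:
    """Proportion of variance explained by the first principal component."""
    pca = PCA(n_components=1)
    pca.fit(items)
    return float(pca.explained_variance_ratio_[0])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(216, 1))                 # one underlying factor
    items = latent + 0.8 * rng.normal(size=(216, 7))   # seven noisy indicators
    print(f"alpha = {cronbach_alpha(items):.3f}")
    print(f"variance explained by PC1 = {first_component_share(items):.1%}")
```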
The results showed robustness regarding the selected indicators (see Table 6). We assessed the reliability and validity of the second-order factor for the brand relationships construct. Construct validity is demonstrated by plausible correlations of the second-order construct with first-order indicators, whereas convergent validity can be suggested by an AVE for the second-order construct that is greater than 0.5 (Bagozzi et al. 1991; Ping 2004). The values of CR = 0.87 and AVE = 0.68 are greater than the recommended values, suggesting high reliability and convergent validity for the second-order construct. In line with this, we can conclude that the results support the first hypothesis (H1) and state that the constructs of trust, commitment, and motivation are part of a higher dimension construct of brand relationships.

Model Evaluation

The first analysis of the proposed measurement model suggested that the item Rep2.4 (innovation) be dropped. We re-calculated the reliability and unidimensionality of the scale and found the following for the remaining three items. The brand reputation scale had a Cronbach's α of 0.777 (higher than the threshold of 0.7 defined by Bland and Altman 1997; DeVellis 2003; Nunnally 1978; Nunnally and Bernstein 1994) and a percentage of explained variance of 68.481 percent, which is highly acceptable. In line with these findings, we re-specified the model and conducted CFA again. The results are summarized in Table 7. These fit indices were satisfactory according to the selected guidelines. This means that the second-order construct named brand relationships was related to the second-order corporate brand identity construct (external part) and to the brand reputation construct, formed by three measures. An analysis of all loadings showed that all except one were higher than the threshold of 0.5. The "physical" dimension was the exception; it contributed poorly to the external part of the corporate brand identity construct (0.420 < 0.5). Even so, the model fit was satisfactory. We can conclude that, in contrast to what Kapferer (1986, 2008) suggests, the sample used did not greatly value the physical dimension of corporate brand identity (external part). This is consistent with the sample, which was composed of goal-oriented engineering students. They demonstrated that they assign more value to the reflected consumer (loading: 0.784) and relation (loading: 0.750) dimensions, because they believe that these dimensions are more connected with their lives as students and future professionals. The reflected consumer dimension (the one with the highest loading) was strictly connected with the aspirations of students. However, this finding should be further investigated in other contexts, using other samples. The following standardized residual values also deserve further attention: 2.629 between F4 and Rep2.2; 2.731 between R5 and C3; and 2.716 between R5 and C2. Rep2.4 (innovation) was immediately deleted because it had a high standardized residual. Rep2.2 (network performance) also had a relatively high standardized residual, yet we had to maintain one of them because CFA demands at least three items to run an analysis. We considered Rep2.2 more in line with the theoretical background, and the factor loadings gave us the same cue (Rep2.2 0.813 vs. Rep2.4 0.685). All the other standardized residuals were below the cut-off point of 2.58, as suggested by Jöreskog and Sörbom (2001).
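The composite reliability and AVE figures quoted above can be reproduced from standardized loadings with two short formulas. Below is a minimal sketch, assuming hypothetical loadings for the three first-order dimensions on the second-order brand relationships factor; the placeholder values are ours, not the loadings estimated in the study.

```python
# Minimal sketch: composite reliability (CR) and average variance
# extracted (AVE) from standardized factor loadings.
# The loadings below are placeholders, not the study's estimates.
import numpy as np

def composite_reliability(loadings):
    lam = np.asarray(loadings, dtype=float)
    errors = 1.0 - lam ** 2   # error variances of standardized indicators
    return lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum())

def average_variance_extracted(loadings):
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

if __name__ == "__main__":
    loadings = [0.85, 0.80, 0.82]   # hypothetical trust, commitment, motivation loadings
    print(f"CR  = {composite_reliability(loadings):.2f}")       # ~0.86
    print(f"AVE = {average_variance_extracted(loadings):.2f}")  # ~0.68
```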
The other items were part of other second-order constructs, which were previously analyzed and evaluated and found to be valid (convergent, discriminant, and nomological). Therefore, considering that the mentioned values were far from the cut-off point of 4.0 (Hair et al. 2006) and required no further considerations, and that the model fit was satisfactory, we decided to keep these items and test the structural model. Regarding the modification indices, the one between R5 and C3 had a value of 11.588 (>11). This was expected, given the standardized residual value between both items. However, as mentioned above, the difference was very small, and it was decided to keep both items. All other modification indices (MIs) had values below 11. No problems regarding multicollinearity were found, and no other indices required our attention; with these findings, we tested the structural model.

Final Structural Model Estimation and Testing

By developing this causal model, we aimed to demonstrate that universities/institutes of higher education need to invest in and select recognized brands for developing relationships, as well as manage the corporate brand identity in the part that is more exposed to interaction with the public. In the proposed model, the brand relationships construct was an antecedent of the corporate brand identity construct (external part), and the brand identity (external part) was an antecedent of the brand reputation construct. Corporate brand identity (external part) and reputation were latent variables. Consistent with Hair et al. (2006), Marôco (2010), and James et al. (1982), we added a parsimony fit index (PCFI) to the analysis. We selected PCFI because it represents the result of applying the James et al. (1982) parsimony adjustment to the CFI: PCFI = (d/db) × CFI, where d is the degrees of freedom for the model being evaluated, and db is the degrees of freedom for the baseline model. Values lie between 0 and 1, and better fits are closer to 1. Table 8 summarizes the fit indices of the structural model. As expected, the χ2 was higher than the one calculated with the measurement model, because a recursive structural model cannot fit better (i.e., have a lower χ2) than the overall CFA. The difference between both χ2 values was quite small (727.239 − 726.149 = 1.09), demonstrating that the model was strongly suggestive of adequate fit (Hair et al. 2006). The loadings, standardized residuals, and modification indices maintained approximately the same values. Regarding the standardized residuals: 2.704 between F4 and Rep2.2; 2.805 between R5 and C3; and 2.787 between R5 and C2. The problematic pair of items regarding the modification indices was R5 and C3 (MI = 11.752). These small differences did not require further analysis, because, at this stage, the focus was on diagnosing the relationships among constructs. A good model fit alone is insufficient to support a structural theory. It is also necessary to examine the individual parameter estimates that represent each specific hypothesis (Hair et al. 2006). Table 9 summarizes the main indicators and conclusions. Examining the paths among constructs showed that they were all statistically significant in the predicted direction. The path between brand relationships and external corporate brand identity was characterized by an unstandardized estimate of 0.652 (S.E. = 0.135) and a standardized estimate of βBR.ECBI = 0.876 (p < 0.001).
This means that the regression weight for brand relationships in the prediction of external corporate brand identity was significantly different from zero at the 0.001 level (two-tailed). The path between external corporate brand identity and reputation had an unstandardized estimate bECBI.Rep = 1.302 (S.E. = 0.260), a standardized estimate βECBI.Rep = 0.824, and p < 0.001, meaning that the regression weight for external corporate brand identity in the prediction of reputation was also significantly different from zero at the 0.001 level (two-tailed). We analyzed the variance explained estimates for the endogenous constructs in Table 10 and found that the predictors of the physical construct explained 17.7 percent of its variance. This means that the error variance of the physical dimension was approximately 82.3 percent of the variance of that dimension. For the other constructs, no problems were found. We can conclude that our model supported both Hypotheses 2 and 3. Therefore, the relationships among brands (brand relationships) influenced external corporate brand identity and, in turn, brand reputation. Because theory is essential in assessing the validity of a structural model, we also examined an equivalent model to test an alternative theory; for comparison purposes, the physical dimension was dropped from the previous model. In line with these findings, we accepted the second and third hypotheses and concluded that the brand relationships construct influences the external part of corporate brand identity (H2) and that the brand identity influences brand reputation (H3). Therefore, the management of corporate brand identity depends on investing in and selecting strong relationships with reputed brands, in order to attract students and increase brand reputation.

Discussion and Conclusions

This study presents empirical findings in the field of higher education branding, where studies are mainly limited to business schools (Balmer and Liao 2007; Priporas and Kamenidou 2011; Suomi 2014; Vidaver-Cohen 2007). It contributes to filling a gap in the literature regarding the relationships among brands, as well as their influence on brand identity management and reputation. Students' perceptions of the relationships maintained by their higher education institutions indicate that the concept of brand relationships is formed by three dimensions: trust, commitment, and motivation. Trust and commitment are also considered relevant variables in the car industry (Morgan and Hunt 1994), as well as in a branding context, in the development of a brand confidence scale (Gurviez and Korchia 2002). The relationships concept has traditionally been positioned in the theory of networks among companies (Ford et al. 2003; Hakansson and Ford 2002; Hakansson and Snehota 1989, 1995); however, although the literature may acknowledge corporate brand identity's influence on organizational identity (Hakansson and Snehota 1989, 1995), empirical research on this topic is scarce. An initial step is to further examine the relationships and clients' experience (Keller and Lehmann 2006). This study empirically supports the statements of Hakansson and Snehota (1989, 1995) by connecting brand relationships with the corporate brand identity construct. This finding empirically shows that brand identity can also be managed through issues considered external to identity.
Previous researchers have established links between corporate brand and reputation (de Chernatony 1999), between brand identity and reputation (de Chernatony and Harris 2000), and between reputation, satisfaction, and loyalty (Helm 2007). However, few authors have examined the links among brand relationships and the impact of those relationships on corporate brand identity or reputation. Our research establishes these missing links by empirically testing this impact. It is important to analyze brands in the services sector because of its particular characteristics, especially the intangibility of the relationships that allow services to materialize. We selected higher education in particular because of its high consumer involvement. In the higher education context, students are internal stakeholders and consumers at the same time. It is our view that students base part of their appreciation of the university/institution they attend on the relationships it has with other recognized brands, expressed through trust, commitment, and motivation. Such features improve the visibility of the reflected consumer and their image in society. We measured external corporate brand identity in line with the definition of external brand identity proposed by Kapferer (1986, 2008). We concluded quantitatively that the three dimensions (relation, reflected consumer, and tangible physical) belong together and form a higher-order external dimension. This is a very important input for academics and also for brand managers seeking to adapt the external dimensions of corporate brand identity to their publics. Moreover, the use of quantitative methods allowed us to identify a higher-order dimension called corporate brand identity, formed by five of the six factors proposed by Kapferer (1986, 2008): self-image, personality, relation, reflected consumer, and tangible physical. The brand identity prism of the same author also includes the culture dimension. We included it in this research by drawing on the findings of Deshpande et al. (1993), which allowed us to identify the culture perceived by each student regarding their university/institution. In line with this, we demonstrated that cultures perceived as performance oriented develop more salient corporate brand identities (as measured by model fit). We divided the sample into two groups in accordance with Deshpande et al. (1993) and verified that the group composed of students who perceived their university/institution as performance oriented showed better identity salience than the other group. We consider this of great importance to the management of brand identity in universities/higher education institutions. It reveals the power of students' perceptions and their influence on the corporate brand identity dimensions. Perceptions regarding brand culture must be managed by brand managers so as to create the desired perceptions in students, making the desired corporate brand identity coincide with the existing one. This finding also reveals the influence of the culture dimension on the other dimensions of corporate brand identity, something we have not found in previous studies. This research also revealed the importance of combining qualitative and quantitative methodologies and demonstrated that the latter is applicable to a field of study where quantitative studies are scarce.
To the best of our knowledge, this is the first time that the brand identity prism developed by Kapferer (1986, 2008) has been measured in this context.

Limitations of the Research, Future Directions and Contributions

Even though the sample of engineering students was adequate for the purposes of this research, it would be extremely useful to compare these findings with those of other samples, consisting of students with other characteristics. Such studies would confirm our findings and improve generalizability. A new perspective on the physical dimension in the brand identity prism was revealed; we named it "Intangible Physical". This dimension is contained within the physical dimension defined by Kapferer (1986, 2008). Yet, taking into account the sample used, the research revealed that this dimension, although valid and reliable, did not show enough discriminant validity to be considered a single differentiated factor. Therefore, we consider that other samples with different characteristics should be studied. Furthermore, other services with high levels of consumer involvement, such as insurance or medical services, should be tested for generalization purposes. Regarding the contributions to the literature and to brand management, the conclusions of this research highlight the importance of designing, choosing, and investing in relationships with brands. These relationships should be coherent with the desired brand identity and reputation, in such a way that they co-create value for stakeholders. The brand managers of higher education corporate brands should pay more attention to the process of engaging with other brands that are perceived by students and stakeholders as providing value to their institution.
Is Aspergillus isolated from respiratory cultures clinically significant? Aspergillus is ubiquitous, so the significance of the finding depends on the patient's symptoms, underlying lung condition, immune status, and radiologic findings. It depends on the patient's symptoms, underlying lung condition, immune status, and radiologic findings. Because Aspergillus is ubiquitous, many patients have false-positive findings on respiratory culture and need no additional workup or treatment. But positive respiratory cultures may also indicate underlying serious lung disease. A thorough history to detect symptoms, underlying chronic lung disease, or an immunocompromising state followed by targeted laboratory tests and radiologic evaluation are adequate to ascertain the significance of this finding in the vast majority of patients.

■ THREE MAJOR GROUPS OF DISEASE

Aspergillus is an environmentally ubiquitous and easily aerosolized mold encountered through daily exposure. 1 Broadly, Aspergillus-related lung diseases can be categorized into 3 major groups (Figure 1). Allergic bronchopulmonary aspergillosis (ABPA) is an inflammatory lung condition caused by hypersensitivity reaction to Aspergillus antigens that almost exclusively occurs in patients with asthma or cystic fibrosis. 2 Allergic reactions that do not fulfill the criteria for ABPA include Aspergillus sensitization and severe asthma with fungal sensitization. Invasive pulmonary aspergillosis (IPA). IPA, unlike ABPA and chronic aspergillosis, is a severe, life-threatening, and often systemic disease process caused by Aspergillus species invading blood vessels, classically presenting in severely immunocompromised hosts and critically ill patients. 3 A rare form of IPA is invasive Aspergillus tracheobronchitis. Chronic pulmonary aspergillosis is an umbrella term for a spectrum of disease patterns typically occurring in immunocompetent hosts with underlying lung diseases such as tuberculosis, chronic obstructive pulmonary disease, sarcoidosis, lung cancer, and lung radiation exposure and presenting with cavitary lesions that may progress slowly over time. 4

■ WHEN IS A POSITIVE CULTURE CLINICALLY SIGNIFICANT?

Aspergillus infections, most commonly with A fumigatus and A flavus, account for approximately 15,000 hospitalizations and an estimated $1.2 billion in hospital costs annually across the United States. 5 Therefore, it is not uncommon for physicians to encounter an Aspergillus-positive respiratory culture in the clinical setting. This begets the question, is the finding clinically significant? In an adult patient without significant medical history, isolation of Aspergillus species in respiratory culture is likely a false-positive finding due to contamination or colonization of the respiratory flora by these ubiquitous fungal organisms. In hospitalized patients who undergo routine respiratory cultures, 80% to 90% of those with positive Aspergillus findings do not have significant aspergillosis lung disease. 6,7 Even in patients with proven Aspergillus pulmonary infection, respiratory cultures are positive in 20% to 50% of patients, and as such the isolation of Aspergillus in respiratory cultures is neither sensitive nor specific in the diagnosis of most fungal respiratory infections and is not an integral part of the diagnostic criteria for Aspergillus-related lung diseases.
8 Under these circumstances, in a patient who has no underlying lung disease and no immunocompromised state, we recommend observation. On the other hand, in a patient with respiratory symptoms, critical illness, underlying chronic lung disease, or an immunocompromising condition, detection of Aspergillus in respiratory culture may indicate underlying Aspergillus lung disease. 3 In these situations, we recommend additional workup, and if Aspergillus is proven to be the causative agent, then appropriate treatment should be started.

■ THE HISTORY AND PHYSICAL

It is imperative to assess the patient's history to quickly identify risk factors for pulmonary aspergillosis. We recommend first obtaining a thorough history and physical examination for all patients. Key factors to consider include symptoms such as hemoptysis, chest pain, fever, and recent respiratory illness. Carefully assess for underlying chronic lung conditions including asthma, cystic fibrosis, chronic obstructive pulmonary disease, tuberculosis, lung surgery, radiation, pneumoconiosis, or sarcoidosis. In addition, a thorough evaluation should be done for conditions that may affect the immune system, including leukemia, hematopoietic stem cell or solid-organ transplant, immunosuppressive therapy, and chronic corticosteroid therapy. 3,5-10 In immunocompromised patients who present with sepsis and demonstrate tachypnea, tachycardia, fever, hypotension, and hypoxia, IPA should be considered, and rapid identification and treatment of the causative agent are crucial, as the mortality rate is high.

■ LABORATORY TESTS AND IMAGING

In patients with clinical presentations suggestive of aspergillosis, we suggest pairing a basic laboratory assessment (ie, a complete blood cell count) with radiographic imaging. Initial laboratory findings may narrow the differential diagnosis by identifying eosinophilia, which suggests ABPA, or severe neutropenia, which suggests IPA. For imaging, we recommend high-resolution computed tomography (CT) of the chest rather than chest radiography to evaluate for Aspergillus-related lung disease, as it has superior ability to identify nodules, consolidation, cavitary lesions, and bronchiectasis. The finding of a cavitary lesion with or without intracavitary radiopacity suggests chronic aspergillosis, whereas the "halo" sign or "air crescent" sign suggests IPA (Figure 2), and bronchiectasis is seen in patients with ABPA. 11 In evaluating chest CT findings, it is always useful to compare against previous imaging results and to consider other conditions that may coexist with positive Aspergillus in the respiratory sample.

Galactomannan and beta-D-glucan

In patients with risk factors and suspicious imaging findings, we recommend next testing for the serologic markers galactomannan and beta-D-glucan. The specificity and sensitivity of these tests in the diagnosis of IPA depend on the host and the cutoff value. When a cutoff assay index of 0.5 is used, the combined sensitivity for serum galactomannan has been calculated as 74% (95% confidence interval [CI] 64-82) and its specificity as 85% (95% CI 77-90). Serum beta-D-glucan had a sensitivity of 81% (95% CI 73-87) and specificity of 61% (95% CI 46-75). 10 The detection of galactomannan in bronchoalveolar lavage fluid is more sensitive and specific in the diagnosis of IPA, with a combined sensitivity of 79% (95% CI 65-88) and specificity of 84% (95% CI 74-91).
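Because the clinical meaning of these assays depends heavily on the pretest probability in a given host, the short sketch below illustrates how the pooled bronchoalveolar lavage galactomannan figures quoted above (sensitivity 79%, specificity 84%) translate into positive and negative predictive values. The pretest probabilities are hypothetical and chosen only to show the trend; this is an illustration, not a clinical calculator.

```python
# Predictive values of bronchoalveolar lavage galactomannan, using the pooled
# sensitivity (0.79) and specificity (0.84) cited above. The pretest
# probabilities are hypothetical and serve only to illustrate how host risk
# factors change the meaning of a positive result.

def predictive_values(sensitivity, specificity, pretest_probability):
    true_pos = sensitivity * pretest_probability
    false_pos = (1 - specificity) * (1 - pretest_probability)
    true_neg = specificity * (1 - pretest_probability)
    false_neg = (1 - sensitivity) * pretest_probability
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

for pretest in (0.02, 0.10, 0.30):  # e.g., low-risk ward patient vs. high-risk neutropenic host
    ppv, npv = predictive_values(0.79, 0.84, pretest)
    print(f"Pretest probability {pretest:.0%}: PPV = {ppv:.0%}, NPV = {npv:.0%}")
```

At a 2% pretest probability the positive predictive value stays below 10%, whereas at 30% it rises above two-thirds, which is consistent with the recommendation above to reserve biomarker testing for patients with risk factors and suspicious imaging.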
The procedure is relatively safe and should be considered in patients who have risk factors or have significant radiologic findings that suggest Aspergillus lung disease. If the clinical or radiologic picture suggests ABPA, measuring serum total and Aspergillus-specific immunoglobulin E levels is needed to confirm the diagnosis.

Biopsy is the gold standard but rarely needed

The gold standard for diagnosis of most cases of Aspergillus-related lung disease is surgical biopsy and histopathologic confirmation. Unfortunately, biopsy often cannot be done owing to concomitant pulmonary comorbidities, severe immunocompromise, or critical illness with respiratory failure. Innovations in bronchoscopic procedures for microbiologic and pathologic samples, coupled with advances in radiology and Aspergillus biomarkers, have significantly reduced the need for surgical lung biopsy in these patients.

■ MANAGEMENT

Management depends on the Aspergillus-related diagnosis and the patient's clinical status. When considering conditions such as ABPA or chronic aspergillosis, we suggest waiting until the diagnosis is confirmed before initiating treatment. However, IPA is more rapidly progressive and has a high mortality rate. Therefore, if clinical suspicion is high, therapy should not be delayed for the establishment of the diagnosis of proven or probable disease. In these situations, we suggest starting empiric therapy with a triazole agent while waiting for the results of cultures and biomarkers.

■ ALWAYS CONSIDER THE CLINICAL PICTURE

Due to the ubiquity of Aspergillus, many patients have false-positive findings on respiratory culture and require no additional workup or treatment. However, Aspergillus-positive respiratory cultures may be an indication of underlying serious Aspergillus lung disease. A thorough history to detect symptoms, underlying chronic lung disease, or immunocompromising state, followed by targeted laboratory tests and radiologic evaluation, is adequate to ascertain the significance of this finding in most patients.

■ DISCLOSURES

The authors report no relevant financial relationships which, in the context of their contributions, could be perceived as a potential conflict of interest.

Figure 2. Computed tomography shows multiple pulmonary nodules, some surrounded by ground-glass changes consistent with the "halo" sign (arrow) in a patient with invasive pulmonary aspergillosis.
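To tie the preceding sections together, the sketch below encodes, in a deliberately simplified form, the decision flow this article describes: a positive culture in a patient with no symptoms, no chronic lung disease, and no immunocompromise suggests colonization and observation, whereas risk factors or suggestive imaging prompt further workup. The categories and returned messages are illustrative assumptions only; this is not clinical guidance.

```python
# Highly simplified, illustrative encoding of the evaluation flow described in
# the article for an Aspergillus-positive respiratory culture. Not clinical guidance.

def triage_positive_culture(symptomatic, chronic_lung_disease, immunocompromised, suggestive_imaging):
    if not (symptomatic or chronic_lung_disease or immunocompromised):
        return "Likely colonization/contamination: observation, no further workup"
    if immunocompromised and symptomatic:
        return ("Consider IPA: urgent imaging, galactomannan/beta-D-glucan, "
                "and do not delay empiric triazole therapy if suspicion is high")
    if suggestive_imaging or chronic_lung_disease:
        return ("Possible chronic aspergillosis or ABPA: high-resolution chest CT, "
                "serologic markers, and IgE testing if ABPA is suspected")
    return "Clinical follow-up and repeat assessment"

print(triage_positive_culture(symptomatic=False, chronic_lung_disease=False,
                              immunocompromised=False, suggestive_imaging=False))
```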
Understanding the perceived impact and value of research advocacy initiatives for colorectal cancer

Research advocacy utilizes patient insight to progress research, ensuring that patient values remain a priority. It is integral to inform activities such as designing clinical trials, providing perspectives on Institutional Review Boards (IRBs), and reviewing research grants. As a leader in colorectal cancer (CRC) advocacy, Fight Colorectal Cancer (Fight CRC) developed a formal research advocacy training program in 2015 with the goal of preparing CRC advocates to become the most educated patient voice at the research table. Methods

Introduction

The idea of patient centricity has been defined as "putting the patient first in an open and sustained engagement of the patient to respectfully and compassionately achieve the best experience and outcome for that person and their family." 1 This concept is not new, and now more than ever, pharmaceutical companies, healthcare professionals, advocacy organizations and cancer organizations are leveraging the voice, thoughts, values, preferences, strengths and weaknesses of the patient to excel research, ensuring that patients, the consumers of healthcare, are actively involved. 2 Increasingly, regulatory agencies are requiring the involvement of patients, caregivers and champions of a specific disease type along the research continuum. Also known as research advocates, these individuals in their respective fields are integral to activities such as informing clinical trial designs, providing insight on Institutional Review Boards (IRBs) and voting on research grants. In the early 2000s, it was apparent that the non-scientific viewpoint was not at the center of research. As a representing the collective patient voice. After getting involved in regulatory advocacy, Roach realized that there were very few "squeaky wheels" in cancer research, and research advocacy could impact the community as a whole. Fight CRC also looked to those already in the field, specifically in breast cancer, and predicated the RATS program on the benefits it was having in other disease types. Since the creation of the RATS program, over forty CRC research advocates have completed the foundational training. There are currently over twenty active research advocates who serve on panels with organizations and governmental agencies such as the Department of Defense (DoD) Peer Reviewed Cancer Research Program (PRCRP), the Southwest Oncology Group (SWOG), the Patient Centered Outcomes Research Institute (PCORI), the National Cancer Institute (NCI) and the American Society of Clinical Oncology (ASCO), as well as state-based cancer programs, local institutional review boards, and industry partner advisory boards. With the uptick in the use of research advocates in oncology, it becomes critical to contribute to the body of evidence based on lessons learned, and to understand and define the efficacy of a research advocacy training program and, subsequently, research advocacy as a whole. This includes the perceived impact and value, as well as gaps and opportunities to enhance the ability of research advocates to effectively serve in the research process. As the RATS program enters its fifth year as a formal and sustainable training program, it is necessary to evaluate the perceived value of both the program and research advocacy as a whole, and its impact on CRC research.
Engaging Stakeholders Utilizing Research Advocates

To understand the perceived value of research advocacy from the perspective of professionals in oncology, Fight

Different activities in which this group of stakeholders engaged with research advocates included:
- Reviewing a role-play script to help educate physicians on how to have a best practice shared decision-making conversation with their CRC patients.
- Developing guidelines in Mismatch Repair (MMR) and Microsatellite Instability (MSI) testing for patients being considered for immunotherapy.
- Providing commentary during meetings on what issues are important to them as patients and caregivers.
- Steering committee/task force reviews, which are responsible for reviewing clinical trial protocols for approval.
- Developing an EAO CRC research agenda.
- Reviewing an R01 proposal for clinical trials.
- Serving on peer review panels alongside scientific reviewers to provide their input on the potential impact a proposal may have on the community if successful.

Performance of Research Advocates

On a scale of one to five, with one being completely unprepared and five being completely prepared, we asked stakeholders to evaluate the research advocates. The average score of respondents was 4.51 out of 5. Additionally, participants were asked to rate their level of agreement (strongly disagree to strongly agree) on the following statements regarding their interactions with research advocates. 89% strongly agreed or agreed that the research advocate was engaged in the discussions at an appropriate level and that the group benefitted and saw the value of having a patient advocate voice. 5% strongly agreed or agreed that the research advocate showed up with the right orientation and training to serve in this role as a patient representative. 84% strongly agreed or agreed that the research advocate provided a unique perspective that wouldn't have otherwise been captured. 100% strongly agreed or agreed that the research advocate was willing to learn and that they would engage with Fight CRC research advocates again.

Responses highlighting how the research advocate contributed to shaping the conversation or leading to change included:
- "The input was invaluable in identifying in particular 1) the level of detail some patients may wish to have explained by their physician and 2) phrases that were too technical and needed to be updated to be more clearly understood by patients."
- "The [research advocate] set the tone for an honest, empathetic conversation regarding the realities of living with a cancer diagnosis and how the lives of family members and loved ones are changed following a diagnosis. Additionally [the advocate] brought up multiple ideas for types of educational tools that patients might find valuable."

We also asked what value the Fight CRC research advocates provide the scientific community and their role in cancer research. Responses included:
- "The CRC advocates have not only provided input into CRC, but also in other Gastrointestinal (GI) cancers. They are willing to learn about other cancers and contribute on a wider scale."
- "They have been so essential since we have had many concepts to review in the past six months. They have been contributing and adding value with good written reviews and presenting their reviews on the phone calls with our team."
- "There are two values that I think research advocates will bring to research: (1) constantly reminding those that design and perform research that there are real people who will benefit from their work and (2) pointing out that sometimes what doctors/researchers assume that patients prefer is not actually what patients prefer."

Areas of improvement for research advocates that the scientific community suggested included better representation of the collective patient experience and not a single person's perspective, preparing ahead of time the message that advocates want to convey, and increased biologic and therapeutic knowledge. Overall, on a scale of one to five, with one being completely unsatisfied and five being completely satisfied, the average ranking was 4.68. Additionally, 95% of respondents indicated a 4- or 5-star rating of their engagement with research advocates. In order to understand how the RATS program affected the research advocates' learning experience, we evaluated both change in knowledge and change in confidence. These metrics help measure the contributions of the RATS program on direct outcomes, including the impact of the training program.

Research Advocate Demographics

The top three topics that advocates increased their knowledge of the most included checkpoint inhibitors (56% increase in knowledge), the gut microbiome (55% increase) and precision medicine (55% increase). The three areas where research advocates gained the most confidence included joining a panel, board or research study as a research advocate (59% increase in confidence), being able to set realistic and timely goals as a research advocate (55%) and representing the collective patient voice as a research advocate (55%). From the research advocate's perspective, the majority saw the program having value and impact on research. Comments included:
- "The RATS program is a great way to get prepared advocates onto panels and other volunteer opportunities which require a knowledgeable patient voice. It provides support and peers to bounce ideas off of. It also provides info regarding open opportunities. All of these things can help push forward research by putting a face to the work researchers do and by helping to increase clinical trial accruals."
- "Experience and training gained from a reliable source. When people note you are a 'RATS' member, they know you are in the group of lifelong learning and that you have put time and effort into your advocacy journey. The connections as well as knowledge of our fellow RATS is loaded with experience and knowledge."

Research advocates were asked to rate how well the RATS program equipped them to sit on panels and provide effective input on a scale from one to ten (one being not well prepared to ten being very well prepared). The average score was 8.4. Additionally, 100% of respondents believed their authentic patient voice has been taken seriously by the research community. Research advocates identified several gaps in the RATS program, which included creating a mentorship program, continuing to grow the use of community tools and resources, additional basic science training and annual in-person meetings.
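For context on how figures such as a "56% increase in knowledge" can be derived from pre- and post-training surveys, a minimal sketch is shown below. The rating scale and the pre/post scores are hypothetical, since the underlying survey instrument is not reproduced here; only the calculation is illustrated.

```python
# Illustrative pre/post comparison of self-rated knowledge on a 1-5 scale.
# The topics match those discussed above; all scores are hypothetical placeholders.

pre_post_means = {
    "checkpoint inhibitors": (2.5, 3.9),
    "gut microbiome": (2.9, 4.5),
    "precision medicine": (2.0, 3.1),
}

for topic, (pre, post) in pre_post_means.items():
    pct_change = (post - pre) / pre * 100
    print(f"{topic}: {pct_change:.0f}% increase in mean self-rated knowledge")
```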
Discussion

Prior reviews have identified common themes for effective patient centricity, including "1) authentic and sustained engagement across the research continuum and beyond, 2) clarity in the roles and expectations of all parties engaged in the research, 3) mutual trust and respect, 4) commitment to co-learning and co-production, and 5) access to the appropriate resources, supports and training." 3 It is apparent that for patient engagement to positively impact research, multifaceted approaches are necessary and numerous factors should be considered.

Creating a Highly Efficacious Training Program

In order for research advocacy to have a significant impact and a high perceived value in research, it is important to have a highly efficacious research advocacy training program, which can determine the success of research advocates' engagement in the research process. A well designed and executed program will equip research advocates with the necessary tools to successfully work in tandem with the research and medical community. Previous research conducted by Ivlev et al. has shown that online training developed for patients increases both their knowledge and skills to effectively review PCORI research with a patient-centric focus. 4 Other organizations have evaluated training programs in the "basic competencies of evidence-based medicine (EBM) for selected and motivated patient and consumer representatives" and found that such training is feasible and may have a positive impact on advocacy work. 5 After conducting an evaluation of the current RATS program, it is clear that Fight CRC's training program prepares research advocates to effectively engage in the research process. Based on feedback from the research community, the research advocates were very well prepared to engage in scientifically focused conversations, showed up with the right orientation and training, provided a unique perspective, and were beneficial to the conversation. From the patient's perspective, the majority increased their scientific knowledge and confidence levels after completing training provided by Fight CRC, agreed that the RATS program equipped them to sit on panels and provide effective input, and believed that their authentic patient voice has been taken seriously by the research community. This type of evaluation feedback from both the research community and research advocates supports the notion that a training program can provide advocates with the skills necessary to engage with the scientific community and can support the opportunity for authentic and sustained engagement across the research continuum.

Gaps and Opportunities in Research Advocacy Training

Based on the areas of improvement suggested by the scientific community and research advocates, several gaps in research advocacy training were identified. Although research advocates indicated that they gained confidence representing the patient voice, the research community indicated that research advocates could continue to improve their skills by better representing a group's experiences rather than a single person's perspective. This gap is not unique to the RATS program, or even to research advocacy in general. Prior research has determined that biases may exist in research advocacy and that patient advocates may not be entirely representative of the collective patient experience. According to researchers, it is important to "validate any insight through a variety of other means, such as database analyses or market research."
6 According to the article titled "How Patient Advocacy Helps Advance Cancer Research: A Conversation on Collaboration," published by ASCO, one of the recommendations to effectively serve as a patient advocate in the research field is to "remain grounded in the patient communities." As a research advocate, "this means working with newly diagnosed patients, attending support groups, and keeping in touch with how today's patients experience medical care and treatment." 7 An additional gap was the need to increase advocates' biologic and therapeutic knowledge. According to an evaluation conducted by the Southwest Oncology Group (SWOG) on patient engagement, a primary gap in training included "patient advocates' knowledge regarding clinical trials." 8 Both of these gaps identified by the research community can be addressed and improved by leveraging the RATS training framework to further implement evidence-based, adult learning techniques. Bryan et al. identified essential learning principles in health promotion practice, including: 9
- Adults' previous experience must be respected and built upon.
- Adults need learning approaches that match their background and diversity.
- Adults need to be actively involved in the learning process.

We recommend that training programs like RATS utilize these principles to address the gaps identified. This can include involving advocates in the development of the training, understanding that each advocate comes from a different professional/technical background. The training must also be comprehensive and utilize different learning techniques such as role plays, case studies, in-person events and online modules. By providing co-learning and co-production opportunities and access to the appropriate resources, supports and training, these gaps can be addressed.

The Impact of Advocacy on Research

Understanding how research advocacy impacts the scientific field is essential to continue developing effective training programs and improving the ability of research advocates to work with the research community. Based on the feedback from a diverse community of researchers and advocates, the impact of research advocacy ranges from:
- Identifying the level of detail patients wish to receive about their diagnosis from their physician.
- Updating technical jargon to make research easier to understand for the patient.
- Setting the tone for an honest and empathetic conversation and providing insight not otherwise noted.
- Providing diverse insight from a range of backgrounds.
- Pointing to specific factors that are important to patients and caregivers.
- Increasing clinical trial accruals.
- Shaping outcome measures in clinical trials.

Deverka et al. argue that patient advocates can have a positive impact on research by "identifying relevant research questions; alerting researchers to barriers or facilitators to enrollment; characterizing end points that matter to patients and may be differentially impacted by treatment; distinguishing informed consent or data collection issues that are unclear or burdensome; facilitating peer discussions to obtain a collective patient perspective; and assisting with dissemination and implementation of study results." 8 It becomes clear that research advocacy is not just a concept created out of principle, but rather a need that is essential to integrate along the entire cancer continuum to improve health outcomes for patients in the oncology setting.
Conclusions and Future Recommendations for the Research Community

Based on the impact research advocates can have, it is necessary for researchers and the scientific community to understand how advocates can positively influence their work so they may leverage research advocacy to its fullest potential. This includes understanding that "mutual respect is essential, which requires honesty and authenticity. Transparency and commitment from both parties should begin on day one." 10 Additionally, it is necessary to have "reciprocal relationships in which both parties recognize the value of the other." 11 It is also essential to work with advocates from an array of backgrounds and communities to understand the entire patient experience. Hickey et al. argue, "There are often power differentials between the public and researchers. This is particularly so when the focus is on groups, perhaps considered as marginalized or seldom heard." 11 Not only are there opportunities for research advocacy training programs to improve the value of research advocacy, but opportunities for the

Figure 1. Types of events in which stakeholders have engaged with research advocates
COVID-19 in Bangladesh: A systematic review of the literature from March 2020 to March 2021

Abstract

COVID-19 has undoubtedly absorbed the global public's angst. It has quickly disturbed global life and will have long-term and short-term consequences on several sectors. The goal of this comprehensive review is to discover what research has been done in the year since the beginning of the coronavirus outbreak in Bangladesh. Consequently, the current study examined the pervasiveness and affiliation of social and economic issues, health and psychological issues and individual perceptions, key challenges, strategies and policy systems, public health, online education, agriculture and food security, criminal activities related to the outbreak, Rohingya refugee issues, and the quality of domestic violence behaviors. The review used electronic databases such as Web of Science, PubMed, PubMed Central, and Scopus to find published material. This study reviewed the chosen papers, removing redundancies, and included 43 pertinent articles. Among the 43 study items, fourteen were qualitative (32.56%), six mixed (13.95%), and the rest were quantitative (53.49%). This study helps to identify the issues with current documentation by focusing on interconnected factors and studying COVID-19 events and scenarios. Thus, governments and other stakeholders should reassess these controversial issues to formulate a policy that takes into consideration the situation in Bangladesh and other COVID-19 affected countries.

Introduction

The COVID-19 pandemic, which originated in the city of Wuhan in China at the end of 2019, has touched the borders of more than 200 countries and territories (Worldometer, 2021). Since the devastating Spanish Influenza pandemic in 1918, the 20th century has experienced two more hazardous pandemics: the Asian Flu in 1957 and the Hong Kong Flu in 1968, both of which ensued in Asia. At least four epidemics have been documented in the twenty-first century alone: the Bird Flu in 2009, SARS in 2002, MERS in 2012, and Ebola between 2013 and 2014 (Baldwin et al., 2020). These viruses have an accelerating rate of transformation and cases of mass dissemination from individual to individual. Severe complications are noted for corona patients, including but not limited to acute respiratory syndrome. This lethal virus alone has afflicted around 219 nations and has caused a surge in the prevalence of cases (Alam et al., 2020). On March 11, 2020, the World Health Organization (WHO) declared COVID-19 a pandemic of global scale because of its expansion (WHO, 2020). This emphasizes that COVID-19 has intensified the suffering of people all over the world. In this connection, as economic concerns are linked to public health crises, socioeconomic issues, and livelihoods, the expansion of COVID-19 has undoubtedly forced the global economy to bend its knees.
Comprehending people's response to infectious diseases is also challenging since there is no evidence of the anticipated growth curve for the pandemic. In this regard, different public health interventions have been put in place throughout South Asia. Since September 10, 2020, there have been more than 5 million illnesses and 94,000 deaths in Southern Asia alone, which is a worrisome condition as one-third of the world's population (1.7 billion) lives in highly populated areas with few resources to support their necessities (Siam et al., 2020). In this connection, Bangladesh is a fascinating scenario to investigate from such a South Asian aspect as the hallmark of the pandemic. Bangladesh, a developing country in South Asia with a high population density of a little over one thousand people per square kilometer, has an overall population of 161.3 million (lower-middle income). On March 8th, 2020, Bangladesh discovered its first COVID-19 case, and soon after, on March 18th, 2020, Bangladesh reported its first death from coronavirus infection. Reports found that the patient was a 70-year-old man with a history of numerous medical issues (Reuters Staff, 2020). As of March 7, 2021, there were 550,330 cases and 8,462 deaths in Bangladesh, making it one of the worst-hit countries (WHO, 2021). 60% of the population of Bangladesh is between the ages of 15 and 64 years, and only 4.7% are over 65 (Islam et al., 2020a), which implies that a large share of the population is at risk of COVID-19 infection. Bangladesh is particularly sensitive to infectious diseases, as the population is large and infectious viruses like COVID-19 spread faster in communities with high population density. Consequently, Bangladesh faces a significant challenge in restricting new cases and minimizing fatalities among corona patients. Anxiety, depression, phobias, insomnia, and trauma related to lockdowns have all been exacerbated by the fear of contracting an infectious virus, especially at a time when the anxiety of losing a loved one is high. Rumors and misinformation regarding COVID-19 have also been disseminated in society, and the scarcity of adequate patient treatment units is another contributing factor behind this fear (Ahorsu et al., 2020). Such a pandemic crisis has generated a substantial global social, economic, and public health problem and an unprecedented and multidimensional stressor. In this comprehensive evaluation, researchers in Bangladesh are looking at the types of studies that were carried out in the year after the outbreak began. This report also examines the effects of COVID-19 on different sectors of the country's pandemic preparedness efforts. This study is empirical in nature, was conducted from March 2020 to March 2021, and evaluated 43 original papers for a meta-analysis. This paper has also studied the accepted norms, standards and practices in preventing the virus from spreading; in addition, this study makes necessary suggestions for future investigators.

Inclusion and Exclusion Criteria

Only original articles were included in this study: peer-reviewed, published in full-length format in the English language between March 2020 and March 2021, and indexed in the Web of Science, PubMed, PubMed Central and Scopus databases (see Figure 1). As we aimed to check the studies conducted on the COVID-19 situation in Bangladesh, articles published in other languages were excluded. In addition, other article types such as case reports, commentaries, pictorial essays, review articles, and editorials
were excluded too. The authors sought to evaluate the empirical data on COVID-19 research in Bangladesh from March 2020 to March 2021. A literature search was conducted to identify articles in Web of Science, PubMed, PubMed Central and Scopus with the search terms "COVID-19 in Bangladesh", "socio-economic impact of COVID-19 in Bangladesh", "psychological impact of COVID-19 in Bangladesh", and "COVID-19's impacts on migrant workers in Bangladesh". Initial screening was done by checking the title and the abstract of each article; subsequently, the full text was assessed. After removing the duplicates, 43 articles were considered for the study (Figure 1). The empirical study also identified the year of publication of each article, the applied methodologies, significant findings, authors' distribution, cooperation between countries, and future directions (Table 1). The analysis shows that 32.56% of the studies followed a qualitative design, 53.49% were quantitative in nature, and the rest (13.95%) followed a mixed-method approach. These reviewed articles cover different fields such as the socio-economic area, mental health and psychological issues, people's perception regarding COVID-19, COVID-19 and key challenges, strategy and policymaking systems, the public health sector, online education during lockdown, agriculture and food security, criminal activities amidst the outbreak, refugee and Rohingya issues, and the quality and quantity of domestic violence during the pandemic.

Outcome Variables

Authors' contributions, study design, methodology of the study, publishing year, a summary of the articles, domains of the study, and funding details were outcome variables for this study. Affiliated institutions were considered to ascertain the authors' geographical locations.

Distribution of the Studies

We searched for the original papers in Web of Science, PubMed, PubMed Central and Scopus. We appraised 43 pieces for this review; articles published from March 2020 to March 2021 are included (Table 1).

Data Analysis

To conduct the research and analyses for this study, we used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines (PRISMA; Moher et al., 2009). PRISMA is a basic set of features for reporting in evidence-based systematic reviews and meta-analyses. A four-stage process flowchart and a checklist of 27 items make up the PRISMA guidelines (Moher et al., 2009). The reports relevant to a review are identified, screened, and included according to the criteria described in the flow diagram (Fleming et al., 2014; Selçuk, 2019). The checklist's 27 items cover areas including titles, abstracts, introductions, techniques, findings, discussions, and budgets for the conducted studies (Selçuk, 2019).
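As a quick arithmetic check of the design breakdown reported above (14 qualitative and 6 mixed-method articles, as stated in the abstract, leaving 23 quantitative out of the 43 included papers), the proportions can be reproduced directly:

```python
# Reproduce the reported study-design breakdown of the 43 included articles.
counts = {"qualitative": 14, "quantitative": 23, "mixed-method": 6}
total = sum(counts.values())  # 43 included articles

for design, n in counts.items():
    print(f"{design}: {n}/{total} = {n / total:.2%}")
# qualitative: 32.56%, quantitative: 53.49%, mixed-method: 13.95%
```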
During the initial phases of COVID-19 pandemic, hundreds of publications maintaining the situation in Bangladesh were published.In this connection, after the abstract-identification process, 67 paper were selected based on the initial aim of electronic search resulting in the elimination of 12 papers that were previously found to be submitted twice.From the found 55 papers, 6 were unsuitable for further review as the minute reading of the 6 articles did not provide required information for the current research.After fixing the total number to 49 articles, a full text reading and comprehending process was undertaken and it was found that another 6 articles did not fulfill the requirement and eligibility criteria for the current research.While maintaining the ethical guidelines, requirements of the research questions, criteria for inclusion, and the impetus for conducting the research, we finally opted to include 43 papers. Domains In the assessment, eleven studies have been identified from a socio-economic point of view (25.58%),COVID-19 and Psychological issues (13.95%),five studies (11.62%) encircled COVID-19 and key challenges and people's perception during the pandemic, both strategy and policymaking systems and public health covered (9.30%), online education (6.98%), agriculture and food security (4.65%), at trice criminal activities, Rohingya issues and domestic violence possessed (4.44%) area of the whole assessment (Table 1).Majority of the studies conducted last year in 2020 has been considered for examination while considering the socio-economic factors, mental health, significant difficulties and perceptual points of view.In contrast, this year's studies (2021) evaluated the later subjects like the quality of domestic violence, Rohingya issue, online education, criminal activities, COVID-19 policymaking systems and so on (Table 1).The COVID-19 epidemic has prompted researchers to look at the incidence of intimate partner violence and the factors that contribute to it.The study focused on married women in Bangladesh and collected data from them and their partners.Interpersonal violence among spouses has been linked to a variety of characteristics in this epidemic.Finally, amid the COVID-19 pandemic crisis, this paper IPV on women in Bangladesh showed in-depth, but various tactics (which might not have been addressed in this work) that the authors have probably disregarded. 3. Authors: (Paul et al., 2021) Title: Psychological and livelihood impacts of COVID-19 on Bangladeshi lower income people Methods: Mixed method Sample Size: 576 Population: Lower income group of people According to the study, the influence of COVID-19 on low-income households in Bangladesh's suburbs was investigated.The impact of COVID-19 on the mental health and economic well-being of lower-income people of Bangladesh is examined in this paper.It's possible that the authors have overlooked any potential policy consequences of their research (which may or may not have been addressed in this paper). 4. 
Authors: (Karim et This study examines the state of food security and coping strategies in two rural Bangladeshi towns.As a result of the prolonged lockdown, inhabitants (particularly low-income groups) were unable to find work and faced substantial or complete income losses.There is no difference in the techniques and value of social capital between local administrations and large cities when it comes to food distribution.On the disadvantage side, it appears that the authors are entirely uninformed on the subject.In order to curtail the outbreak, the people of Bangladesh had their understanding, attitude, and practice of COVID-19 and that is studied and examined in this article.This report also acknowledges that some essential information is not yet available in public space because of the massive information influx.An examination of social, psychological, and health issues is also required in order to provide the most accurate version of the study. 12. Authors: (Anwar et al., 2020) Title: COVID-19 and Bangladesh: Challenges and how to address them Methods: Qualitative Sample Size: Not specific Population: Undefined Travel prohibitions, social isolation, remote workplaces, and land locking are among the non-therapeutic techniques being used by numerous countries to limit the spread of coronavirus epidemics.There are specific difficulties in implementing these regulations in Bangladesh because of its high population density.Mitigating measures are difficult to implement in many sections of the country because of the government's limited resources.To put it another way, the primary dataset required here depicts the entire Bangladeshi context scenario in all of its complexity. 13. Study: (Imtiaz et According to this study, young adults' preventive behavior is critical in the fight against pandemics.In spite of the fact that the country's young adult population constitutes a third of the total population, their mental health is of paramount importance.In the eyes of authors, higher education and mental health are the most crucial factors that regulate their significant behavior.As a result of this study's focus on urban educated adults young, the authors put minimum significance on elderly adults or illiterate young people. Authors: (Rana et The impact of the outbreak on the Khulna City Corporation (KCC) in Bangladesh was quantified in this paper.A survey found that 35% of respondents were dissatisfied with the situation, while only 1% were extremely satisfied.The current socioeconomic elements influencing participants' livelihood and personal well-being were evaluated quantitatively.This study's quantitative analysis may not fully capture the spectrum of qualitative mood evaluations. 24. Authors: (Baird et The COVID-19 epidemic is now hitting Bangladesh, albeit mental health issues have received less attention than physical health issues.Results showed that twothirds of the subjects had depression or anxiety, and a third was extremely apprehensive or tensed.Mental health difficulties were more common in people between the age of 18 and 30.An online survey or cross-sectional study has inherent limitations, and there is the potential for psychiatric concerns to be misjudged. 30. 
Authors: (Biswas et al., 2020) Title: A systematic assessment on COVID-19 preparedness and transition strategy in Bangladesh Methods: Qualitative Sample Size: Not specific Population: Undefined The current study assessed Bangladesh's pandemic preparedness and made recommendations for adapting with new reality and recovering normalcy over the long term.Global health systems have been devastated by the COVID-19 epidemic of the year 2020.If COVID-19 readiness and transition strategy are to be thoroughly evaluated in rural and especially the backward parts of Bangladesh, it will not be possible.To get better outcomes and policies, we'll need new and compatible strategies. 31. Authors: (Shoaib & Arafat, 2020) Title: Impacts of COVID-19 on agriculture in Bangladesh Methods: Qualitative Sample Size: 57 Population: Different backgrounds participated relates on agricultural sector The agro-food industry has been legally shielded from any responsibility for the nationwide spread of COVID-19 infections by legislation.No unneeded area or resources should be wasted to keep the production process running smoothly.Preprinted survey forms were used to conduct one-minute phone and email polls.A qualitative analysis alone cannot accurately assess COVID-19's impact on Bangladesh's agriculture and agri-based food industry.The detailed visual analysis is necessary with a mixed approach. The goal of this study is to obtain an understanding of COVID-19 administration in Bangladesh and discover aspects that are significant to developing nations management of the pandemic.The main flaws of this work are that it largely depends on its contents from secondary sources and the material supplied has inaccuracies of data.Once the facts of the matter are known, support and opposition can be determined. 37. Authors: (Islam et al., 2020b) Title: COVID-19 pandemic and level of responses in Bangladesh Methods: Qualitative Sample Size: Not specific Population: Undefined Dhaka, Bangladesh's dynamic center for research consultation and publication is the subject of this descriptive secondary literature study.According to the findings, males with COVID-19 positive status were more likely to die than females with COVID-19 positive status.This was a brief secondary literary evaluation, but the situation in every sector is quite critical.There may be some discrepancies between the study's findings and the original problem scenarios. 38. Authors: (Islam et al., 2020c) Title: Exploring COVID-19 stress and its factors in Bangladesh: A perception-based study Methods: Quantitative Sample Size: 340 Population: Bangladeshi adult populations COVID-19 epidemic has caused widespread mental health issues.An online poll on Bangladeshis (mean age 26.23, SD 6.39, 65.90%) was utilized to study the COVID-19 pandemic.However, despite the self-developed question used to address Bangladeshi pandemic culture, a human stress evaluation connected to COVID-19 may be limited by a lack of proven tools. 39. 
Authors: (Nath et al., 2020) Title: Analyzing COVID-19 challenges in Bangladesh Methods: Mixed method Sample Size: Undefined Population: Different sectors of Bangladesh The authors show the impact of COVID-19 in poor nations like Bangladesh.They said pandemics might treble our poverty rate.It also affects our socio-economic and educational sectors.Behind the pandemic scenario, there are other issues such as the problem of medical equipment, education, agriculture, and industrial sectors.Authors haven't researched other areas like unemployment, unconsumed people, and social life. 40. Author: (Mohiuddin, 2020) Title: COVID-19 and 20 resolutions for Bangladesh Methods: Quantitative Sample Size: Undefined Population: Bangladesh Slum dwellers in Dhaka are fast increasing in great numbers as Bangladesh is a low-middle-income country.A huge number of slum dwellers abandon their jobs every day, according to IEDCR and BRAC which cost an overall loose of BDT 33 billion.Unemployment compels these people to break the law as well.A shortage of basic needs, medical facilities, increased unemployment and less of knowledge regarding NGO's are the concern of the authors. 41. Authors: (Aktar et The research described the existing problems of COVID-19 in Bangladesh in a clear and comprehensible manner.This paper's main goal is to analyze present condition of Bangladesh and to anticipate infections and mortality on a long-term and a short-term scale, using a method known as Infection Trajectory-Pathway Strategy (ITPS).Possible policy (which may not have been addressed in this work) was likely overlooked by the writers. 43. Authors: (Begum et COVID-19 and socio-economic views dealt with issues such as unemployment, family utilization of funds, and reduced cash flow.Long periods of lockdowns have restricted residents' livelihood options, causing severe or total income loss.Similarly, people worry about poverty, hunger, and job loss that has led to non-compliance.COVID-19 looks to be limiting Bangladesh's economic growth.Results shows that even the rich people are facing economic challenge and are becoming poorer compared to their previous position.In contrast in the past, the corona has harmed the oppressed (formal settlers) and micro-entities (formal or informal).People are straining to make ends meet all around the country.The disease would double the poverty rate in United States.This epidemic has reduced monthly income too.COVID-19 also affects mental health.In this connection, human stress is linked to economic hardships, food scarcity, lack of formal education, and limited career alternatives. According to a well-known study by Begum et al. 
(2021), 26.9% of people suffer from anxiety, 52.0% from depression, and 55.6% from severe stress. Every day, stress, worry, depression, and other behavioral disorders made routine doctor visits impossible. Men and women in their mid-20s and early 30s were more likely than other age groups to have poor mental health. Stress, anxiety, and depression are more common in this age group, and in people having COVID-19-like symptoms, than in the overall population. Students are increasingly addicted to Facebook and online gaming, leading to moral decline and mental illness. People in Bangladesh, especially in rural areas, are not aware of the impact of the outbreak. In this regard, nearly all (98.7%) of the people wore face masks in crowded places, 98.8% reported to health officials, and 93.8% cleaned their hands with soap and water (Ferdous et al., 2020). Participants commonly misidentified COVID-19's key clinical symptoms. They had no knowledge that chronically unwell older people are the most susceptible to the coronavirus.
New information can help prevent COVID-19 infection, but due to the flood of information, primary data are not yet available to the public. Other issues include, but are not limited to, the lack of medical equipment, education, agriculture, and the industrial side of the pandemic. Unexpected decisions and a lack of kits and testing opportunities are some of the explored explanations. The shortage of medical equipment and human behavior can contribute to the spread of the virus; as a result, infection and mortality rates among males are higher than among females. With limited resources, many mitigating measures are difficult to implement, making social separation impossible.
Most, if not all, governments did not have any specific strategy for COVID-19. As the coronavirus outbreak expanded globally, governments imposed non-therapeutic measures including travel bans, quarantines, and social distancing; in addition, they targeted COVID-19 hotspot areas and ran awareness-building programs using social media and satellite television. These efforts also aim to increase the number of beds in private and community hospitals. Our research shows that staying at home can significantly reduce the number of COVID-19 cases, in some places by half.
As a result of the COVID-19 outbreak, thousands of people died every day. The health agency lacked a coherent pandemic response. A smaller rural population and particular eating habits may also help reduce the high infection rate. Tackling COVID-19 has both short- and long-term health concerns and implications for Bangladesh. In April alone, there were 5,780 infections, 347 deaths, 775 ICU patients, and 694 severe cases (Khan & Hossain, 2020), and the data show that deaths among males were higher than among females. Forensic evidence from the Regent and JKG hospitals revealed deep-seated wrongdoing. In this connection, the health policy of any country should be transparent and accountable in all aspects and to all stakeholders. Notwithstanding, the cost of health care, medicine and drugs, and necessary food items has almost doubled, and the resources in people's hands have become limited within this time (Islam et al., 2021).
The education sector is among this pandemic's worst victims. The main challenge was to deliver online education to the rural population. Even developed countries have faced challenges of internet connectivity, unreliable internet, and inconsistent energy supply. Rural and urban learners differ greatly in many ways in the online classroom due to background and social mobility. The quality of education depends on several factors, such as socio-economic status, access to technology and technological equipment, parental capacity to provide resources, etc. COVID-19 may have a significant impact on students' education, pedagogy, and syllabus modification, and a reflection of this has already been seen in the SSC and HSC examinations.
Special attention has been given to the agro-food supply chain, but the condition is worrisome due to the impact and spread of COVID-19. Leaving no land barren is strongly encouraged, and the government is urging reduced wastage. The outbreak has deteriorated Bangladesh's already precarious food and nutrition security.
A historical examination analyzed the immediate repercussions of the official stay-at-home order following COVID-19. It found that the total incidence of drug trafficking and related arrests increased by 75% (Rashid, 2020). While actual arrests for illegal drug trafficking increased by 75%, actual and predicted total arrests for car theft decreased. Another important piece of evidence of rising drug trafficking is found in the print media. Extreme poverty is rising, and people are losing jobs. Consequently, they are engaging in illegal activities and crimes, including drug trafficking and hijacking. During the pandemic, the lack of routine monitoring of drug peddling has benefited smugglers.
Due to limited access from outsiders, transmission among the refugees was limited to the community (Homaira et al., 2020). In addition, medical facilities in the camp have saved hundreds of lives. This district was one of the first to test for COVID-19. Sample collection and processing were delayed because of the limited number of diagnostic centers in the Rohingya camps. The pandemic has caused societal unrest and raised disparities that have disproportionately affected women and girls more than men. Domestic violence has increased to a severe level and has become a public health concern. Domestic violence against women diminishes women's quality of life. Recently, 45.29 percent of female respondents reported IPV, 44.12 percent reported emotional abuse, 15.29 percent physical abuse, and 10.59 percent sexual abuse (Rayhan & Akter, 2021). Educated men are less likely to perpetrate IPV than uneducated men. Thus, the pandemic also forces a rethinking of the position of women in society and consideration of laws to tackle domestic violence during epidemics of this nature.
Conclusions
The novel coronavirus has already taken hundreds of lives and wreaked devastation on nations worldwide. COVID-19 has become a global emergency affecting everyone's lives. The unprecedented multidimensional stressor of this pandemic has caused a global social, economic, and medical crisis. The economic burden of this epidemic has worsened food insecurity and strained the health care system. This situation will negatively impact the economy and other industries too, affecting health economics and industrial production in several countries. The current hike in commodity prices, both in the regular market and during crises, is a major concern for middle-income people. It has also been connected to an increase in criminal behavior. Bangladesh has suffered greatly during the COVID-19 pandemic. A well-thought-out plan, for both the short and the long term, is essential at the policy level. Based on our previous knowledge of pandemics and our present understanding of COVID-19, public health needs to be given topmost priority. Improving healthcare facilities and staff training is crucial in this situation. A culture of prevention and control of COVID-19 must be built across the country.
Theoretical Implications
Multiple theoretical ramifications flow from this investigation. First, this study found that COVID-19 had a more significant impact in Bangladesh than elsewhere due to circumstantial factors. Several review papers have adhered to the PRISMA methodology. Afterward, a thorough assessment of 43 pieces was carried out in detail. As a result, we have compiled all of the articles into a single table, and based on those studies and materials, we have summarized our findings. COVID-19 has quickly disrupted global life and will have long-term and short-term consequences in several sectors, including economic issues, health and psychological issues, public health, online education, agriculture and food security, criminal activities, the refugee crisis, and the increase in domestic violence amid the pandemic.
Practical Implications
The findings of this study have practical relevance for the government of Bangladesh and those involved in battling COVID-19, for example the frontline workers. Human lives and livelihoods should be treated as a top priority. In addition, the Government of Bangladesh should take additional measures to target people and fill their knowledge gaps, improve motivation for acceptable behavior, and create a culture of prevention and control of COVID-19 at a national level. Furthermore, 43 pieces of literature are cited in this study, making it a comprehensive evaluation. Future researchers will benefit from this publication because it also serves as a comprehensive compilation of literature reviews. We are hopeful that our study will address a void in the literature regarding the COVID-19 pandemic.
Limitations and Future Research
This study aimed to determine how specific parameters were associated with COVID-19's impact. This article incorporates substantial material to construct a review paper using the PRISMA model. There may be methodological inconsistencies and data-saturation biases. Future research should also examine rural residents' opinions of COVID-19 and related cybercrime, community participation and awareness during and after COVID-19, pandemic fanaticism, and the presence of prevalent risk factors.
Figure 1. Flowchart describing the search strategy and inclusion/exclusion of studies following the Preferred Reporting Items for Systematic Reviews (PRISMA) guidelines. Records identified through the initial search (Web of Science, PubMed, PubMed Central, and Scopus): N = 67.
Table 1. Article Summaries
Title: Prevalence and associated factors of intimate partner violence (IPV) against women in Bangladesh amid COVID-19 pandemic Methods: Quantitative Sample Size: 605 Population: Bangladeshi married women who lived with their intimate partner (aged 16-45 years)
Numerous workers have been sent back to Bangladesh, and many more fear returning. This exacerbates social and economic issues such as unemployment, family finances, and foreign investment. The authors of this study neglect the psychological, physical, and cultural effects of COVID-19 on Bangladeshi migrant laborers.
al., 2020) Title: COVID-19's impacts on migrant workers from Bangladesh: In search of policy intervention Methods: Qualitative Sample Size: Not specific Population: Undefined
Experts estimate the migrant laborers' annual contribution to Bangladesh's socioeconomic growth at $15 billion. The COVID-19 disaster has harmed their social and economic status.
Authors: (Ruszczyk et al., 2021) Title: Contextualizing the COVID-19 Pandemic's impact on food security in two small cities in Bangladesh Methods: Qualitative Sample Size: 201 Population: Lower income groups
Population: Police records for different types of offenders. During the pandemic in Bangladesh, the Dhaka Metropolitan Police made their crime data public so that people could see how certain crimes changed over time in Bangladesh's capital city, Dhaka. An autoregressive moving average model is used to predict how many people are arrested for different types of crimes in 2020. According to what is known so far, drug traffickers may have been able to take advantage of the lack of, or less regular, surveillance. The study does not include important factors such as the offender's situation or social consequences, nor does it include COVID-19's predicted effect on the crime rate, which is also important.
Population: Undefined. In this research, the effectiveness of policies and the impact of lockdown measures on people's mental health and the economy are examined. The costs of limiting the epidemic outweighed the benefits in terms of economic and mental health, according to the researchers. In addition, the study examines the success of various initiatives in other countries to see whether there is a similar trend in Bangladesh. Almost all of the information used in this research came from government entities, so the data-analysis results may be exposed to a degree of bias due to the circumstances of data collection.
Author: (Rashid, 2020) Title: Impact of COVID-19 on selected criminal activities in Dhaka, Bangladesh Methods: Qualitative Sample Size: Undefined Population:
Authors: (Al-Amin et al., 2021) Title: Status of tertiary level online class in Bangladesh: Students' response on preparedness, participation and classroom activities Methods: Quantitative Sample Size: 844 Population: Students of different universities of Bangladesh
To monitor classroom activity and teacher preparation, the Bangladeshi government employed the internet. Despite occasional issues with attendance and class activity, Bangladeshi students are largely prepared for online education. Rural and urban students differ greatly in their performance during online classes. This study of Bangladeshi preparations and classroom activities provides some insight into these areas.
10. Authors: (Ferdous et al., 2020) Title: Knowledge, attitude, and practice regarding COVID-19 outbreak in Bangladesh: An online based cross-sectional study Methods: Quantitative Sample Size: 2,068 (age group 12-64 years) Population: Bangladeshi residents
The purpose of this study was to learn how much Bangladeshis know about COVID-19, what they feel about it, and how they think about it. The fast spread of COVID-19 in Bangladesh has prompted a multitude of responses. This study was cross-sectional, so causal conclusions cannot be verified. Second, self-reporting has limitations compared to face-to-face interviews, including partiality. The survey only included people with internet access; thus, the results may not represent the entire population of Bangladesh.
11. Authors: (Rahman et al., 2021) Title: COVID-19 responses among general people of Bangladesh: Status and individual view toward COVID-19 during lockdown period Methods: Quantitative Sample Size: 616 Population: All Bangladeshi general people
For economic advancement, this study recommends that the government must plan, coordinate, and educate. There is no evidence on other countries' GDP for a comparative analysis, which limits the findings of this research.
Population: Undefined. This article investigates the possible causes of the COVID-19 outbreak in Bangladesh deviating from the epidemiological model. The number of experiments available for observation was restricted, which is the primary reason for the divergence from the model. The study also suggests that longer summers may have played a role in reducing the number of cases reported.
Authors: (Alam et al., 2020) Title: The impact of COVID-19 pandemic on the economic growth in Bangladesh: A conceptual review Methods: Qualitative Sample Size: Not specified Population: Undefined
To assess the impact of COVID-19 on Bangladesh's economy, indicators such as ready-made garments (RMG), gross domestic product (GDP), foreign remittances, financial institutions, and manufacturing are examined.
Population: Undefined. Examines the critical question of whether Bangladesh can withstand additional testing in light of the dangers posed by India and Pakistan. Increased testing levels and increased testing capacity helped Bangladesh find more cases than India and Pakistan combined, according to the research. Data were gathered from the website worldometers.info and evaluated using basic statistical techniques. It is essential to employ more advanced statistical tools rather than relying on a single website for a study of this nature.
Author: (Haque, 2020) Title: The COVID-19 pandemic and the public health challenges in Bangladesh: A commentary Methods: Qualitative Sample Size: Not specific Population: Undefined
The purpose of this study is to shed light on the present coronavirus (COVID-19) pandemic in Bangladesh and the government's response. The ability to identify, inform, and mobilize the community ahead of an emergency was the primary issue investigated in this study. Checkpoints and COVID-19 hot zones are among the options being considered. This study's retrospective design may have hampered objectivity by preventing a full examination of the data.
Quantitative Identification of the Water Resistance Capacity of Composite Strata in Mining Coal Seam Floors
Institute of Resources & Environment, Henan Polytechnic University, Jiaozuo 454000, China; Collaborative Innovation Center of Coalbed Methane and Shale Gas for Central Plains Economic Region, Jiaozuo 454100, China; Collaborative Innovation Center of Coal Work Safety and Clean High Efficiency Utilization, Jiaozuo 454100, China; College of Geosciences and Engineering, North China University of Water Resources and Electric Power, Zhengzhou 450045, China; Jiangmen Branch Bureau of Hydrology, Guangdong Provincial Bureau of Hydrology, Jiangmen 529000, China; Institute of Energy and Chemical Industry, China Pingmei Shenma Group, Pingdingshan 467000, China
Introduction
In the process of mining Permian-Carboniferous coal seams in the North China coalfields, water inrush from a thick Ordovician or Cambrian limestone aquifer is always a threat. These floor aquifers usually bear high water pressure and are rich in water [1,2]. The composite rock layer between the coal seam and the thick limestone is composed of sandstone, mudstone, thin limestone, and a thin coal seam; it is the barrier that resists the high water pressure and prevents groundwater from rising. Therefore, it is of great theoretical and practical significance to quantitatively evaluate the water resistance capacity of these composite rock strata, which makes it possible to scientifically formulate water prevention and control countermeasures and reduce the harm caused by water inrush in an area.
At present, many experts and scholars have carried out research on the water resistance of rock formations. Yang [3] believed that the essence of mining under water pressure is the existence of a combined water-resisting stratum, which gives the rock strata the ability to resist water and relieve pressure, and proposed the concept of the water resistance coefficient. Qian et al. [4] and Miao and Qian [5] put forward the "key layer" theory for judging water inrush from coal floors. Xu et al. [6], based on an investigation of the floor lithology, strata combination relationships, and karst development conditions of a certain wellfield, proposed that at least 25 m of upper Cambrian limestone could be used as an aquiclude, which significantly improved the mining conditions of the coal seam. Zhang et al. [7] studied the relationship between the water resistance capacity of rock strata and their structural composition and combination form through laboratory tests, and proposed that combinations with better water resistance should start with soft rock strata and alternate soft and hard layers. Yin and Hu [8] took structure, in situ stress, and rock permeability as the influencing factors of the water resistance capacity of rock strata and concluded that the water resistance capacity decreases in the order mudstone, siltstone, medium sandstone, and limestone. Feng et al. [9] used experiments and numerical simulations to study the effects of different lithological characteristics of the aquiclude on its water resistance capacity. Li et al. [10], addressing the stability of the water-resisting rock mass of karst tunnels, considered only the safe thickness of the rock mass and did not consider its joints and permeability, which has certain limitations. Sun et al.
[11] studied the influence of thickness, layering, length, cohesion, and internal friction angle on the water resistance capacity of the key composite aquiclude and concluded that the water pressure it can withstand has a quadratic parabolic relationship with its thickness: the greater the thickness, the better the water resistance capacity. Xu et al. [12] analyzed the lithology, void structure, and permeability characteristics of the Fengfeng Formation from the micro to the macro scale and made a quantitative study of its water resistance capacity. Lyu and Xie [13] and Zhang et al. [14] used statistical analysis and laboratory experiments to study the lithologic combination characteristics, rock mechanical strength, water properties, and permeability of the coal seam overburden and evaluated its comprehensive water resistance capacity. Wang [15] studied the water resistance capacity of overlying rock in terms of the strength, anisotropy, rheology, and expansibility of the rock.
As shown above, research on the water resistance capacity of rock layers has developed from the initial consideration of a single factor, such as rock thickness or lithologic difference [16][17][18][19][20], to a comprehensive analysis of multiple factors such as lithology combination, rock mass strength, and permeability, making the evaluation system more and more complete. However, due to insufficient field data and the difficulty of quantifying index factors, the existing evaluation index systems often ignore the influence of geological structure, mining-induced failure, and the equivalent water resistance of different lithologies and rock formations. In addition, in existing research, the analytic hierarchy process or the grey correlation method is commonly used to calculate the weights of the index factors, so the results are strongly influenced by subjectivity. In this paper, the thickness ratio of plastic brittle rock, core recovery rate, thickness of effective aquiclude, fault complexity, composite compressive strength, and equivalent water resistance coefficient were selected as index factors, covering many of the factors affecting the water resistance capacity of the composite strata of the coal seam floor. The comprehensive weight determined by entropy weight theory overcomes the subjective and objective randomness of traditional methods. Fuzzy variable set theory can describe the characteristics of an object under the combined action of multiple index factors and enables the quantitative identification of the water resistance capacity of composite rock strata. The research results are expected to provide technical support for an accurate evaluation of the water inrush risk from coal floors.
In the study area, the J16-17 coal is mined. The thickness of the coal seam ranges from 2 to 3.8 m, with an average of 2.44 m. The main threatening aquifer below the floor is a Cambrian limestone aquifer with a thickness greater than 200 m. The strata between the coal seam and the Cambrian limestone are composed of sandy mudstone, medium-fine sandstone, a thin coal seam, thin limestone, and bauxitic mudstone. The floor rock structure of the J16-17 coal is shown in Figure 1. Its thickness is 68-77 m, with a mean value of 71.3 m.
Index Factor Selection
During coal mining, the main factors controlling the water resistance of the floor composite rock layer are the lithological structure, integrity, thickness of the effective aquiclude, fault development, compressive strength, and permeability.
The floor of the Ji Formation coal seam is an interbedding of brittle sandstone (limestone) and plastic mudstone. The more sand (limestone) layers there are and the thicker each single layer, the more easily the floor is damaged by excavation; the mudstone layers, in contrast, rely on elastic deformation to dissipate stress under load, so the more numerous and thicker the mudstone layers, the better the water resistance of the rock layers. The lithological structure is usually characterized by the ratio of brittle rock thickness to plastic rock thickness revealed by drilling (thickness ratio of plastic brittle rock).
The integrity of the rock mass represents the degree of development of cracks in the rock mass. It reflects the permeability and water-bearing capacity of the rock mass and is therefore an important index for evaluating the water resistance of the formation. Integrity is usually indicated by the ratio of the core length to the thickness of the formation (core recovery rate) obtained during drilling. The lower the recovery, the more fractured the rock, the higher the permeability, and the stronger the water-bearing capacity.
The thickness of the aquiclude is the distance between the mined coal seam and the main threatening aquifer, and the disturbance failure depth of the floor is the depth to which the floor is damaged under mining conditions. The thickness of the effective aquiclude is the difference between the two. Given the lithological structure and rock mass integrity, the greater the effective aquiclude, the stronger the ability to resist water pressure damage and the lower the possibility of water inrush from the floor.
Faults and associated fissures not only destroy the integrity of the rock layers but also act as important water-conducting channels. The more developed the faults in the coal seam floor and the denser the tensional faults, the more seriously the rock layers are damaged and the more frequently water inrush occurs; and once water inrush occurs, the greater the water volume. Fault development is often indicated by the fault complexity.
The ability of a rock formation to resist water pressure is closely related to its compressive strength. Hard limestone and sandstone have high compressive strength but poor plasticity and weak water-retaining ability, while soft mudstone has the opposite characteristics: low compressive strength, good plasticity, and good water-retaining ability. The compressive strength of the multi-rock combination of the coal floor is characterized by the composite compressive strength.
The more permeable the composite rock layer in the coal seam floor, the higher groundwater rises under the same water pressure and the greater the possibility of water inrush. To allow a uniform comparison and analysis of the permeability of different lithologies, the equivalent water resistance coefficient is often used to indicate the permeability of the rocks. When the thickness and the equivalent water resistance coefficient of each rock layer are known, the equivalent water resistance coefficient of the composite rock layer can be obtained as the basis for determining the water resistance capacity of the floor.
Therefore, we selected six factors, namely the thickness ratio of plastic brittle rock, the core recovery rate, the thickness of the effective aquiclude, fault complexity, composite compressive strength, and the equivalent water resistance coefficient, as the index factors for evaluating the water resistance of the coal seam floor rock.
Index Factor Quantization
3.1. Thickness of Effective Aquiclude
The effective aquiclude in the coal seam floor plays the role of blocking water [21]. The calculation formula is [22]

t = M − C_p, (1)

where t is the effective thickness of the aquiclude, m; M is the total thickness of the aquiclude, m; and C_p is the disturbance failure depth of the floor, m. C_p is calculated from the mining geometry (formula (2)), in which L is the inclined length of the working face, m; H is the mining depth of the coal seam, m; and α is the inclination angle of the coal seam. The average disturbance failure depth of the Ji Group coal floor for the No. 8 mine and the Shoushan mine, calculated by formula (2), is shown in Table 1.
We take drilling hole No. 1, located in the No. 8 coal mine, as an example. The total thickness of the floor rock of the J16-17 coal exposed by drilling is 68.14 m, and the disturbance failure depth of the floor rock is 20.41 m (Table 1). Then, according to Equation (1), the effective thickness of the aquiclude is calculated to be 47.73 m. Analogously, the effective water-resisting thickness of the 48 boreholes is calculated, and the contours are drawn (as shown in Figure 2).
3.2. The Complexity of the Fault
Fault complexity is often expressed by the fractal dimension and calculated by fractal theory [23]. The smaller the fractal dimension, the lower the complexity of the faults:

N(r) = A r^(−D_S), (3)

where A is a constant and D_S is the fractal dimension. Firstly, the region containing the fractal is divided into several square blocks according to certain rules. The blocks containing the fractal are numbered one by one, and similarity ratios r = 1, 1/2, 1/4, and 1/8 are taken, respectively, so that the blocks are subdivided into 1, 4, 16, and 64 square grids. The number of grids N(r) occupied by the fractal body at the different scales in a given segment is counted, and the lg(r)-lg N(r) double logarithmic coordinate system is established with formula (3). Then, the slope of the fitted line and the correlation coefficient are obtained by the least squares method, and the absolute value of the slope is the value of the fractal dimension. According to the actual situation, this paper divided the mining area into 600 × 600 mm square blocks. The numbers of meshes N(r) covering faults when r = 600, 300, 150, and 75 mm were calculated successively, and the results are shown in Table 2. The contours of the fault fractal dimension are shown in Figure 3.
3.3. Composite Compressive Strength
In this study, a total of 60 cores were collected from seven boreholes in the J16-17 coal floor, including 25 mudstone, 25 sandstone, and 10 limestone samples. The test results of the compressive strength are shown in Table 3. The order of compressive strength is plastic mudstone < brittle limestone < brittle sandstone. Although brittle rock has a high compressive strength, it easily fractures under load and its water resistance performance is poor, while plastic mudstone exhibits the contrary behavior. Research results show that when the compressive strength of brittle rock is twice that of plastic rock, the water resistance capacity of the latter is 2.5-3 times that of the former [24].
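The box-counting procedure described above amounts to a least-squares fit of lg N(r) against lg r, with the fractal dimension given by the absolute value of the slope. The following is a minimal sketch of that fit; the grid counts are placeholder values rather than the entries of Table 2, and the function name is ours.

```python
import numpy as np

def fractal_dimension(r, n_r):
    """Estimate the fractal dimension D_S and the correlation coefficient
    of the lg(r)-lg N(r) fit, as described for formula (3)."""
    log_r = np.log10(np.asarray(r, dtype=float))
    log_n = np.log10(np.asarray(n_r, dtype=float))
    slope, _intercept = np.polyfit(log_r, log_n, 1)   # least-squares straight line
    corr = np.corrcoef(log_r, log_n)[0, 1]            # correlation coefficient of the fit
    return abs(slope), corr

# Hypothetical counts for one block subdivided at r = 600, 300, 150, 75 mm.
r = [600.0, 300.0, 150.0, 75.0]
n_r = [1, 3, 9, 26]   # placeholder N(r) values, not the measured ones
d_s, corr = fractal_dimension(r, n_r)
print(f"D_S = {d_s:.3f}, correlation = {corr:.3f}")
```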
According to the manual of mine geology [25], the conversion coefficients of the compressive strength for different rocks are shown in Table 4. The compressive strength of the composite rock formation is calculated with formula (4), in which p is the composite compressive strength of the rock strata, MPa; P_i is the average compressive strength of each rock layer, MPa; τ_i is the thickness ratio of each rock layer; and γ_i is the conversion coefficient of the compressive strength of each rock layer, MPa/m.
Taking drilling hole No. 1 of the No. 8 mine as an example, the thickness ratios of the mudstone, limestone, and sandstone are 58.61%, 31.57%, and 9.83%, respectively. The composite compressive strength calculated using Equation (4) combined with Tables 3 and 4 is 4.87 MPa. Following the same procedure, the composite compressive strength of the 48 boreholes can be obtained, and the contour lines are drawn (as shown in Figure 4). The figure shows that the compressive strength of the J16-17 coal floor composite rock layer is 4.00-7.43 MPa, with a mean value of 5.71 MPa. However, under the influence of faults and mining, the water-resisting performance of the coal seam floor rock will be significantly reduced, which shows the necessity of a multifactor evaluation of the water resistance capability.
3.4. Equivalent Water Resistance Coefficient
Referring to the existing literature [26][27][28], the equivalent water resistance coefficients of different rock layers are listed in Table 5. The equivalent water resistance coefficient of the composite rock formation is calculated from the following quantities [28]: q is the equivalent water resistance coefficient; τ_i is the thickness ratio of each rock layer; and λ_i is the conversion value of the equivalent water resistance coefficient. Based on the rock layer thicknesses revealed by drilling and the equivalent water resistance coefficient values listed in Table 5, the equivalent water resistance coefficient of the composite rock layers of the J16-17 coal floor in the 48 drilling holes was obtained. The contours are shown in Figure 5.
3.5. Index Factor Set
The thickness ratio of plastic brittle rock and the core recovery rate can be counted according to the drilling disclosure information. Quantitative values of the six index factors corresponding to the 48 boreholes are shown in Table 2. In order to correspond to the evaluation of the water resistance of the rock formation, the value of the fault complexity factor is taken as the reciprocal of the fractal dimension.
Index Factor Weight
It is very important to choose a scientific mathematical method to determine the weights of the index factors. Referring to existing research results [29], this paper uses the nine-scale AHP and grey correlation analysis to calculate the subjective and objective weights, and then couples them to determine the comprehensive weights.
Subjective Weight
The AHP establishes a hierarchical structure model, constructs a judgment matrix, and calculates the weight of each factor's influence on the overall goal [30,31]. As shown in Table 6, the target layer is the evaluation of the water resistance of the coal seam floor. The criterion layer divides the index factors into three categories, and the scheme layer includes the six index factors.
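Since the display equations (4) and (5) were lost in this extraction, the sketch below assumes they are thickness-weighted sums over the individual layers, which is one natural reading of the variable definitions; the strength and coefficient values are placeholders rather than the entries of Tables 3-5, and the assumed forms should be verified against the original formulas.

```python
# Minimal sketch of the composite-layer calculations, assuming weighted-sum
# forms for formulas (4) and (5); both the forms and the numbers below are
# assumptions, not values taken from the paper's tables.

def composite_compressive_strength(layers):
    """layers: (thickness ratio tau_i, mean strength P_i in MPa, conversion gamma_i)."""
    return sum(tau * p_i * gamma for tau, p_i, gamma in layers)

def equivalent_water_resistance(layers):
    """layers: (thickness ratio tau_i, conversion value lambda_i)."""
    return sum(tau * lam for tau, lam in layers)

# Borehole No. 1 thickness ratios are quoted in the text; the other numbers are placeholders.
strength_layers = [
    (0.5861, 25.0, 0.30),  # mudstone
    (0.3157, 55.0, 0.12),  # limestone
    (0.0983, 65.0, 0.10),  # sandstone
]
q_layers = [(0.5861, 1.0), (0.3157, 0.3), (0.0983, 0.4)]

print(composite_compressive_strength(strength_layers))  # composite strength p
print(equivalent_water_resistance(q_layers))            # equivalent coefficient q
```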
Starting from the criterion level and referring to expert opinions to construct the secondary indicators, the judgment matrix for geological structure and disturbance damage, compression resistance and permeability, and lithology combination and water-blocking performance is constructed. The calculated CR = 0.0000 < 0.1 satisfies the consistency condition, indicating that the constructed judgment matrix is reasonable. The subjective weights of the six index factors are shown in Table 7.
Objective Weight
The objective weights can be calculated using the grey correlation analysis method. According to the index factor values corresponding to the 48 boreholes (Table 2), the overall reference sequence [32] is obtained. Then, the weights of the six index factors [34] can be obtained; their values are shown in Table 8.
Comprehensive Weights
According to the subjective and objective weights, formula (11) can be used to calculate the comprehensive weight [35], where ω_1i and ω_2i are the subjective and objective weights, respectively. Further application of entropy weight theory allows the relative entropy value to be calculated (formula (12)), where H(U, V) is the relative entropy of U and V, and n is the number of indicators.
The comprehensive weights of the six index factors determined by formula (11) are shown in Table 9. The relative entropies of the comprehensive weights with respect to the subjective and objective weights can be calculated by formula (12) (as shown in Table 10). Obviously, the relative entropies are less than 0.1 and tend to 0, which indicates that the consistency between the comprehensive weights and the subjective and objective weights is high [36]. That is, the comprehensive weight can effectively combine the subjective and objective weights, and its weight distribution is more scientific and reasonable.
Level Matrix Establishment
According to the existing index factor data and with reference to existing research results [37], the water resistance capacity of the composite strata in the coal seam floor can be divided into five grades, namely extremely weak (I), weak (II), medium (III), strong (IV), and very strong (V). Assuming that the mean value of an index factor is x and the mean square deviation is s, a recognition interval composed of five levels can be established according to the mean-variance method. From these intervals, the standard interval matrix of the six index factors corresponding to the five levels can be determined, the index factor range matrix can be constructed, and the index factor matrix can be formed.
According to the principle of the fuzzy variable set, the relative membership matrix and the corresponding comprehensive weights are combined, and the level eigenvalues under different parameters can be obtained (formulas (19) and (20)), where i = 1, 2, 3, ..., n, with n the number of indicators; h = 1, 2, 3, ..., m, with m the number of evaluation grades; w_i is the weight of the evaluation index (as shown in Table 9); μ_A(u_ih) is the relative membership degree of the ith index under grade h; and α and β are the optimization criterion and distance parameter, respectively, usually taking the values 1 and 2.
According to formulas (19) and (20), the level eigenvalues of borehole 1 are

H = (3.6544, 3.5047, 4.1654, 3.9117). (21)

The mean of the level eigenvalues is 3.8091.
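Formulas (11) and (12) are likewise missing from this extraction. A common choice, shown in the sketch below, is to couple the subjective and objective weights by a normalized geometric mean (the minimum relative entropy solution) and to measure the relative entropy H(U, V) with the Kullback-Leibler form; both choices, the random-index values, and the example numbers are assumptions rather than the paper's own definitions. The AHP consistency ratio check is included for completeness.

```python
import numpy as np

def consistency_ratio(judgment):
    """AHP consistency ratio CR = CI / RI for a pairwise judgment matrix."""
    a = np.asarray(judgment, dtype=float)
    n = a.shape[0]
    lam_max = float(np.max(np.linalg.eigvals(a).real))
    ci = (lam_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]   # standard random-index values
    return ci / ri

def comprehensive_weights(w_subjective, w_objective):
    """Assumed coupling: normalized geometric mean of the two weight sets."""
    w = np.sqrt(np.asarray(w_subjective) * np.asarray(w_objective))
    return w / w.sum()

def relative_entropy(u, v):
    """Assumed form of H(U, V): Kullback-Leibler divergence of weight vectors."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(np.sum(u * np.log(u / v)))

# Illustrative 3x3 criterion-level judgment matrix (consistent by construction).
judgment = [[1, 2, 4], [1 / 2, 1, 2], [1 / 4, 1 / 2, 1]]
print(consistency_ratio(judgment))            # ~0, i.e. CR < 0.1

w1 = [0.20, 0.15, 0.25, 0.10, 0.15, 0.15]     # placeholder subjective weights
w2 = [0.18, 0.17, 0.22, 0.12, 0.16, 0.15]     # placeholder objective weights
w = comprehensive_weights(w1, w2)
print(w, relative_entropy(w, w1), relative_entropy(w, w2))
```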
Similarly, the mean values of the level eigenvalues of the other boreholes can be calculated, as shown in Table 11 (note: α and β are the parameters of the compound operation of the fuzzy variable sets).
Water-Resistant Capacity Zoning
According to the hydrogeological conditions of the study area and the research results of others [38], the eigenvalue thresholds corresponding to the five levels of water resistance capacity are shown in Table 12. According to the level eigenvalues of the 48 boreholes of the No. 8 mine and the Shoushan mine listed in Table 11, the water resistance capacity grade can be determined using the classification standard in Table 12; the grades are also listed in Table 11. The corresponding zoning of the water resistance capacity of the composite strata in the J16-17 coal floor is shown in Figure 6. The statistical analyses showed that the strong and very strong water resistance areas occupy 23.64% of the total area, the medium area accounts for 58.26%, and the weak and extremely weak areas account for 18.1%. The proportion of area with medium water resistance capacity is relatively high, and the weak and extremely weak areas are relatively small.
Discussion
It can be seen from the calculation process of the AHP that the determination of the index factor weights depends on expert opinions or scores, so the results are easily affected by the subjective will of the experts. The grey correlation method determines the weights from the actual drilling data and can avoid the influence of the evaluator's subjective will, but it uses the same weight set when calculating the optimal solution, so it is difficult to reflect the optimization of the evaluation. Combining the two methods, the comprehensive weight determined by the entropy weight method can not only reduce the interference of human factors but also fully reflect the actual field conditions, and its results are more scientific and reliable.
In current mine excavation projects, the water resistance capacity is usually judged according to the thickness of the aquiclude. It can be seen from Table 2 that the effective water-resisting thicknesses at boreholes Nos. 38, 40, and 41 are 55.25 m, 79.58 m, and 75.73 m, respectively. On this basis, the water resistance of the rock formation near borehole No. 38 would be judged weaker than that near boreholes Nos. 40 and 41. In fact, the existence of the Huoyan fault near boreholes 40 and 41 not only reduces the distance between the coal seam and the aquifer [39] but also destroys the integrity of the coal seam floor [40][41][42]. At the same time, it also changes the migration characteristics of the groundwater [43], which greatly reduces the water resistance of the rock formation. In this paper, the water resistance capacity of the rock strata at borehole No. 38 is determined to be class V, and that of the rock strata at boreholes Nos. 40 and 41 to be class I (Table 11); that is, the water resistance capacity at borehole No. 38 is greater than that at boreholes Nos. 40 and 41. The results are credible.
It can be seen from Figure 6 that the water resistance level eigenvalue of the composite rock layer of the coal seam floor in the west of the Shoushan mine and the southeast of the No. 8 mine is above 2.5, and the water resistance is relatively strong. Therefore, the possibility of water inrush from the floor during coal mining is relatively small. The mining activities of the No.
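The grading step above combines relative membership degrees, comprehensive weights, and the (α, β) parameters into a level eigenvalue that is then compared with the Table 12 thresholds. Because the display formulas of the fuzzy variable set model (including (19) and (20)) did not survive extraction, the sketch below uses the variable fuzzy set form commonly associated with this terminology; the membership matrix, weights, and grade thresholds are placeholders, so it illustrates the mechanics rather than reproducing the paper's numbers.

```python
import numpy as np

def level_eigenvalue(mu, w, alpha, beta):
    """Assumed variable-fuzzy-set level eigenvalue.

    mu: (n_indices, n_grades) relative membership degrees mu_ih
    w:  comprehensive index weights w_i (cf. Table 9)
    """
    mu = np.asarray(mu, dtype=float)
    w = np.asarray(w, dtype=float)[:, None]
    num = np.sum((w * (1.0 - mu)) ** beta, axis=0)
    den = np.sum((w * mu) ** beta, axis=0)
    v = 1.0 / (1.0 + (num / den) ** (alpha / beta))   # comprehensive membership per grade
    v /= v.sum()
    grades = np.arange(1, mu.shape[1] + 1)            # grades I..V coded as 1..5
    return float(np.sum(grades * v))

def classify(h_mean, thresholds=(1.5, 2.5, 3.5, 4.5)):
    """Map a mean eigenvalue to grades I-V using placeholder thresholds."""
    return ["I", "II", "III", "IV", "V"][int(np.searchsorted(thresholds, h_mean))]

# Placeholder 6-index x 5-grade membership matrix and equal weights.
rng = np.random.default_rng(0)
mu = rng.dirichlet(np.ones(5), size=6)
w = np.full(6, 1 / 6)
H = [level_eigenvalue(mu, w, a, b) for a in (1, 2) for b in (1, 2)]
print(H, classify(np.mean(H)))
```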
8 mine and the Shoushan mine are mainly carried out in areas with strong and medium water resistance. The 13230, 13250, 13260, 13270, 13290, and 13310 working faces of the No. 8 mine have stopped mining, and the 12010, 12030, 12050, and 12070 working faces of the Shoushan mine have stopped mining. There have been no floor water inrush accidents in these working faces, which shows that the evaluation results are in good agreement with the actual situation.
Conclusions
(1) Based on a comprehensive analysis of the multiple factors influencing the water resistance of the J16-17 coal seam floor composite rock in the Pingdingshan Coalfield No. 8 mine and the Shoushan mine, we selected the thickness ratio of plastic brittle rock, core recovery rate, thickness of effective aquiclude, fault complexity, composite compressive strength, and equivalent water resistance coefficient as the evaluation index factors. This provides a basis for identifying the water resistance of the composite rock layer of the coal seam floor.
(2) Based on the analytic hierarchy process and grey relational analysis, the subjective and objective weights of the index factors were defined, and entropy weight theory was used to determine the comprehensive weights. Based on fuzzy variable set theory, the mathematical model of the water resistance evaluation was constructed, and the J16-17 coal floor was quantitatively identified. The water resistance of the coal seam floor composite rock layer is divided into five grades, extremely weak, weak, medium, strong, and very strong, laying the foundation for an accurate assessment of the water inrush risk from the coal seam floor.
(3) The areas with strong and very strong water resistance capacity of the J16-17 coal floor composite rock in the No. 8 mine and the Shoushan mine account for 23.64% of the total area, the medium area accounts for 58.26%, and the weak and extremely weak areas account for 18.1%. The proportion of area with medium water-resisting capacity is relatively high, while the weak and extremely weak areas are relatively small. The accurate evaluation and zoning of the water-resistance capacity indicate the direction for the mine to take targeted measures to prevent and control floor water hazards.
(4) The comprehensive weight of the index factors determined by the entropy weight theory reduces
Atomic Force Spectroscopy on Ionic Liquids
Ionic liquids have become of significant relevance in chemistry, as they can serve as environmentally-friendly solvents, electrolytes, and lubricants with bespoke properties. In particular for electrochemical applications, an understanding of the interface structure between the ionic liquid and an electrified interface is needed to model and optimize the reactions taking place on the solid surface. As with ionic liquids, the interplay between electrostatic forces and steric effects leads to an intrinsic heterogeneity, as the structure of the ionic liquid above an electrified interface cannot be described by the classical electrical double layer model. Instead, a layered solvation layer is present with a structure that depends on the material combination of the ionic liquid and substrate. In order to experimentally monitor this structure, atomic force spectroscopy (AFS) has become the method of choice. By measuring the force acting on a sharp microfabricated tip while approaching the surface in an ionic liquid, it has become possible to map the solvation layers with sub-nanometer resolution. In this review, we provide an overview of the AFS studies on ionic liquids published in recent years that illustrate how the interface is formed and how it can be modified by applying electrical potential or by adding impurities and solvents.
Introduction
In general, ionic liquids are defined as liquids that are composed entirely of ions [1]. This definition includes melts of crystalline ionic materials, which are commonly referred to as "molten salts", as well as mixtures of any type of ions in the liquid state. In a narrower sense, "room-temperature ionic liquids (RTIL)" are understood as salts, which often contain organic ions, with melting points below room temperature [2,3]. Starting from their first discovery in 1914, they have become of central interest in fundamental and applied chemistry in recent decades, mainly because they are considered as promising alternative environmentally-beneficent solvents [4]. As their properties can be as systematically tailored as both their constituents, anion and cation, can be varied, they have been labelled "designer solvents" [5]. In general, ionic liquids provide a variety of desirable characteristics for chemical engineering, such as a high chemical and thermal stability, a high ionic conductivity, a negligible vapor pressure, and low flammability [6]. Potential fields of application comprise not only all techniques of chemical synthesis and processing where conventional molecular solvents are employed, but also mechanical engineering where ionic liquids may serve as lubricants and coatings [7]. Moreover, they have attracted enormous attention in electrochemistry, as they can be used as stable electrolytes for sustainable energy storage and conversion by supercapacitors, batteries, dye-sensitized solar cells, or polymer electrolyte membrane fuel cells (PEMFCs) [8]. As an illustration of the composition of ionic liquids, selected cations and anions are presented in Figure 1. The analysis and description of ionic liquids opens a new field of science, not only because of the tremendous number of potential ionic liquids (up to 10^6 binary and 10^18 ternary ionic liquids have been predicted [9]), but rather because ionic liquids behave fundamentally differently than conventional molecular solvents. Classically, liquids are understood as matter without inner structure and long-range ordering [10].
As experiments have shown that this does not hold true for ionic liquids, a paradigm shift in the understanding of their structure has occurred. As Hayes et al. [11] put it, molecular solvents can be described as "homogeneous, coherent, and essentially irregular", but ionic liquids must be regarded as "nanoheterogeneous, coherent, and essentially regular". This regular ordering not only involves the bulk of ionic liquids but also surfaces and interfaces, which is of particular importance when using ionic liquids in electrochemical applications. In molecular solvents, the interface between a solid electrode and a liquid solvent can be described as the formation of a double-layer, as elaborated by Helmholtz, Gouy, Chapman, and Stern [12], but in ionic liquids, the internal ordering also influences the electrochemical processes at the interface [13]. Hence, a multitude of experimental approaches have been applied in order to elucidate the structure of ionic liquids at electrified interfaces. Among them, at first, electrochemical impedance spectroscopy (EIS) must be mentioned as a well-established method for estimating the average double layer thickness and the potential of zero charge (PZC) [14]. This method has been complemented by optical methods of interface analysis such as sum frequency generation vibrational spectroscopy (SFG), or infrared (IR) and Raman spectroscopy [15,16]. High-energy reflection and scattering using X-rays and neutrons have also been employed, revealing that ionic liquids form regular layers at the interface [17,18]. Direct mechanical access to the interface structure has been provided by using a surface force apparatus (SFA) to measure the thickness of anion and cation layers at the interface with utmost precision down to the Angstrom scale but averaging over a large contact area in the square micrometer range [19]. Hence, in recent years, atomic force microscopy (AFM) has become the method of choice for the direct investigation of ionic liquid interfaces, as it probes the layered structure locally by using a microfabricated tip with a radius of curvature of a few tens of nanometers [20]. Typically, force spectroscopy mode has been used, wherein the force between the tip operating in the liquid and the solid interface is recorded during approaching and retracting, which will thus be the focus of this review.
In the following, we will give a brief overview of the present knowledge on the bulk structure of ionic liquids and the formation of the interface layer to electrified surfaces before describing the measurement principle of atomic force spectroscopy. In the main part of this review, selected examples of AFM measurements in different modes are made, and the insights in the electrical double layer formed by ionic liquids and solid surfaces provided by this technique are discussed.
Bulk Structure of Ionic Liquids
As ionic liquids are composed of ions, the main interaction in the liquid phase is the electrostatic force between the oppositely-charged molecules. In contrast to a classical molten salt however, these molecules, in particular the cations, show a significant molecular asymmetry that countervails the coulombic interaction. This effect is the main cause of the characteristic low melting temperature [21,22]. Extensive theoretical and experimental research has been performed, especially on imidazolium-based ionic liquids, and it was found that they order in a self-assembled network, mediated by hydrogen bonding between the molecules in the solid as well as the liquid and even in the gas phase [23,24]. Hence, they can be effectively described as a heterogeneous polymeric "supramolecular fluid" consisting of polar and nonpolar domains at the nanoscale [25]. In order to illustrate the bulk structure of ionic liquids, simulations related to molecular dynamics, together with experimental high-energy scattering methods, are frequently employed [26]. When simulating the forces between the molecules, particular care must be taken into account for steric effects, since the electronic landscape around the ions in the liquid reveals a distinct anisotropy [27]. These effects can be exemplified with the imidazolium cations 1-ethyl-3-methylimidazolium [EMIm]+ and 1-butyl-3-methylimidazolium [BMIm]+ (see Figure 1).
They both consist of a methyl group and an imidazole ring, while the length of their alkyl side chain differs. As the positive charge of the cation is located in the methyl head group, the tail group can be regarded as being nonpolar. By performing molecular dynamics simulations of ionic liquids consisting of imidazolium cations and NO3− anions, Wang et al. [28] could show that the nonpolar tail groups of the cations mainly aggregate due to van-der-Waals forces. In this way, they form domains beside a network consisting of the head groups of the cations and anions, which is kept together by electrostatic forces. In this way, a heterogeneous nanostructure of the ionic liquid is established.
Figure 2. Molecular dynamics simulation of an ionic liquid consisting of imidazolium-based cations and NO3− sandwiched between two vacuum interfaces.
The white spheres represent the nonpolar cationic tail groups, the gold spheres the charged cationic head groups, the red spheres the anions, and the blue lines the cationic side chains connecting the head and tail groups. Adapted from Wang et al. [28].

Figure 2 depicts a snapshot of such a simulation, making the aggregation phenomenon in the bulk visible. Comparing cations with different tail groups, Wang et al. [28] concluded that the tendency for building heterogeneous bulk structures increases with the length of the alkyl chains, which can explain many macroscopic properties such as diffusivity and viscosity. Not only does this hold true for the cations, but it was also found that asymmetric anions with extended alkyl chains can be the cause of nanostructurization and domain formation. This opens up the opportunity to tailor ionic liquids with specific macroscopic properties by changing the configuration of their base ions [29,30]. Depending on the specific geometry of the ions, a variety of complex bulk structures can be adopted; e.g., for the protic ionic liquid ethylammonium nitrate (EtNH3NO3), a defined structure composed of cationic layers with the anions distributed in between the layers as well as being intermixed with the cationic layers was proposed [31]. With the addition of further atomic species, the complexity increases. One example is triphilic ionic liquids, consisting of fluorinated, polar, and apolar constituents [32]. In these liquids, the cations typically carry alkyl tails while the anions have fluorous tails [33]. It was found that these fluorous tails agglomerate via self-assembly on the mesoscopic scale. Hence, three distinguishable types of domains are present, namely the charged polar domain, the fluorophilic domain, and the lipophilic hydrocarbon domain [33]. They establish a filamentary network throughout the bulk phase of the liquid, thus introducing complex order phenomena [34]. As the building blocks of the network, which have fundamentally different local properties, coexist on the nanoscale, triphilic ionic liquids promise the possibility of precise local control of chemical reactions and transport properties and have thus become an emerging research topic [33]. A further step towards more structural complexity is the admixture of additives or cosolvents, e.g., by mixing different ionic liquids [35].
When two or more ionic liquids are mixed, the resulting system cannot simply be understood as a conventional "double salt mixture"; rather, all constituting ions interact with each other, forming an individual nanostructure that determines the properties of the new liquid [36]. In pure ionic liquids, mixture effects related to impurities must also be considered. Water in particular is a common impurity, as it can easily be absorbed from a humid atmosphere or purposely added to the ionic liquid to alter the mechanical or electrochemical properties [37–40]. The uptake of water was found to depend specifically on the hydrophilicity of the anions, thus pointing to a close interaction between water and the ionic liquid nanostructure [41–43].

Figure 3. Schematic illustration of the mechanism of the intermixing between water and an ionic liquid. The ionic liquid is displayed in yellow (cations) and blue (anions) and the water in red (oxygen) and white (hydrogen). Adapted from Ma et al. [44].

In general, the addition of water to an ionic liquid can be divided into different steps, as elaborated by Ma et al. [44].
This scheme is illustrated in Figure 3. With small water contents, an embedment of water in the ionic liquid nanostructure can be established without clustering effects, and thus this stage can be described as a solvent mixture (step II). With increasing water content, the ordered structure of the ionic liquid gets distorted and dissociation and hydration occur such that individual ion pairs can evolve (step III). Finally, at high water concentrations, a complete or partial dissociation and hydration takes place (step IV), and thus the mixture can be treated as a classical electrolyte. Depending on the individual properties of the ions, the dissolution of the ions in the water can also be related to the formation of micelles [45]. This has been studied in detail with ionic liquids containing the 1-alkyl-3-methylimidazolium family of cations, and it has been found that micelle formation can accelerate specific chemical reactions, thus showing potential for the practical application of ionic liquid-water mixtures in catalysis [46]. Considering the mechanisms of the formation of a heterogeneous nanostructure in the bulk of ionic liquids described so far, it must be expected that they will also influence the formation of the interface structure. As the presence of an interface breaks the symmetry of the system, a different nanostructure than that in the bulk evolves [47]. This can be readily seen in the simulation shown in Figure 2 where, at the surface of the liquid modelled as the interface to the vacuum, a preferential agglomeration of cationic tail groups occurs. When the ionic liquid is in contact with a solid, the interface structure is also affected by the topography, atomic arrangement, reactivity, local charge, and polarizability of the solid surface [11].
This interface structure becomes additionally distorted when an electrical potential is applied to the solid, as is the case in electrochemical cells such as PEMFCs that employ ionic liquids as an electrolyte [48]. In consequence, a complex layered structure is formed in the ionic liquid that determines the kinetics of ion transport and reactions at the interface.

The Electric Double Layer at Solid-Liquid Interfaces

Investigations of the electric double layer formed in a liquid electrolyte close to an electrified interface date back to the early days of electrochemistry. In 1853, Helmholtz described the electric double layer as a simple capacitor formed by ions of the electrolyte being attracted by the oppositely-charged surface [49]. Later, Gouy and Chapman improved this conception by taking into account the fact that entropic effects lead to a smearing of the region that screens the surface charge, and described the double layer as a diffuse space charge zone [50,51]. This model has been found to work well in describing electrolytes with low ion concentrations at moderate voltages. Stern further improved the understanding of the interface by re-introducing a "Helmholtz-like" layer directly at the surface, thus removing the deficiencies that the Gouy-Chapman model reveals at high voltages, as it does not account for the finite size of the ions [52]. In this Gouy-Chapman-Stern model, a linear potential drop occurs in the inner compact layer, followed by a quasi-exponential decrease in the outer, diffuse layer. An illustration of the discussed models of the electrical double layer is shown in Figure 4. These models are so-called "primitive models", as they treat the ions as charged spheres, the solvent as a dielectric continuum, and the electrode as an ideal metal. When trying to understand the interface between an electrified solid surface and an ionic liquid, the above-mentioned models could not be successfully applied, mainly because they were developed for diluted electrolytes where the ion distribution can be described by Poisson-Boltzmann statistics [13]. To obtain a more comprehensive insight into the mechanism of the double layer, so-called "non-primitive models" were developed to take into account the properties of the metal electrode and the discrete nature of the solvent [53]. For instance, the ion-dipole-jellium model treats the solvent and solute as an ensemble of hard spheres interacting with the electron gas of the electrode [54,55]. In this way, it became possible to describe the processes at the interface with particular consideration of the charge transfer from a quantum mechanical perspective. However, for modelling ionic liquids, further parameters must be considered, as the electrolyte forms a kind of "ionic plasma" [56]. It was found that their electric double layer shows similarities to conventional molten salts, where overscreening or crowding effects become relevant [57]. Describing such a complex interface structure is challenging, and approaches using mean-field theory, Landau-Ginzburg theory, and molecular dynamics have been considered [13,57–59].
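To make the contrast between these classical pictures and the ionic liquid case more tangible, the following minimal sketch evaluates the potential decay predicted by the linearized Gouy-Chapman-Stern description for a dilute model electrolyte. It is only an illustration of the "primitive" models discussed above; the electrolyte concentration, Stern-layer thickness, and electrode potential are assumed values chosen for readability, not parameters taken from the cited studies.

```python
import numpy as np

# Linearized Gouy-Chapman-Stern sketch: linear potential drop across the Stern
# layer, exponential (Debye) decay in the diffuse layer. All values are
# illustrative assumptions for a dilute 1:1 electrolyte, not for an ionic liquid.
eps0 = 8.854e-12            # vacuum permittivity, F/m
eps_r = 78.0                # relative permittivity (assumed equal in both regions)
kT = 1.381e-23 * 298.0      # thermal energy at 298 K, J
e = 1.602e-19               # elementary charge, C
n0 = 10.0 * 6.022e23        # bulk ion number density for a 10 mM 1:1 electrolyte, 1/m^3
phi_0 = 0.05                # electrode potential vs. bulk, V (small enough for linearization)
d_stern = 0.5e-9            # assumed Stern-layer thickness, m

lambda_D = np.sqrt(eps0 * eps_r * kT / (2.0 * n0 * e**2))   # Debye screening length, m
phi_d = phi_0 * lambda_D / (lambda_D + d_stern)             # potential at the Stern plane
                                                            # (field continuity, equal permittivity)

def phi(x):
    """Potential at distance x (in m) from the electrode surface."""
    return np.where(x <= d_stern,
                    phi_0 + (phi_d - phi_0) * x / d_stern,      # compact (Stern) layer
                    phi_d * np.exp(-(x - d_stern) / lambda_D))  # diffuse (Gouy-Chapman) layer

print(f"Debye length: {lambda_D * 1e9:.2f} nm")
for x_nm in (0.0, 0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"x = {x_nm:4.1f} nm -> phi = {float(phi(x_nm * 1e-9)) * 1e3:6.2f} mV")
```

For an ionic liquid, where the ion concentration is far from dilute, such a monotonic Debye-type decay is exactly what fails, which is why the overscreening and crowding pictures mentioned above are required.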
As ionic liquids are entirely composed of ions, one can assume in a simplified approximation that ions with a charge opposite to the electrified electrode will be attracted to the interface.
As illustrated in Figure 5 for a positive electrode, the first layer would be formed by anions, followed by a cation layer. Depending on the strength of the coulomb interaction, this layering will be repeated towards the bulk several times. With increasing distance from the interface, the degree of ordering will decrease until, finally, the bulk structure is adopted. In ionic liquids composed of asymmetric ions, the orientation of the molecules also becomes relevant, and the interaction between charged head groups and uncharged tail groups can result in complex field-induced charge arrangement processes [60]. In order to verify or improve on these theories, direct experimental insight into the double layer structure is essential. In particular, the mechanical methods AFM and SFA have contributed significantly to elucidating the interface structure in recent years. While the AFM investigations will be discussed in detail below, we present as a first example SFA measurements performed on [C4MIm][NTf2] (also referred to as [BMIm][TFSI]) by Gebbie et al. [61], illustratively revealing the shape of the extended double layer.
As is shown in Figure 6, the force between a positive Au and a negative mica surface was measured, revealing an attraction starting at a distance of more than 25 nm before a repulsive force prevailed in the last nanometers in front of the surface. The data were modeled by the interplay of double layer attraction, van der Waals attraction, and steric repulsion. These results indicate that a long-range double layer, as well as a dense short-range nanostructure, which will be in the scope of the following discussion, exists simultaneously.

Principles of Atomic Force Spectroscopy

Shortly after the epoch-making invention of the scanning tunneling microscope (STM), Binnig et al. developed the atomic force microscope (AFM) in the 1980s [62]. It utilizes a sharp tip attached to a microcantilever scanning the surface. In the standard contact mode, the deflection of the cantilever is measured via optical lever amplification when approaching the surface, thus serving as a force sensor based on Hooke's law:

F = k · δc,  (1)

where F is the force, k the spring constant, and δc the cantilever deflection [63]. Using a feedback loop, the force can be kept constant, and thus the topography of solid samples can be mapped by moving the sample via piezo elements. With this relatively simple measurement technique, impressive results on the micro- and nanoscale, even up to atomic resolution, have been obtained [64].
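As a minimal illustration of this force-sensing principle, the sketch below converts a raw photodiode signal into a deflection via an assumed optical-lever sensitivity and then into a force via Hooke's law (Equation (1)). The sensitivity and spring constant are typical illustrative values, not calibration data from any of the cited works.

```python
import numpy as np

# Minimal sketch of contact-mode force sensing via the optical lever and
# Hooke's law (F = k * deflection). All parameter values are assumptions.
invols_nm_per_V = 50.0      # optical-lever sensitivity (deflection per photodiode volt), assumed
k_N_per_m = 0.2             # cantilever spring constant, assumed

photodiode_signal_V = np.array([0.00, 0.02, 0.05, 0.10, 0.20])  # example readings

deflection_nm = photodiode_signal_V * invols_nm_per_V
force_nN = k_N_per_m * deflection_nm        # k [N/m] * delta_c [nm] gives the force in nN

for V, d, F in zip(photodiode_signal_V, deflection_nm, force_nN):
    print(f"signal {V:4.2f} V -> deflection {d:5.1f} nm -> force {F:5.2f} nN")
```

In practice, both the sensitivity and the spring constant have to be calibrated for every cantilever, e.g., from a force curve on a hard surface and from a thermal noise spectrum.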
As further measurement modes, the non-contact and the intermittent or tapping mode have been developed, based on the fact that the resonance frequency of the cantilever changes when the tip comes into the vicinity of a surface [65]. These are dynamic measurement modes, where the cantilever is oscillated and the frequency or amplitude of the oscillation is used as the input signal of the feedback loop. These techniques have the advantage that the direct interaction between the tip and sample is minimized, and thus more reliable measurements at higher scan speeds are possible. Dynamic measurements are not only possible using the optical lever technique but also with piezoelectric sensors such as the qPlus tuning fork, allowing for measurements with utmost signal-to-noise ratio [66,67]. AFM measurements can be performed in various environments without complex sample preparation and do not rely on the presence of vacuum conditions, which are required for other high-resolution microscopy techniques such as electron microscopy. Hence, the AFM has also become a versatile tool for the investigation of liquid samples in chemistry and life science [68]. It has been extensively employed for investigating the morphology changes of metallic surfaces during electrochemical polarization in various electrolytes, including ionic liquids [69–73]. Here, we focus on the investigation of the spatial extension of the electric double layer at a solid/liquid interface, where information about the structure out-of-plane, normal to the surface in particular, is needed. Therefore, the tip is approached and subsequently retracted from the surface, and the force between the tip and sample is recorded as a function of their separation [74]. This mode is called atomic force spectroscopy (AFS), as the force-separation relationship depends on the material properties, thus allowing for mapping of the material contrast [63]. In recent years, AFS has become the method of choice for the investigation of interfaces in liquid and is in particular applied in nanobiology for investigating, e.g., cells, proteins, or antigen-antibody interactions [75]. During these investigations, where the samples are typically immersed in an ionic buffer, effects of interface modifications at charged surfaces have already become obvious [76]. Theoretically, the interaction between the tip and a solid surface can be approximated by the Lennard-Jones potential ULJ (see Figure 7a), calculated in a two-atom approach. It includes an attractive van der Waals interaction and a repulsive interaction related to Coulomb forces and the Pauli exclusion principle:

ULJ(s) = ε [(σ/s)^12 − 2 (σ/s)^6].  (2)

Here, ε is the minimum of the potential well, s is the separation between the two atoms, and σ describes the separation of zero force [74]. On a real surface in contact with an ionic liquid, the situation can be more complex, as is illustrated by the schematic force curve shown in Figure 7b. At first, one must consider that in conventional AFMs, the force is recorded as a function of the position of the piezo scanner, z. This is, however, not the real distance between the tip and surface. As the cantilever is also moved by the piezo scanner when in contact with the surface, one must subtract the cantilever deflection δc, which has been measured by the optical lever method, from the scanner movement z in order to obtain the true distance, called separation s, between the tip and sample [77]:

s = z − δc.  (3)
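The following short sketch shows how Equations (1) and (3) are applied to a raw approach curve: the measured deflection is subtracted from the scanner movement to obtain the true tip-sample separation, and the same deflection gives the force. The spring constant, the starting separation, and the synthetic data points are assumptions made up for the example.

```python
import numpy as np

# Convert a raw approach curve (scanner movement z, cantilever deflection delta_c)
# into force vs. separation, following Equations (1) and (3). Here z is counted
# as the scanner travel towards the surface from the start of the curve, so the
# net gap closure is the scanner movement minus the (repulsive) deflection.
# All numbers are synthetic, illustrative values.
k = 0.5                 # spring constant, N/m (assumed)
s_start = 10.0          # tip-sample separation at the start of the approach, nm (assumed)

z_scanner = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 9.0, 9.5, 10.0, 10.5])   # nm travelled
deflection = np.array([0.0, 0.0, 0.0, 0.1, 0.4, 0.8, 1.1, 1.5, 1.9])    # nm, repulsive

force_nN = k * deflection                           # Equation (1): F = k * delta_c (N/m * nm = nN)
separation = s_start - (z_scanner - deflection)     # Equation (3): subtract deflection from scanner movement

for s, F in zip(separation, force_nN):
    print(f"separation {s:5.2f} nm  ->  force {F:5.2f} nN")
```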
Secondly, one must consider that during approach and retraction, instabilities in the cantilever motion can develop, leading to jumps [78]. When the gradient of the force between the tip and surface during the approach becomes larger than the spring constant of the cantilever, a jump-to-contact (JTC) occurs. Similarly, a jump-off-contact (JOC) occurs when the spring constant of the cantilever becomes larger than the gradient of the adhesive forces between the tip and sample [79]. These two instabilities, related to the maximum adhesive forces called pull-on force and pull-off force, mark a hysteresis between the approach and the retraction curve, containing information about the material properties of the sample [78]. When approaching solid surfaces in a liquid, the tip is not necessarily in direct contact with the surface after the jump-to-contact occurs, as solvation layers constitute an additional barrier that the tip must puncture before reaching the solid surface. Thanks to the high sensitivity of an AFM in the z direction, this effect can be used to resolve the internal structure of the interface layer. In the case of ionic liquids, dense alternating anion and cation layers are present at the interface that cause additional characteristic jumps in the force curve, preferentially during the approach, as illustrated in Figure 7b [80]. At each layer, the force increases at constant separation until it is high enough that the tip can puncture the layer and jump to the next layer. These jumps depend strongly on the charging of the system. In the case that the tip or the surface is not charged, the measured thicknesses of the layers correspond to the dimension of the anion or cation, respectively, but when a charged surface is investigated with a charged tip, the observed jumps correlate with the dimension of an ion pair [81]. In this manner, detailed information about the mechanical properties and the thicknesses of the interface layers of ionic liquids can be collected with sub-nN and sub-nm resolution. In order to achieve such a high resolution, the condition of the tip is the key parameter. While extremely sharp Si tips can be prepared via microfabrication, their morphology is significantly altered by impurities. When handling the tips in ambient conditions, organic contaminants can adsorb on the tip, impairing its performance. Hence, tip preparation methods involving plasma treatment, sputtering, or UV irradiation have been found to significantly increase the quality of AFM measurements in liquids [82]. A further source of tip contamination can be solid state impurities such as elements from the backside coating of the cantilever that have diffused to the tip apex [83]. Similar contaminations can also evolve in the course of the measurement if material from the solid surface or particles already present in the liquid get attracted to the tip and thus change its geometry [84]. Besides these unwanted effects, controlled tip modification by adsorbates can also be used to increase the sensitivity and selectivity of the tip, as shown below [85].
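The layer-rupture events described above can be extracted from an approach curve with a few lines of analysis, for example by searching for sudden inward jumps of the separation that occur after a force has built up. The sketch below does this for a synthetic curve; the threshold values and the data are illustrative assumptions, not a published analysis routine.

```python
import numpy as np

# Detect layer-rupture ("puncture") events in a synthetic approach curve:
# a rupture is counted where the separation suddenly jumps inward by more than
# a threshold after some force has built up. All values are illustrative.
separation = np.array([5.0, 4.2, 3.5, 3.4, 3.35, 2.6, 2.55, 2.5, 1.8, 1.75, 1.7, 1.0])  # nm
force      = np.array([0.0, 0.1, 0.4, 0.9, 1.5,  0.6, 1.2,  2.0, 0.8, 1.6,  2.6, 1.1])  # nN

jump_threshold = 0.3       # minimum sudden decrease in separation counted as a rupture, nm (assumed)
min_rupture_force = 0.5    # force that must have built up before the jump, nN (assumed)

inward_step = -np.diff(separation)                     # positive where the tip moves closer
is_rupture = (inward_step > jump_threshold) & (force[:-1] > min_rupture_force)

for i in np.where(is_rupture)[0]:
    print(f"layer ruptured at ~{separation[i]:.2f} nm, apparent thickness "
          f"{inward_step[i]:.2f} nm, rupture force {force[i]:.2f} nN")
print(f"number of detected layers: {int(is_rupture.sum())}")
```

On measured curves, such an analysis is typically combined with the statistical treatment of many curves discussed further below.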
A further parameter that can influence the measured forces is the velocity of the tip during approach and retraction. This has become obvious in particular when performing measurements in liquid for biological applications such as protein unfolding, where the measured forces are related to the retraction speed [86,87]. Depending on the molecular structure of the interface, such dynamic processes also cannot be excluded when investigating ionic liquids by AFS and should thus be kept in mind.

The Electrified Interface

The history of investigating the ionic liquid/solid interface layer by means of AFS goes back to the year 2006, when Atkin and Warr performed the first measurements on the atomically flat model substrates mica, silica, and graphite [88].
They investigated the three different ionic liquids 1-ethyl-3-methylimidazolium acetate (C2MImAc), propylammonium nitrate (PAN), and ethylammonium nitrate (EAN). As is exemplarily shown in Figure 8, the force-separation curves exhibit oscillations with periods corresponding to the dimension of the ion pairs of the respective ionic liquids. A strong correlation of the surface charge, surface roughness, and molecular arrangement with the observed layered structure was found. In particular, in the case of graphite, a pronounced layering was identified, with up to seven layers for C2MImAc. Hence, it was concluded that steric effects such as the local interaction of the alkyl chains of the ionic liquid molecules with the carbon atoms of the substrate are decisive for the formation of the interfacial structure. Comparing the ionic liquids under investigation, it has been determined that an increased internal flexibility (e.g., PAN compared to EAN) results in a weakening of the layer structure and an increase in the compressibility. As the mechanical measurement of layered interfaces is prone to fluctuations, it is useful to measure not just one but a set of several tens of force curves (as a drawback, information about dynamic effects, which are often investigated in biological AFS, gets lost). Subsequently, their average can be calculated and presented using 2D histograms, as is shown for the [EMIm][TFSI]/mica interface in Figure 9. Four series of measurements obtained using different set points of maximum force are presented, illustrating a major challenge when performing AFS in liquids.
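A minimal sketch of this averaging step is given below: a set of synthetic, noisy force-separation curves is accumulated into a 2D histogram from which a mean force profile can be read off. The oscillatory model curve and all parameters are invented for the illustration and do not reproduce the data of Figure 9.

```python
import numpy as np

# Average a set of force-separation approach curves into a 2D histogram, as is
# commonly done to suppress curve-to-curve fluctuations. Synthetic data only.
rng = np.random.default_rng(0)

def synthetic_curve(n=400):
    """One noisy approach curve with decaying force oscillations (layering)."""
    s = np.linspace(0.0, 6.0, n)                                   # separation, nm
    f = np.exp(-s / 1.5) * (1.0 + np.cos(2 * np.pi * s / 0.8))     # oscillatory decay, nN
    return s, f + rng.normal(0.0, 0.05, n)                         # add measurement noise

curves = [synthetic_curve() for _ in range(50)]                     # "several tens" of curves

# accumulate all points of all curves into one 2D histogram
all_s = np.concatenate([s for s, _ in curves])
all_f = np.concatenate([f for _, f in curves])
hist, s_edges, f_edges = np.histogram2d(all_s, all_f, bins=(120, 80))
print("histogram shape (separation bins x force bins):", hist.shape)

# a simple average profile can also be read off: mean force per separation bin
bin_index = np.digitize(all_s, s_edges[1:-1])
mean_force = np.array([all_f[bin_index == i].mean() for i in range(120)])
print("mean force in the first five separation bins (nN):", np.round(mean_force[:5], 2))
```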
Due to the calculation of the separation via the cantilever deflection (Equation (3)) and the increasing hardness of the layers when approaching the surface, it is demanding to determine the point of zero separation, which denotes the point at which the tip is in direct contact with the surface [20]. In all measurements, the layered interface structure of the ionic liquid can be clearly observed, but when choosing too low a set point, the real surface cannot be reached. This also places a requirement on the cantilever selection. Using a soft cantilever with a low spring constant has the advantage of high sensitivity at the expense of a limit on the maximum force. Furthermore, the large deflection of soft cantilevers can result in a distortion of the force curves due to the nonlinearities of the photodetector [20]. Hence, one should select a cantilever that can provide sufficient force to reach the surface. In the example of Figure 9, a set point of 14 nN was needed to puncture all liquid interface layers. This was concluded from the fact that a further increase of the force set point to 18 nN did not reveal further layers. An additional indication for hitting the surface can be derived by regarding the retraction curves, which often show a distinct change in adhesion after direct tip-surface contact [20].

Using a tip attached to a cantilever as a mechanical probe, one should also keep in mind that the probe itself can have an impact on the measurement. The radius of commercial AFM tips is typically in the range of a few nanometers, which is much larger than the distance between the ionic liquid interface layers that are supposed to be measured. Additionally, the presence of the tip leads to the formation of a second solid-liquid interface where, depending on the material and charge of the tip, a layered interface structure can also evolve. Hence, one interface structure is probed with another.
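The set-point criterion used above can be expressed in a few lines: the maximum force is raised until a further increase no longer reveals additional layers, which indicates that the innermost position corresponds to the solid surface. The layer counts per set point in this sketch are invented for the illustration.

```python
# Minimal sketch of the set-point criterion: find the smallest maximum force
# after which raising the set point no longer reveals additional layers.
# The layer counts below are illustrative assumptions, not measured data.
layers_per_setpoint = {        # set point (nN) -> number of detected layers
    6.0: 3,
    10.0: 5,
    14.0: 7,
    18.0: 7,
}

setpoints = sorted(layers_per_setpoint)
sufficient = None
for lower, higher in zip(setpoints, setpoints[1:]):
    if layers_per_setpoint[higher] == layers_per_setpoint[lower]:
        sufficient = lower
        break

if sufficient is not None:
    print(f"A set point of {sufficient} nN already punctures all layers "
          f"({layers_per_setpoint[sufficient]} layers detected).")
else:
    print("Increase the set point further: each increase still reveals new layers.")
```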
Black et al. [20] performed tests employing two different tips on the same [EMIm][TFSI]/mica interface. They compared a SiN tip with a Au-coated SiN tip. As is shown in Figure 10, the layered interface structure can be seen in both measurements. The forces needed to puncture the layers are significantly different and mainly relate to the tip radius, which was up to three times larger for the Au-coated tip than for the uncoated one. Despite this, the measured thicknesses of the layers are the same for both tips, as becomes obvious from the separation histogram on the right of Figure 10. Hayes et al. [89] extended such measurements to electrified surfaces and studied the [Py1,4][FAP]/Au(111) interface under applied potential. Figure 11 shows the resulting force-separation curves at open circuit potential (OCP) and at positive and negative voltage. The oscillatory behavior, which was already present at OCP, became modified upon polarization, leading to an increase in the force necessary to puncture the innermost layers and an increase in the number of detected layers (Hoth et al. [90] even measured up to 12 ion pair layers at −2 V). It has been observed that this effect is more pronounced at negative potentials, suggesting that the [Py1,4]+ cations adsorbed on the surface are a more effective template for establishing an ordered interface structure.
These findings by Hayes et al. [89] finally provided direct experimental evidence that the electric double layer in ionic liquids cannot be described by the traditional Gouy-Chapman-Stern approach and that an oscillatory capacitive double layer is present, as is illustrated in Figure 5. Hence, in recent years, AFS investigations under applied potential have been extended to a variety of different combinations of ionic liquids and conducting surfaces relevant for electrochemical applications [91,92]. For example, Zhang et al. [93] performed detailed measurements on the Au(111) surface in contact with 1-n-butyl-3-methylimidazolium hexafluorophosphate ([BMIm][PF6]). They were able to resolve three to four layers using AFS at the interface. By recording force-separation curves at potentials between −0.9 V and +0.5 V, they concluded that the electric double layer is confined to a few charged interior layers, but that additional neutral exterior layers are also present. In pyrrolidinium-based ionic liquids, the [Py1,4]+ cations were found to be strongly bound to the surface, leading to the evolution of a wormlike reconstruction pattern under cathodic polarization [96]. This illustrates that even small changes in the molecular configuration of the cations can drastically change the electrochemical properties of the ionic liquid. Moreover, the configuration of the anions can also have an impact on the interfacial structure, as was addressed, e.g., by Liu et al. [97] for [FAP]-based ionic liquids on the Au(111) surface. AFS measurements under cathodic polarization revealed that the number of layers and the force necessary for their puncturing increase systematically with anion size, and it was concluded that this is also the cause of differences in macroscopic electrowetting behavior [97]. In order to extract further information from the force-separation curves, approaches to controlled tip modification have been developed. One method to enlarge and accurately define the size of the contact area is to use colloidal probes instead of conventional pyramidal AFM tips [98]. Using a spherical silica probe with a diameter of 5 µm, Li et al. [99] were able to elucidate the interface structure between Au and imidazolium-based ionic liquids with cations having different alkyl chain lengths, such as [EMIm]+, [BMIm]+, and [HMIm]+. They found that under cathodic polarization, [BMIm]+ was relatively weakly bound to the surface, while [EMIm]+ as well as [HMIm]+ showed stronger interactions. For [EMIm]+, this was attributed to the stronger interaction of the imidazolium ring with the surface; for [HMIm]+, steric effects related to the increased length of the alkyl chain, leading to a stronger solvophobic force, were held responsible. A second approach to tip modification was followed by Zhong et al. [85]. They attached alkylthiols with different end groups, such as -NH3+, -COO−, and -CH3, to the tip, thus creating charge-sensitive probes, as illustrated in Figure 12a.
They investigated the interface between 1-octyl-3-methylimidazolium hexafluorophosphate ([OMIm][PF6]) and Au(111). Regarding the force-distance curves in Figure 12a measured under anodic and cathodic substrate polarization, distinct differences between the neutral, positive, and negative tips can be seen. Positively charged tips are more sensitive for layer detection at positive substrate potential and vice versa. This becomes obvious with regard to the force that is necessary to puncture the third sublayer as a function of the potential (Figure 12b). A "camel-shaped" dependency can be observed for all tips, indicating that the structural organization of the electric double layer increases with the potential at both polarities, but at positive potential, the positively-modified tips show the highest force, while at negative potential, the force for the negatively-modified tips is the highest. Based on these findings, Zhong et al. [85] elaborated a molecular model of the interface, as shown in Figure 12c. At negative substrate potential, the cations are modeled to be directly attached to the surface via their imidazolium ring, where the positive charge is located. At positive bias, the smaller anions form the first interface layer with the cationic imidazolium ring attached to it, while the cationic tail groups point away from the surface. In the intermediate state, at zero charge, a checkerboard-like arrangement of the ions was assumed instead. Aside from the aforementioned investigations of the electrified Au surface, studies of the interface between ionic liquids and graphite have been conducted, as the understanding of carbon-based electrodes is of high significance for building fuel cells or electrochemical supercapacitors [100]. Black et al. [101] focused on the interface structure between 1-ethyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide ([EMIm][TFSI]) and highly oriented pyrolytic graphite (HOPG). The measured 2D force-separation histograms are presented in Figure 13 for the unbiased and the positively and negatively polarized surface. In all cases, an ordered structure with up to four layers has been revealed. In order to gain a deeper insight into the ionic arrangement of the double layer, the measured curves were correlated with molecular dynamics simulations.
The densities of anions and cations as a function of the separation from the surface were calculated from simulations performed at different surface potentials, as shown in Figure 13 for +1 V and −1 V. In contrast to the uncharged surface, where different ion orientations are present, the electrostatic force induced by the application of the potential leads to a predetermined ion orientation in the first ion layer and a charge screening confined to approximately 1 nm. Further from the surface, a layered interface structure without preferential ion orientation was predicted. The conclusions derived from the simulation closely match the experimental results, showing the versatility of AFS for investigating the electric double layer in ionic liquids. Jurado et al. [102] extended the investigations of carbon-based materials to the graphene surface. They investigated the interface formed with [EMIm][TFSI] at zero charge as well as under applied potential, concluding that the strong interaction of the imidazolium ring with the graphene surface via π-π bonding results in a preferential orientation of the ions and in an overscreening, even on the uncharged surface. When a potential was applied, the behavior of the double layer changed from overscreening towards crowding, leading to a decrease in the force within the layered structure. While the investigation of model systems of atomically flat surfaces provides valuable insights into the fundamental nature of the double layer formation of ionic liquids, one must question whether these results can be directly transferred to real applications where rough electrodes are present. In order to scrutinize this issue, Sheehan et al. [103] conducted a study employing silica nanoparticles deposited on silica wafers. Additionally, they employed sharp tips as well as different silica colloids as AFM probes to modulate the roughness of the system confining the ionic liquid. Comparing the force curves measured in [HMIm][TFSI] shows that the roughness has a distinct influence on the layered structure. The presence of even a low density of nanoparticles modifies the measured double layer across the entire surface. Furthermore, the use of a rough probe contact was found to be related to a localized arrangement and agglomeration of the ionic liquid at "multi-asperity" contacts [103].
These results show that the presence of rough surfaces introduces further complexity to the system, affecting the static structure of the ionic liquids as well as their dynamics. It should also be noted that although an ordered double layer structure at solid/liquid interfaces has been observed by AFS as well as by complementary methods for a large variety of ionic liquids, there seems to be no universal behavior. As a counterexample, Radiom et al. [104] reported on the interface between trihexyl(tetradecyl)phosphonium bis(mandelato)borate ([P6,6,6,14][BMB]) and different atomically flat surfaces, such as Au, mica, or silica. By investigating force-separation curves, no hints of the presence of a multilayer structure, but only a diffuse repulsion, were found. A possible explanation for this striking difference to conventional ionic liquids was suggested with regard to the molecular structure of [P6,6,6,14][BMB], which has a large and flexible anion with delocalized charge and a compact and inflexible cation [104]. This combination could exclude a regular arrangement of the molecules at the interface.
Influence of Water

When hydrophilic ionic liquids are employed for applications under environmental conditions, water is the major impurity that significantly influences the nanoscale structure of the double layer [106,107]. In particular, in ionic liquids used as electrolytes in fuel cells, water is generated as the product of the reaction between hydrogen and oxygen and can thus alter the reactions at the electrode surfaces [41]. Even small amounts of water can modify the interface structure of an ionic liquid under applied electric potential [108]. On the interface between butyl-1-methylpyrrolidinium bis(trifluoromethylsulfonyl)amide ([BMP][TFSA], also referred to as [Py1,4][TFSI]) and Au(111), Zhong et al. [109], using AFS, observed that upon a water increase from 30 ppm to 90 ppm, the stiffness of the first interface layers decreased significantly while the thickness of the layers increased. This indicates that at this low concentration, water adsorbs on the surface and distorts the defined orientation of ions in the layer. A comparable result was obtained by Wang et al. [110] on different 1-alkyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide ([CnMim][TFSI]) ionic liquids. Water was found to distort the interfacial structure formed with mica, which also impacts the wetting behavior of the liquid [111]. The dissolution of the structured interface layer upon water uptake is also a challenge when using ionic liquids as lubricants due to higher friction [112]. In contrast to many reports showing that even small amounts of water lead to a distortion of the structured interface layer, Cheng et al. [105] note that the presence of water is necessary for the evolution of a layered structure at the interface. They also investigated [RMIm][TFSI] ionic liquids in contact with mica and found that the force-separation curves were not indicative of a layered interface structure when using carefully dried ionic liquids, as is shown in Figure 14a,b. The force curve was found to be continuous, with a repulsive force detected from approximately 12 Å away from the surface, which reveals that only an unstructured interface layer exists. Only after exposing the ionic liquid to humid ambient conditions could the characteristic instabilities in the force curve related to layering be observed (see Figure 14c,d). It was concluded that water or other small molecules are necessary for a charging of the mica surface via the dissolution of K+ ions, inducing a structuring of the ionic liquid at the interface. In additionally wetted ionic liquids, layering was also present, but the jumps between the innermost layer and the surface were larger, suggesting that water forms an adsorption layer on the mica surface, altering the interface structure [105].
The effect of higher amounts of water on the interface structure was examined by Cui et al. [114] using 1-ethyl-3-methylimidazolium trifluoromethylsulfonate ([EMIm][TfO]) in contact with Au(111). They found that the number of layers detected by the force-separation curves decreased with increasing water content. For the pure ionic liquid, five layers were seen, whereas after the addition of 50 vol% of water, only one layer was resolved under cathodic polarization. Assisted by complementary experiments using vibrational spectroscopy, Cui et al. [114] concluded that a "water in IL" to "IL in water" transition takes place at 20–30 vol%.
As a result, the water molecules weaken the interaction between the ions and the multilayer structure at the interface is diluted to a classical Helmholtz-like double layer. Depending on the molecular configuration of the ionic liquids, this transition can also be related to the formation of micelles. Reverting the roles of ionic liquid and water, Cheng et al. [113] investigated the formation of the mica/liquid interface when an ionic liquid, here [C8MIm][Cl], was subsequently added to water. As is shown in Figure 15, force-separation histograms were constructed for different concentrations of the ionic liquid in the mixture. While at a concentration of 1 mM no dense interface structure was present, the jumps in the force curves setting in at 50 mM indicate the formation of a dense hydrophobic molecular surface layer. At higher concentrations, the number and position of the jumps were found to be modified, revealing that at first the formation and adsorption of micellar structures occurs and, finally, a dense multilayer interface evolves. This also impacts the compression of the layer, calculated as force/distance slopes. At low concentrations of the ionic liquid, a value of 3.2 nN/Å is obtained, which corresponds to a dense surface layer of single molecules. This compression is weakened at an intermediate concentration, indicating a mixture of separated molecules and micelles, while at a high concentration, the compression was found to be the highest at 4.4 nN/Å, reflecting the dense micellar layering.
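The compression values quoted above are slopes of force-separation curves. The following is a minimal sketch of how such a slope can be extracted from a measured curve; the data here are synthetic placeholders rather than the curves of [113], and the window boundaries and function names are illustrative assumptions.

```python
# Minimal sketch: estimating the "compression" of an interfacial layer as the
# slope of a force-separation curve inside a chosen window, i.e., a quantity of
# the same kind as the 3.2 nN/A and 4.4 nN/A values discussed above.
# The curve below is synthetic and purely illustrative.
import numpy as np

def layer_compression(separation_nm, force_nN, window):
    """Fit force vs. separation linearly inside `window` (min, max in nm)
    and return the magnitude of the slope in nN per Angstrom."""
    lo, hi = window
    mask = (separation_nm >= lo) & (separation_nm <= hi)
    slope_per_nm, _ = np.polyfit(separation_nm[mask], force_nN[mask], 1)
    return abs(slope_per_nm) / 10.0  # 1 nm = 10 Angstrom

# Synthetic approach curve: a steep repulsive wall close to the surface.
z = np.linspace(0.2, 5.0, 500)              # separation in nm
f = 40.0 * np.exp(-(z - 0.2) / 0.15)        # force in nN (illustrative only)
print(f"compression ~ {layer_compression(z, f, (0.2, 0.5)):.2f} nN/Angstrom")
```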
Addition of Metallic Solvates

As ionic liquids are promising electrolytes for applications containing metallic components, such as batteries or electrodeposition, many AFS studies have been performed that focus on the formation of the interface structure in the presence of the dissolved metals. Endres et al. [115] performed a study of [Py1,4][FAP] on the Au(111) surface, revealing that the addition of LiCl changes the interface drastically. While the AFM tip experienced a repulsive force when approaching the surface in the pure ionic liquid, an attractive force was present in the Li-containing mixture. The addition of Li only influenced the innermost layer of the interface structure, leaving layers further away from the interface unchanged. Using N-butyl-N-methylpyrrolidinium bis(fluorosulfonyl)imide ([Py1,4][FSI]), which intrinsically shows a stronger interaction with the Au(111) surface than [Py1,4][FAP], comparable experiments, with the addition of Na, were performed. Carstens et al. [116] reported that the interfacial multilayer structure is not significantly affected after adding small amounts of Na. However, when adding more than 0.25 M of Na, the innermost layers changed, indicating the local formation of [Na(FSI)3]2−. The impact of a layered electric double layer on the electrochemical performance has also been illustrated by Begić et al. [117], addressing the mechanisms in rechargeable zinc batteries. Zinc dicyanamide salt (Zn(dca)2) was admixed to two ionic liquids, N-butyl-N-methylpyrrolidinium dicyanamide ([C4mpyr][dca]) and 1-ethyl-3-methylimidazolium dicyanamide ([C2mim][dca]), while force-separation curves were recorded as a function of the water content and applied potential. Layering at the interface was readily present in the pure ionic liquids in contact with the HOPG. The admixture of the zinc salts did not lead to a significant dissolution of the multilayer structure; it even increased the number of layers for [C4mpyr][dca]. These results demonstrate that interface ordering can be challenging when an effective mass transport to the surface is needed and that for chemically similar ionic liquids, the electrochemical performance can differ significantly. The subtle interplay between metallic precursors and ionic liquids was elucidated by Borisenko et al. [118], looking at the electrodeposition of Ta and Ga from halides in [Py1,4][TFSA] as well as [Py1,4][FAP] on the Au(111) surface. Using AFS, they concluded that in mixtures of [Py1,4][TFSA] and TaF5, the electric double layer becomes enriched in Ta-containing molecules, thus allowing for their further reduction to metallic Ta directly at the interface. When they tried to reproduce this experiment using [Py1,4][FAP], it was not possible to induce the electrodeposition process, as the Ta species was pushed away from the interface. By changing the precursor to GaCl3, however, the electrodeposition of Ga could be successfully started in both ionic liquids. In order to simulate the interaction between solutes and ionic liquids, Hoffmann et al. [119] developed a mathematical semi-continuum model based on AFS investigations of the interface between Au(111) and [Py1,4][TFSA] with varying admixtures of Ag salt.
As presented in Figure 16, a clear correlation between the Ag concentration and the detected layers at the interface was observable. With increasing Ag concentrations, the widths of the layers at first increase, suggesting the presence of larger AgTFSA complexes within the double layer. Eventually, at concentrations above 500 µM AgTFSA, the ordered structure collapses into a simple double layer. The developed thermodynamic continuum model, which treats the ionic liquid as being composed of hard spheres, could be successfully applied to describe the interfacial structure under the influence of Ag additives and to simulate the force-separation curves [119]. To summarize the investigations of metal additives in different ionic liquids, it can be concluded that, in general, the structured interface layer becomes distorted at high metal concentrations; however, at low and intermediate concentrations, a complex interplay between the metallic species and the ionic liquid takes place, such that each specific combination of additives and liquids must be considered independently.

Towards Three-Dimensional Mapping of the Interface Layer

As the electric double layer not only exhibits a characteristic layered structure in the out-of-plane direction, as illustrated by the AFS examples above, but may also feature in-plane inhomogeneities, three-dimensional mapping is necessary in order to obtain a complete picture of electrochemical interface reactions. Using AFM-based techniques, two approaches to 3D mapping can be distinguished.
Either force-separation curves can be recorded at different lateral positions using AFS, or classical AFM scans can be obtained at different separations between tip and surface, thus revealing slices through the liquid. These experiments have been primarily performed using the dynamic AFM modes, employing amplitude modulation (AM) or frequency modulation (FM), which are well-established techniques for mapping surfaces in liquids with atomic resolution [120]. Dynamic AFM has also been successfully applied to monitor the electric double layer in conventional electrolytes [121]. In order to map a solid-liquid interface by AFM, one must ensure careful sample and tip preparation, an extremely low vibration and noise level, and sufficient thermal stability. Due to these experimental challenges, the number of publications showing 3D data of the interface layer obtained by AFM is small compared to those presenting single-point AFS curves. Using contact-mode AFM, Black et al. [122] performed investigations of defects in the double layer structure between HOPG and [EMIm][TFSI], as shown in Figure 17. They measured 20 force-separation curves at positions separated by 25 nm from each other along a defined line across the HOPG surface (see Figure 17a). The resulting curves were compiled into a map providing a cross-section through the interface layer (see Figure 17b). The layered interface structure can be clearly identified here. It can also be seen that distortions of the structure are present in the liquid above the step edges of the HOPG substrate. In a small region, the ionic layers bend to compensate for the disturbance induced by the topographic defect. This becomes obvious with regard to the force-separation histograms obtained above the basal plane, which show up to five layers (Figure 17c), and above a step edge, which shows only one layer clearly (Figure 17d). To illustrate the interplay between the topography of the solid surface and the ordering of the ionic liquid, a second region of the HOPG surface was investigated by Black et al. [122], which contained no single step edges but a few larger steps, as shown in the topography line scan in Figure 17e. The map of the force-separation curves measured along the line scan shown in Figure 17e reveals the presence of edge dislocations in the ionic liquid interface structure. Those dislocations are present above the basal plane of the substrate and are related to a distortion of the layered structure spreading over 50 nm to 80 nm. This behavior is in correspondence with observations of conventional liquid crystals [123], which also show topological effects that can have an impact on the electrochemical performance when using ionic liquids.

Figure 17. … graphite (HOPG) and [EMIm][TFSI]. (a) Topography of HOPG; (b) force-separation curves measured at 20 points as marked in (a); (c) force-separation histograms obtained on the basal plane and (d) from the step edge; (e) topography line profile with a length of 1 µm; (f) force-separation curves measured along the line in (e) showing different defect regions. Adapted with permission from Black et al. [122]. Copyright 2015, Elsevier.
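The mapping idea of Black et al. amounts to stacking individually recorded force-separation curves into a two-dimensional cross-section. The following is a minimal sketch of that bookkeeping step with synthetic placeholder curves; the resampling grid and function names are illustrative assumptions, not part of the original work.

```python
# Minimal sketch: force-separation curves recorded at successive lateral
# positions are resampled onto a common separation grid and stacked into a 2D
# array (lateral position x separation), which can then be rendered as a
# cross-section through the interfacial layers. The curves are synthetic.
import numpy as np

def compile_cross_section(curves, z_grid):
    """curves: list of (separation, force) arrays, one per lateral position.
    Returns a 2D map with one interpolated force profile per row."""
    rows = []
    for z, f in curves:
        order = np.argsort(z)                     # interpolation needs ascending z
        rows.append(np.interp(z_grid, z[order], f[order]))
    return np.vstack(rows)

z_grid = np.linspace(0.0, 4.0, 200)               # common separation axis (nm)
curves = []
for i in range(20):                               # e.g., 20 lateral positions
    z = np.linspace(0.0, 4.0, 300)
    f = np.cos(2 * np.pi * z / 0.7) * np.exp(-z) + 0.01 * i   # fake layered profile
    curves.append((z, f))

force_map = compile_cross_section(curves, z_grid)  # shape (20, 200)
print(force_map.shape)
```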
To map the solvation layers in ionic liquids, Negami et al. [124] employed frequency modulation (FM)-AFM force mapping, using a quartz tuning fork as a qPlus sensor, which has a higher Q factor than standard cantilevers used for the optical lever technique and thus can also provide high resolution in viscous liquids. On the [Py1,4][FAP]/Au(111) interface, they were able to measure force spectroscopy series by recording frequency shift-separation curves at different positions of the surface. By applying a bias to the substrate, they could monitor the changes to the electrical polarization. They found a characteristic ordered solvation structure detected as oscillations in the frequency shift signal, corresponding to the length of an ion pair. The same FM-AFM technique was applied to the [Py1,4][FAP]/KCl(100) interface by Ichii et al. [125]. In this study, an alkali-halide substrate was employed, as it forms a neutral surface, having the same quantity of anions and cations in the uppermost surface layer. Hence, a different interface layer structure than that on electrified metal electrodes will form. In Figure 18a, a series of frequency shift-distance curves measured along a line of 4 nm is shown as a 2D map. A distinct oscillation in the frequency shift signal with up to five stripes can be identified. This is illustrated by the cross-sections displayed in Figure 18b-e, extracted from the 2D map at four different positions. The distance between the stripes is smaller than the ion pair size of [Py1,4][FAP] and rather corresponds to the lattice constant of KCl. The frequency shift-distance curves exhibit a sawtooth-like pattern with distinct jumps, which is a completely different behavior than that observed by FM-AFM on electrified surfaces, where sinusoidal curves are seen. Hence, Ichii et al. [125] concluded that those jumps are not related to the existence of solvation layers but are caused by a tip-induced dissolution of the KCl surface in the ionic liquid. This example shows that care must be taken when interpreting force-separation curves and that not all observed step-like features are indicative of the existence of interface layers.
A further approach to image the interface structure was followed using AM-AFM. This technique has been successfully applied to image the HOPG/PAN or HOPG/[EMIm][TFSI] interface [126,127]. Two-dimensional maps of the in-plane structure of the interface have been obtained with a high resolution comparable to STM, and a laterally heterogeneous interface layer has been identified. Ebeling et al. [128] extended the AM-AFM investigations on HOPG/PAN to the third dimension by also performing dynamic force spectroscopy. The phase and amplitude were measured as a function of the separation, revealing a layered interface structure. Elbourne et al. [47] applied the combination of AM-AFM mapping and spectroscopy to five different ionic liquids: ethylammonium nitrate (EAN), propylammonium nitrate (PAN), ethanolammonium nitrate (EtAN), ethylammonium formate (EAF), and N,N-dimethylethylammonium formate (DMEAF). Phase- and amplitude-separation curves were measured on the mica surface.
Complementary AM-AFM maps of the innermost and near-surface layers have been obtained using cantilevers of different stiffness, thus providing spatially resolved information in three dimensions. The results, shown in Figure 19, provide evidence for a strong influence of the molecular configuration of the cation on the interface structure. It can be seen that cations with short alkyl lengths form a superstructure related to the ionic ordering of the mica surface, providing a template for structuring the innermost layer of the ionic liquid (see center column). For DMEAF, which has an additional methyl group in the cation, the map of the innermost layer shows a higher irregularity, indicating that steric effects prevent regular ordering. The near-surface layer shows larger and more disordered features than the innermost layer for all investigated liquids (see right column in Figure 19). The formation of domains with dimensions larger than a single ion pair can be observed as differences in the measured topography. Here, a high topography value corresponds to a high local stiffness of the liquid, while dark areas correspond to regions where the molecules are mobile, such that the damping of the oscillation of the tip-cantilever system is smaller. Comparing the maps obtained in the different ionic liquids, distinct differences in the nanostructure can be identified, confirming that the specific configuration of both the anion and cation determines the morphology of the interface layer.

Conclusions

Ordered interfacial nanostructures are a common phenomenon between an electrified surface and an ionic liquid. While they were first predicted from indirect experimental evidence and simulations many years ago, the use of atomic force microscopy in spectroscopy mode provides a direct image of the details of the structuring during electrochemical operation. The AFS results available from the literature so far demonstrate that most of the investigated ionic liquids form a layered structure at the interface, thus forming an electric double layer beyond the classical Gouy-Chapman-Stern theory. The specific arrangement at the interface can be quite complex, which relates to the interplay between the Coulomb forces and steric effects caused by the molecular structure of the ions. Impurities and additives, such as water or metallic ions, also have a share in the formation of the nanostructure. At low concentrations of additives, a strengthening of the nanostructure has often been observed, while at higher concentrations, a transition to a conventional solute/solvent mixture occurs. However, the details of the interface formation depend significantly on the specific ionic liquid in use, which on the one hand is a challenge when aiming to derive a general picture of the solid/ionic liquid interface, but on the other allows adjustment of the desired interface properties by designing customized ionic liquids.

Author Contributions: The manuscript was written through contributions of all authors.
All authors have given approval to the final version of the manuscript. Funding: This research received no external funding.
Skyline query under multidimensional incomplete data based on classification tree

A method for skyline query of multidimensional incomplete data based on a classification tree has been proposed to address the problem of a large amount of useless data in existing skyline queries with multidimensional incomplete data, which leads to low query efficiency and algorithm performance. This method consists of two main parts. The first part is the proposed incomplete data weighted classification tree algorithm. In the first part, an incomplete data weighted classification tree is proposed, and the incomplete data set is classified using this tree. The data classified in the first part serves as the basis for the second step of the query. The second part proposes a skyline query algorithm for multidimensional incomplete data. The concept of optimal virtual points is introduced, effectively reducing the number of comparisons of a large amount of data, thereby improving the query efficiency for incomplete data. Theoretical research and experimental analysis have shown that the proposed method performs skyline queries for multidimensional incomplete data well, with high query efficiency and accuracy.

Introduction

With the rapid growth of data and the swift expansion of data volume, skyline queries have gained widespread application and development. The application of skyline queries is evident in numerous fields, including data mining, Geographic Information Systems (GIS), spatial databases, location services, and multi-criteria decision-making. Skyline queries represent a typical multi-objective optimization problem. Currently, both domestically and internationally, research on skyline query technologies primarily focuses on skyline queries in a complete data environment. Examples include probabilistic skyline queries [1], skyline queries for massive datasets [2], skyline queries in mobile edge computing [3], techniques for skyline queries in obstacle spaces [4], clustering design in spatial databases [5], LSM index storage technology in databases [6], a k-dominant skyline query algorithm for dynamic datasets [7], privacy-preserving skyline queries [8], and k-dominant spatial skyline queries [9], among others. The technology of skyline queries continues to advance steadily.
Due to the prevalence of incomplete data in practical scenarios, where data is often characterized by missing values, methods for skyline queries on incomplete data hold significant importance in areas such as multi-objective optimization and location services. Currently, existing approaches for skyline queries on incomplete data typically involve direct processing of the data. However, these methods exhibit low efficiency in data classification, leading to reduced accuracy in results. Additionally, during the query process, dataset redundancy can compromise the overall performance of the algorithm. Addressing these shortcomings, this paper proposes a skyline query method for multidimensional incomplete data based on classification trees. The primary contributions of this paper are as follows:

1) To address issues in skyline queries with incomplete data, such as data redundancy, low data classification efficiency, and slow classification speed, this paper introduces a method based on incomplete data weighted classification trees. Integrating the characteristics of incomplete data with traditional tree structures, the method efficiently splits dimensions and adds weighted labels. Compared to existing classifications, this method achieves efficient classification with just three layers of tree structure. It separates the multiple dimensions of incomplete data, storing them in intermediate nodes. Each dimension is assigned a weight value based on whether its value is missing. Classification is completed through horizontal indexing of the leaf nodes, guided by the varying weight values. The proposed algorithm resolves the challenges of low data classification efficiency and slow speed, enhancing overall classification and skyline query performance.

2) To address challenges in existing skyline queries for multidimensional incomplete data, characterized by the presence of substantial useless data and data redundancy leading to low query efficiency, this paper introduces a multidimensional incomplete data skyline query algorithm. Building on an effective classification foundation, the algorithm introduces the concept of optimal virtual points for skyline queries on incomplete data. These optimal virtual points are composed of the maximum values for each dimension from the local skyline points and are labeled with the source points of each dimension value. Compared to prior research, optimal virtual points rapidly identify dominating points, significantly reducing the number of comparisons and improving query efficiency. The algorithm introduces the optimal virtual points into the different classifications, comparing them with the local skyline points. If local skyline points dominated by an optimal virtual point are still dominated by the source points of that optimal virtual point, these points form shadow points. Local skyline points not dominated by the optimal virtual points constitute the candidate skyline points. Further dominance comparisons identify the global skyline points. Optimal virtual points efficiently filter out a substantial amount of useless data, reducing the number of tuple comparisons. The proposed algorithm significantly enhances the performance and efficiency of skyline queries, avoiding the high time consumption and low efficiency associated with extensive redundant data.
The structure of the remaining content in this paper is outlined as follows. In "Basic definition", the definition of incomplete data skyline queries is provided. "Basic definition" also introduces incomplete data weighted classification trees. "Incomplete data weighted classification tree classification algorithm" proposes a classification algorithm for multidimensional incomplete data based on incomplete data weighted classification trees, utilizing them for the classification of multidimensional incomplete data. "Multidimensional incomplete data skyline query" further presents a skyline query algorithm for multidimensional incomplete data. Experimental analysis is presented in "Experimental analysis".

Related work

Skyline queries are widely researched in various fields. Reference [10] proposes a top-k skyline query algorithm based on user preferences and data partitioning. This algorithm, implemented in MapReduce, addresses the issue of low efficiency in top-k skyline queries for large datasets by leveraging user preferences and data partitioning. It effectively enhances query efficiency and exhibits good scalability. However, its drawback is that it applies to static datasets and is not suitable for dynamic dataset queries. Reference [11] introduces a distributed skyline query algorithm (DSQ) designed for handling skyline queries in a distributed environment. The algorithm employs data block filtering and dominance graph-based data point filtering to eliminate redundant data tuples. Through a rotation-based scheduling plan, skyline results can be obtained in parallel without creating bottleneck nodes, thus improving query processing efficiency. Nevertheless, a drawback of this algorithm is the need to maintain a considerable number of spatial structures, resulting in substantial spatial overhead. Reference [12] introduces the top-k Manhattan space skyline query problem concerning monotonic scoring functions. This function quantifies the fitting degree of each point in set P under the L1 distance for a given query. Reference [13] presents effective algorithms for continuous skyline queries on large datasets using the MapReduce framework. The main idea of the algorithm is to calculate the skyline query only once at the initial position, and then update the result as the query point moves, avoiding recomputation from scratch each time. This approach significantly improves the algorithm's efficiency. In Reference [14], numerous applications necessitate the analysis of data evolving over time, leading to the proposal of a novel algorithm, SLS, to evaluate skyline queries on data streams with a low-cardinality domain. Reference [15] has designed three efficient algorithms, namely IMSS, OIMSS, and PMSS, which combine the advantages of various techniques, including distance-based priority scanning, virtual point testing, and intelligent pruning heuristics. In particular, the PMSS algorithm integrates some parallel programming techniques. This algorithm offers the flexibility to process queries within subsets of the available dimensions, demonstrating significant flexibility. Additionally, it provides a set of pruning rules capable of eliminating redundant data objects, thereby enhancing query performance. However, this algorithm is not suitable for distributed systems and data stream environments. Reference [16] introduces an effective algorithm (SSQ) for processing subspace skyline queries using MapReduce. This algorithm can derive meaningful subsets of points from the complete skyline point set of any subspace. Reference [17]
introduces two new skyline queries to identify information-rich and concise skyline sets, namely the minimal skyline query and the extended minimal skyline query. The algorithm's approximate set offers a subset of potentially large object sets containing the best-matching or most interesting objects. As the approximate set captures the primary distribution of the skyline, its semantics are user-friendly, providing a better basis for decision-making. However, a limitation of the algorithm lies in the challenge of determining the distance threshold. Additionally, the algorithm has certain shortcomings in querying the minimal skyline set across various environments.

Research on the skyline query problem with incomplete data is limited both domestically and internationally. The concept of incomplete data skyline queries was first introduced by Khalefa [18]. Currently, there is extensive research on incomplete data skyline queries in various environments. Examples include probability-based incomplete data skyline queries [19,20], skyline queries for incomplete data in cloud environments [21], skyline queries in incomplete dynamic databases [22], and skyline preference queries for incomplete data [23]. The technology for incomplete data skyline queries continues to advance. Reference [24] investigates the use of crowdsourcing for skyline queries on incomplete data. A novel query framework, termed Bayesian Clusters, is proposed, which models data correlation using Bayesian networks. The algorithm utilizes a typical c-table model on incomplete data to represent objects. The paper introduces an effective task selection strategy that balances budget and latency constraints. Specifically, calculating the probability for each object to serve as an answer object is as challenging as the #SAT problem. To address this, an adaptive DPLL algorithm is proposed to expedite computations. However, this algorithm still has certain limitations in optimizing the quality of incomplete data queries. In Reference [25], a new table-scan-based TSI algorithm is introduced to handle incomplete data on massive datasets. The TSI algorithm addresses the issues of non-transitivity and cyclic dominance in two phases. In the first phase, TSI calculates candidates through continuous scanning of the table, directly discarding tuples dominated by others. In the second phase, TSI retrieves candidates through another continuous scan, incorporating pruning operations to reduce execution costs. Reference [26] introduces an algorithm for incomplete dynamic skyline queries, aiming to identify the skyline on dynamic and incomplete databases. The algorithm involves pruning and selecting superior local skylines. The pruning process attempts to recognize new skylines using derived skylines before performing insert or update operations on the database. The algorithm accelerates query speed by eliminating dominated tuples. In Reference [27], a novel definition of the skyline is proposed. It leverages a probability model on incomplete data, where each point has a probability of appearing in the skyline. Specifically, it returns the K points with the highest skyline probabilities. This algorithm can offer users valuable reference decisions. However, there may be certain drawbacks in terms of accuracy. Reference [28] presents a novel privacy-preserving aggregate reverse skyline query (PPARS) scheme, ensuring complete query privacy simultaneously. Specifically, it transforms the ARS query problem into a combination of set-membership testing and logical expressions. It employs
prefix encoding, Bloom filter techniques, and fully homomorphic encryption to operate on the transformed logical expressions, obtaining non-disclosing query requests, query results, and access patterns. In Reference [29], a crowdsourcing algorithm for a single dataset is proposed to filter datasets containing unknown attributes. Subsequently, a global hierarchy-preference-tree index is established based on the known attributes of both incomplete and complete datasets. The efficiency of the query is enhanced by filtering linked tuples based on global preference scores and the results of each crowdsourcing round.

In the realm of incomplete data classification, Reference [30] introduces a general classification model tailored for incomplete data. Existing classification methods are effectively integrated. Initially, an attribute subset is selected based on information gain metrics, and a complete view is generated from incomplete data. Subsequently, these selected views are utilized to obtain multiple base classifiers. Finally, the base classifiers are efficiently combined with decision trees to form the ultimate classifier.

In Reference [31], a minority oversampling technique based on multiple inferences is proposed to address both imbalanced and incomplete data classification. Most instances are computed only once, while minority instances undergo oversampling using multiple distinct computations without directly manipulating their observed values. Consequently, compared to traditional approaches, minority instances exhibit greater diversity with minimal data distortion.

Basic definition

The data set considered in this paper is a D-dimensional incomplete data set.

Definition 2. Skyline query [1]. Given a data set O, a skyline query returns a data object set R such that no data object in R is dominated by any other data object in O.

Definition 3. Incomplete data domination [26]. Given a D-dimensional incomplete data set O and two points p and q in the D-dimensional space, let M be the set of dimensions on which both p and q have non-missing values, with |M| ≤ D. If ∀s_i ∈ M, p.s_i ≥ q.s_i and ∃s_i ∈ M, p.s_i > q.s_i, then p dominates q under incomplete data; the comparison involves only the common dimensions.

Definition 4. Incomplete data skyline query [26]. Assume a set of incomplete data points O, where each point p = {u_1, u_2, ..., u_d} has at least one known dimension u_i. The incomplete data skyline query returns O_sky ⊂ O such that each point p ∈ O_sky is not dominated by any other point in O, and every q ∈ O − O_sky is dominated by some point in O.

Multidimensional incomplete data classification algorithm based on incomplete data weighted classification tree

In order to handle skyline queries under multidimensional incomplete data, it is necessary to classify the dataset before conducting the skyline query. In this section, considering the slow classification speed associated with the characteristics of incomplete data, we propose the incomplete data weighted classification tree and further introduce the incomplete data weighted classification tree classification algorithm.
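Before turning to the classification tree, the dominance relation of Definitions 3 and 4 can be made concrete with a short sketch. The snippet below is illustrative only: missing values are represented as None, larger values are assumed to be better (following Definition 3), and the brute-force reference skyline is included purely as a baseline; none of the names come from the paper.

```python
# A minimal sketch of Definitions 3 and 4: dominance for incomplete data is
# evaluated only on the dimensions both tuples have filled in (larger is better,
# following the paper's convention). Missing values are represented as None.
def dominates(p, q):
    """True if p dominates q on their common (non-missing) dimensions."""
    common = [(a, b) for a, b in zip(p, q) if a is not None and b is not None]
    if not common:
        return False                       # no common dimension: incomparable
    return all(a >= b for a, b in common) and any(a > b for a, b in common)

def naive_incomplete_skyline(points):
    """Brute-force reference: keep every point not dominated by another point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

data = [(None, 7, 8, None), (2, None, None, 6), (1, 7, 9, None)]
print(naive_incomplete_skyline(data))
```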
Basic definition

The incomplete data weighted classification tree is a tree-like structure similar to a B+ tree. It can perform missing-dimension judgments and classification operations on multidimensional incomplete data. Based on the characteristics and properties of traditional binary trees and multidimensional incomplete data, and in order to classify multidimensional incomplete data efficiently and accelerate multidimensional incomplete data skyline queries, a novel multidimensional incomplete data weighted classification tree is proposed. The definition is as follows.

Definition 5. The root node N of the incomplete data weighted classification tree T stores the dimensional values of the multidimensional incomplete data tuples to be classified. Considering that multidimensional incomplete data has values in multiple dimensions, these dimensions are written sequentially, from the first dimension to the nth dimension and from left to right, into the nodes of the second layer. Based on the missing status of each dimension, weights are assigned: the weight is 0 for missing values and 1 for non-missing values. Subsequently, the data is stored in the leaf nodes of the third layer and exported through a horizontal index, classifying it based on the different dimension weights. After classification, the data is sequentially imported into the respective classes. Finally, clearing the original tree's stored data completes the classification of the multidimensional incomplete data.

The incomplete data weighted classification tree T comprises leaf nodes, weights (w), internal nodes, and the root node N. After importing the incomplete data set O into the constructed weighted classification tree model, a classification operation is performed. The data tuples are sequentially stored in the root node N of the weighted classification tree. The classification of incomplete data proceeds from the top to the bottom of the tree. The root node holds the tuples from the dataset to be classified. Dimension attribute values are stored in internal nodes from left to right based on the different data dimensions. Completed judgment operations on the dimension attribute values are stored in the leaf nodes. The judgment process involves entering the left child node with a weight of 0 if the dimension attribute value in the internal node is missing. If the dimension attribute value is not missing, the right child node is entered with a weight of 1. After completing these operations, missing-value judgments are performed on the dimension attribute values of the internal nodes from left to right and stored in the leaf nodes. Further classification is then performed based on the different weights of each dimension, and the data is exported to the corresponding bucket through the horizontal index. After one round of classification, the dimension attribute values in the leaf nodes are cleared, and subsequent data classification operations are sequentially performed, completing the classification of the entire multidimensional incomplete data set. The construction status of the horizontal index is depicted in Fig. 1.
Each leaf node contains five attributes: an identity id, a weight, the address of this node, the address of the next node, and the numerical value of the dimension. After construction, the identity ids of the leaf nodes follow a known order, and the address of each node is known. Additionally, an array maintains the relationship between leaf node ids and addresses. With this array, the address of the next node can be found for each leaf node sequentially, so that each leaf node in turn completes the construction of the horizontal index. The incomplete data weighted classification tree model is shown in Fig. 1.

Incomplete data weighted classification tree classification algorithm

The classification algorithm for incomplete data weighted classification trees mainly consists of two stages. The first stage is the modeling stage of the incomplete data weighted classification tree. The second stage is the classification stage, in which multidimensional incomplete datasets are classified with the incomplete data weighted classification tree model.

Stage 1: Incomplete data weighted classification tree modeling.

The generation of the incomplete data weighted classification tree is a divide-and-conquer, top-down process. In this study, the dataset is an incomplete dataset O, and a training set (samples) is selected from this dataset to build the incomplete data weighted classification tree T. At this stage, the training set is randomly selected. This training set comprises 70% of the incomplete dataset. It encompasses all classification scenarios of data tuples to ensure the integrity and accuracy of the created training tree. The tree's root node stores data tuples and branches into intermediate nodes. Each branch sequentially stores the dimension attributes of the tuples. Intermediate nodes of the classification tree branch based on the different weight values. The left branch of an intermediate node is assigned a weight of 0, and the right branch is assigned a weight of 1. Following these steps, all the leaf nodes are formed. Horizontal indexing is established in the leaf nodes, creating connections from the leftmost leaf node to the rightmost leaf node. This operation establishes spatial relationships for each leaf node. When all dimension attribute values and training set samples have been traversed, the construction of the incomplete data weighted classification tree model is complete. In this stage, the dimension missingness of the dataset is used as the test attribute. Building upon the modeling of the first stage, the final result is the incomplete data weighted classification tree, which can be employed to classify incomplete datasets.

Stage 2: Classify the data set.
Based on the constructed incomplete data weighted classification tree, the d-dimensional incomplete dataset O is input into the tree for classification. The root node of this tree forms internal nodes based on the different dimensions. Internal nodes branch according to whether the attribute value of that dimension is missing, assigning different weights to the left and right branches. Finally, classification is achieved through the horizontal index, which reflects the varying weights of each dimension. After classifying a data tuple, the classified tuple is output to a bucket using the horizontal index, and then all leaf nodes are emptied. This process is repeated until all data is fully classified. The number of internal nodes in the incomplete data weighted classification tree, denoted as n, equals the number of dimensions of the dataset, meaning the incomplete data set can be divided into at most 2^n classes.

Theorem 1. Given an incomplete data set O, input the dataset into the incomplete data weighted classification tree. Based on missing-value judgments and weight assignment for each dimension of the tuples, tuples with equal dimension weights are output to the same bucket using the horizontal index, thereby completing the data classification.

Proof: For an incomplete dataset O, assume that the missing dimensions of a tuple are denoted by "-". A tuple with missing dimensions has the form o = {o_1, -, o_2, ..., o_i, -, ..., o_d}. Each dimension's data will first be assigned to the second layer. Starting from the first dimension i, if dimension i is missing, then i enters node_leftChild with weight 0; otherwise, i enters node_rightChild with weight 1, where node_leftChild is the left child node and node_rightChild is the right child node. Then the second dimension j is judged: if dimension j is missing, then j enters node_leftChild with weight 0; otherwise, j enters node_rightChild with weight 1. Continuing in this manner until all dimensions are evaluated, the data is output to the corresponding bucket through the horizontal index based on the classification. The other tuples in the dataset are then evaluated sequentially, performing the same classification operation. Ultimately, the classification of all data is completed.

The sample dataset is illustrated in Fig. 2, representing an incomplete data set with 5 dimensions. Using the weighted classification tree for classifying data tuples, the tuple data is entered into the tree and stored in the first-layer node. Each dimension's data is initially placed in the second layer of the classification tree, followed by evaluating whether the dimension data is missing. If a dimension is missing, it is stored in a leaf node with a weight of 0; if not, it is stored in a leaf node with a weight of 1. This process continues until all dimensions of the data are classified. Subsequently, based on the differing weights of each dimension and utilizing the horizontal index, the data is exported to distinct buckets. Repeating the aforementioned steps completes the classification of the entire sample dataset. The dataset is ultimately divided into five classes, labeled C1 to C5. Figure 3 shows the classified sample data sets.
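The net effect of this bucketing step can be sketched compactly: each tuple is mapped to its per-dimension weight vector (0 for missing, 1 for present), and tuples with identical weight vectors land in the same bucket, giving at most 2^n buckets for n dimensions. The snippet below is a simplified illustration of that outcome, not of the tree and horizontal-index machinery itself; missing values are represented as None and all names are illustrative.

```python
# Minimal sketch of the classification outcome stated in Theorem 1: each tuple
# gets a per-dimension weight (0 = missing, 1 = present), and tuples with
# identical weight vectors are routed to the same bucket.
from collections import defaultdict

def weight_vector(tuple_):
    return tuple(0 if v is None else 1 for v in tuple_)

def classify(dataset):
    buckets = defaultdict(list)            # one bucket per distinct weight vector
    for t in dataset:
        buckets[weight_vector(t)].append(t)
    return buckets

sample = [
    (None, 7, 8, None, 1),
    (2, None, None, 6, 3),
    (4, 9, 1, None, None),
    (None, 5, 2, None, 8),
]
for w, members in classify(sample).items():
    print(w, members)
```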
The basic idea of the algorithm for the incomplete data weighted classification tree, as stated in Theorem 1, is as follows. Firstly, the sample training set "samples" is used to build the incomplete data weighted classification tree; the incomplete dataset is then classified using the constructed tree, as described above.

Multidimensional incomplete data skyline query

This section builds on the classification of incomplete data using incomplete data weighted classification trees described above and performs a skyline query on the classified data.

Skyline query algorithm for multidimensional incomplete data

Definition 6. After incomplete data classification, any class can be represented by a 0-1 vector, where a dimension with missing values is represented by 0 and a dimension with non-missing values is represented by 1. Suppose there are two tuples, P (-, 7, 8, -) and Q (2, -, -, 6), which are divided into classes Ci and Cj. According to Definition 6, two vectors, Ci (0,1,1,0) and Cj (1,0,0,1), are obtained. The two vectors are combined with a bitwise AND operation. If the result is not all 0, the tuples in the two classes can be compared; if the result is all 0, the tuples in the two classes cannot be compared.

Definition 7. Virtual point [18]. If the local skyline point p in bucket Ni dominates a point q in bucket Nj, then point p can be used as a virtual point. The main idea is that the virtual point can reduce the number of comparisons between tuples by exploiting the dominance relationship.

Definition 8. Optimal virtual point. The role of the optimal virtual point is to improve execution efficiency. The optimal virtual point E = (e1, e2, ..., en) is composed of the optimal value of each dimension over the local skyline points in a bucket, where a missing dimension is represented by "-" and ei represents the optimal attribute value of dimension i among the tuples in the local skyline.

Definition 9. Shadow skyline [18]. When local skyline points are dominated by the optimal virtual point and simultaneously dominated by a source point of the optimal virtual point, the dominated points within the group are removed, forming the shadow skyline points. During the selection of global skyline points, comparison filtering is applied to reduce the number of comparisons, preventing redundant data comparisons.

Theorem 2. If there is an optimal virtual point and the bitwise AND of the class vectors of the two buckets is not all 0, then the optimal virtual point is introduced into the bucket. If the dominance relationship of ∃Vi dominating ∀qj ∈ Nj is satisfied, the number of tuples will be reduced.

Proof: The optimal virtual point is generated by taking the maximum value of each dimension over the local skyline tuples in each bucket, forming a new data tuple representing the optimal virtual point. The optimal virtual point is introduced into other buckets for dominance comparison; if dominance is observed, a comparison is made with the data tuples composing the optimal virtual point. If the dominance relationship persists, the dominated data is inserted into the shadow points. Through these operations, a significant reduction in data volume is achieved, consequently decreasing the number of comparisons and improving the efficiency of the algorithm.
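Definitions 6 and 8 can be illustrated directly. The sketch below is an assumption-laden illustration rather than the paper's implementation: classes are represented by their 0-1 missingness vectors, comparability is the bitwise AND test of Definition 6, and the optimal virtual point takes the per-dimension maximum over a bucket's local skyline while remembering the source points of each dimension; function names are illustrative.

```python
# Minimal sketch of Definitions 6 and 8: class comparability via bitwise AND of
# 0-1 vectors, and the optimal virtual point of a bucket as the per-dimension
# maximum over its local skyline, with the source points of each dimension.
def comparable(class_vec_a, class_vec_b):
    return any(a & b for a, b in zip(class_vec_a, class_vec_b))

def optimal_virtual_point(local_skyline):
    """Return (virtual_point, sources): per-dimension maxima over the local
    skyline and the points that supplied them. Missing dimensions stay None."""
    dims = len(local_skyline[0])
    virtual, sources = [], []
    for i in range(dims):
        known = [p for p in local_skyline if p[i] is not None]
        if not known:
            virtual.append(None)
            sources.append([])
        else:
            best = max(p[i] for p in known)
            virtual.append(best)
            sources.append([p for p in known if p[i] == best])
    return tuple(virtual), sources

print(comparable((0, 1, 1, 0), (1, 0, 0, 1)))   # False: no common dimension (Definition 6 example)
bucket = [(None, 7, 8, None), (None, 9, 3, None)]
print(optimal_virtual_point(bucket))
```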
Figure 4 shows the calculation process of the optimal virtual point. First, a skyline query is conducted in each bucket, eliminating the points dominated within the bucket; the remaining points constitute the local skyline points of the bucket. Following the dominance principle, the dominated tuples in each bucket are removed, and the local skyline for each bucket is obtained. Subsequently, the optimal virtual point is selected for each bucket, and the dimension indices of each optimal virtual point indicate the source points of its data. The optimal virtual point is a tuple consisting of the optimal values of each dimension of the local skyline points in the bucket; missing dimensions are denoted by "-". The source points of the first dimension of point V_1 are points O_13 and O_38 in the bucket, and these two source points provide the value of the first dimension of V_1. The source points of the second dimension of V_1 are points O_34 and O_38 in the bucket, and these two source points provide the value of the second dimension of V_1. The source points of the third dimension of V_1 are points O_14 and O_8 in the bucket, and these two source points provide the value of the third dimension of V_1. The source point of the fourth dimension of V_1 is point O_34 in the bucket, and this source point provides the value of the fourth dimension of V_1. The formation and source points of the other optimal virtual points are obtained in turn. Figure 5 shows the skyline query process for the classified sample dataset. First, the local skyline of each bucket is obtained. The optimal virtual point of each bucket is then selected according to the definition of the optimal virtual point. The optimal virtual points are introduced into the local skylines of the other buckets, and the shadow points and candidate skyline points are obtained. Take O_13 as an example: the optimal virtual point V_5 dominates O_13, and the constituent points of V_5 are then compared with O_13 for dominance. If O_13 is still dominated, O_13 is written into the shadow points. The candidate skyline points and the shadow points are sorted out by performing the above operations on the local skyline points in turn. As shown in the figure, two candidate skyline points, O_8 and O_10, are selected. The candidate skyline points are then compared with the shadow points, and the global skyline point O_8 is obtained because the candidate skyline point O_8 is not dominated by any shadow point. The main idea of the algorithm proposed in this section is as follows. The data tuples in each bucket are filtered by dominance comparison to find the local skyline. The optimal virtual point is a point that consists of the maximum value of each dimension. The optimal virtual point is introduced into the buckets whose classes are comparable with the class to which it belongs. If the optimal virtual point dominates a point in such a bucket, the dominated point is compared with the points from which the optimal virtual point originated. If the dominance relationship still holds, the dominated point of the local skyline is inserted into the shadow skyline. The candidate skyline points are composed of the local skyline points in each bucket that are not dominated by the optimal virtual points. Since the candidate skyline points may be dominated by the shadow
skyline, the shadow skyline points are compared with the candidate skyline points, and a candidate skyline point is deleted if it is dominated by any shadow skyline point. The final global skyline points are thus obtained. Theorem 3. Any global skyline point P will be output by the multi-dimensional incomplete data skyline query algorithm (Algorithm 2). Proof: Suppose there exists a global skyline point P that is not output by Algorithm 2. In the algorithm, P is discarded only if it is dominated. Therefore, there are three cases: (1) P is dominated by points of the same class. Since P is assumed to be a global skyline point, according to the definition of skyline points, points of the same class cannot dominate P; otherwise, the assumption is contradicted. Therefore, P is not discarded and proceeds to the next step of the algorithm. (2) P is dominated by the optimal virtual point and its source point, or by shadow points. According to the assumption, P is a global skyline point. By the skyline definition, comparing P with the source points and shadow points on the common dimensions cannot result in P being dominated; otherwise, the assumption is contradicted. Therefore, P is not discarded and proceeds to the next step of the algorithm. (3) P is dominated by other candidate skyline points. According to the dominance relation and the skyline definition, P is a global skyline point, so P cannot be dominated by candidate skyline points; otherwise, the assumption is contradicted. Therefore, P is not discarded, and P is output. In conclusion, if there exists a global skyline point, it will definitely be output by the algorithm. Theorem 4. Any point output by the multi-dimensional incomplete data skyline query algorithm (Algorithm 2) is a global skyline point. Proof: Suppose Algorithm 2 outputs a point P, but there exists a point Q that dominates P. There are three cases for Q: (1) Q and P belong to the same class. Because Q dominates P, P is discarded. Therefore, the algorithm will not output point P, contradicting the assumption. (2) Q exists in the shadow points, or Q exists in the source points of the optimal virtual point that dominates P. If the algorithm outputs P, then P enters the candidate skyline points. Because Q dominates P, P is discarded. Therefore, the algorithm will not output point P, contradicting the assumption. (3) Q exists in the candidate skyline points. Because the algorithm outputs P, P also exists in the candidate skyline. Since Q dominates P, P is discarded. Therefore, the algorithm will not output point P, contradicting the assumption. In conclusion, any point output by the algorithm is a global skyline point. In summary, according to the above theorems, every global skyline point is output by the algorithm and every output point is a global skyline point. Therefore, the algorithm correctly identifies global skyline points and is convergent.
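The interplay of local skylines, optimal virtual points, shadow points, and candidate points can be summarized in a short sketch. The Python fragment below (reusing the hypothetical dominates() helper from the earlier sketch) strings the steps together; it is a simplified illustration of the procedure described above, not the authors' Algorithm 2, and it omits the class-comparability test between buckets for brevity.

def local_skyline(bucket):
    # Points of a bucket not dominated by any other point of the same bucket.
    return [p for p in bucket
            if not any(dominates(q, p) for q in bucket if q is not p)]

def optimal_virtual_point(skyline):
    # Per-dimension optimum (here: maximum) over the local skyline points;
    # a dimension missing in the whole class stays missing (None, i.e. "-").
    return tuple(max((v for v in col if v is not None), default=None)
                 for col in zip(*skyline))

def global_skyline(buckets):
    skylines = {k: local_skyline(b) for k, b in buckets.items()}
    virtuals = {k: (optimal_virtual_point(s), s) for k, s in skylines.items() if s}
    candidates, shadows = [], []
    for k, sky in skylines.items():
        for p in sky:
            shadowed = False
            for k2, (v, sources) in virtuals.items():
                if k2 == k or not dominates(v, p):
                    continue
                # second check against the source points of the virtual point
                if any(dominates(s, p) for s in sources):
                    shadowed = True
                    break
            (shadows if shadowed else candidates).append(p)
    # candidate points dominated by a shadow point are removed
    return [c for c in candidates if not any(dominates(s, c) for s in shadows)]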
Based on the above research, we further give a multi-dimensional incomplete data skyline query algorithm, as shown in Algorithm 2: Algorithm 2 Multi-dimensional incomplete data skyline query algorithm The multi-dimensional incomplete data skyline query algorithm first inserts each class into a newly created bucket. A skyline query is performed within each bucket according to the dominance principle: any tuple that is not dominated belongs to the local skyline, and dominated tuples are deleted. The optimal virtual points are then extracted from each bucket, i.e., the maximum value of each dimension of the local skyline points in each bucket is extracted to form the optimal virtual point. All optimal virtual points are then introduced into buckets other than their own for skyline computation. The optimal virtual points are introduced to reduce the number of comparisons and hence the complexity of the algorithm. If a local skyline point within a bucket is dominated by an optimal virtual point, it is further compared with the source points of that optimal virtual point. If it is still dominated by a source point, it is inserted into the shadow skyline; points that are not dominated are inserted into the candidate skyline. Finally, the candidate skyline points in each bucket are compared with the shadow skyline of the non-dominated buckets, and if a candidate skyline point is dominated, it is removed. Based on the dominance relationship, the candidate skylines in each bucket are then compared, and the dominated candidate skyline points are removed; this yields the global skyline points. Assuming there are n tuples with d-dimensional data, the time complexity of lines 1-5 of Algorithm 2 is O(2^d * n), lines 6-13 have a time complexity of O(d * 2^d * log n), the time complexity of lines 14-15 is O(d * 4^d * log n), and lines 16-27 have a time complexity of O(d * 2^d * n). Assuming there are z shadow points and w candidate skyline points, the time complexity of lines 28-32 is O(z * d * w) and that of lines 33-36 is O(d * w * log w). Based on the above discussion, integrating the two stages, the weighted classification tree for incomplete data and the skyline query algorithm, yields the complete multi-dimensional incomplete data skyline query algorithm (BTIS) proposed in this paper. This method can efficiently perform skyline queries on multi-dimensional incomplete data. Experimental analysis In this paper, we propose a skyline query algorithm for multi-dimensional incomplete data. The algorithm first classifies the incomplete data using the weighted classification tree and then performs a skyline query on the classified incomplete data. In this section, experiments are designed to evaluate the performance of the algorithm. The experimental data encompass a real dataset, a synthetic dataset, and a road network dataset. The real dataset is derived from the MovieLens 1M dataset obtained from the GroupLens website, which features multidimensional attributes. Seventeen attributes were selected from the MovieLens dataset, and the data completeness is 95%; to introduce randomness, further values were randomly deleted so that the missing rate reaches 10%. The synthetic dataset was generated using standard data synthesis tools, creating a complete 1M, 17-dimensional dataset; some attribute values were subsequently removed at random to achieve a 10% missing rate. The attributes of the synthetic dataset are independent and follow a normal distribution. The road network dataset is based on a partial road network dataset from North California, adjusted to 17 dimensions and a 10% missing rate. We compared the BTIS algorithm in this paper with the Sky-iDS algorithm [20], the SIDS algorithm [22], the △skyline algorithm [33], and the PTKD algorithm [32] in terms of result accuracy, dataset missing rate, dataset dimensions, and dataset size. Comparative analysis of algorithms In this section, we present a comparative analysis across four experiments.
Experiment 1 compared the execution time of the BTIS, SIDS, Sky-iDS, △skyline and PTKD algorithms under different dataset sizes. The dimensionality of the three datasets in the experiment was set to 17, with dataset sizes ranging from 0 to 900K. The relationship between dataset size and CPU execution time is illustrated in Figs. 6, 7, and 8. Figures 6 to 8 show that the BTIS algorithm takes the least amount of time to execute. As the data size grows, the amount of sorting and the number of tuple comparisons increase, resulting in a longer running time for SIDS. Due to the RankList structure in the Sky-iDS algorithm, the number of tuples in the structure and the number of sorting operations also increase significantly as the dataset grows, which increases the execution time of that algorithm. The △skyline algorithm performs a one-by-one comparison when removing useless data, and the PTKD algorithm involves a complex computation process in skyline queries, resulting in increased runtime as the data scale grows. The BTIS algorithm removes a large amount of useless data from the incomplete dataset using the weighted classification tree and the optimal virtual points; it therefore reduces the number of comparisons and has a lower running time for the same data size. The CPU execution time of the BTIS algorithm increases slowly as the dataset grows, so changes in dataset size have only a small impact on its runtime. Comparing Figs. 6 to 8, it can be observed that the execution time on the road network dataset is slightly higher than on the other two datasets. This is due to the complexity of the road network dataset, which results in longer processing time for the algorithms under the same data volume. From Figs. 6 and 7, it is observed that in the MovieLens experiments, algorithms such as SIDS, Sky-iDS, PTKD, and △skyline exhibit rapid changes in CPU execution time when the horizontal axis is between 300K and 400K. This is due to the presence of some non-uniformly distributed outliers in MovieLens. In contrast, the artificially generated dataset has uniformly distributed values, resulting in smaller fluctuations in runtime for algorithms such as SIDS, △skyline, and BTIS. Additionally, the BTIS algorithm demonstrates minimal fluctuations in execution time across both datasets, indicating good algorithm performance. Experiment 2 compares the execution time of the BTIS, SIDS, Sky-iDS, △skyline and PTKD algorithms for different dimensionalities. The size of the three datasets in the experiment was 1M, and the dimensionality ranges from 3 to 17. The CPU execution times of the BTIS, SIDS, Sky-iDS, △skyline and PTKD algorithms vary with increasing data dimensionality, as shown in Figs. 9, 10 and 11. In Figs.
9, 10, and 11, as the horizontal axis (representing the dimensionality) increases within the interval from 3 to 11, the CPU runtimes of the compared algorithms tend to stabilize, with very little difference among them. As the dimensionality increases further, the execution time of the Sky-iDS algorithm increases significantly in the interval from 15 to 17, indicating that its performance deteriorates as the dimensionality of the data becomes larger. Because the RankList structure is present in the Sky-iDS algorithm, the number of tuples and the number of sorting operations within the structure increase significantly as the dataset becomes larger, and with increasing dimensionality the Sky-iDS algorithm needs to traverse and sort multiple dimensions in all datasets. As a result, the performance of the Sky-iDS algorithm decreases as the dimensionality of the dataset increases. The △skyline algorithm performs a one-by-one comparison when removing useless data, and its execution time increases significantly as the dimensionality of the dataset increases, whereas the BTIS algorithm performs well. This is because the BTIS algorithm first classifies the data using the incomplete data weighted classification tree and then selects the optimal virtual points to massively reduce the redundant data. The algorithm reduces the size of the data, the number of comparisons between tuples decreases, and the running time remains relatively smooth. In Experiment 3, the execution times of the BTIS, SIDS, Sky-iDS, △skyline and PTKD algorithms were compared at different missing rates. Two datasets with a size of 1M and a dimensionality of 17 were used in the experiments. To ensure random missingness and to cover missing rates from 10% to 50%, 10% to 50% of the data in the two datasets were randomly deleted. As the missing rate increases, the CPU execution times of the BTIS algorithm and of the SIDS, △skyline, Sky-iDS and PTKD algorithms vary as shown in Figs. 12, 13 and 14. From Figs. 12, 13 and 14, it can be seen that as the missing rate increases, the execution time of the Sky-iDS algorithm decreases slowly, whereas the execution times of the BTIS, △skyline, PTKD and SIDS algorithms decrease significantly. This is because the BTIS algorithm uses the incomplete data weighted classification tree to improve the classification efficiency and the accuracy of the classified data. After classification, the BTIS algorithm also uses the optimal virtual points to
screen out a large amount of redundant data, reducing the number of comparisons and hence the algorithm's running time (Fig. 11: impact of the road network dataset dimension on execution time; Figs. 12-14: impact of the data missing rate on execution time). The PTKD algorithm employs parallel processing in skyline queries, which reduces computation time. The △skyline algorithm extracts the dominant optimal value when identifying data pairs and greatly reduces the number of comparisons, thus reducing the running time. The SIDS algorithm uses a sorted array that stores the IDs of tuples that have no missing value in the corresponding dimension; as the missing rate increases, the number of tuples inserted into the array decreases, and the execution time of the algorithm also decreases. Due to the RankList data structure in the Sky-iDS algorithm, the bucket structure of that algorithm does not shrink significantly as the number of missing dimensions in the dataset increases. Therefore, the number of comparisons between tuples is not significantly reduced, and the algorithm's running time is not significantly reduced either. From Fig. 14, it can be observed that as the missing rate increases, the execution time of the algorithms decreases markedly. This is because, with the increase in missing values, the complexity of the road network dataset decreases sharply, resulting in a noticeable acceleration of the algorithms' processing speed. Experiment 4 compares the accuracy of the BTIS, SIDS, Sky-iDS, △skyline and PTKD algorithms for different dimensionalities. The size of the three datasets in the experiments is 1M, and the dimensionality ranges from 3 to 17. The variation in accuracy of the BTIS, SIDS, Sky-iDS, △skyline and PTKD algorithms in the experiments is shown in Figs. 15, 16 and 17. In Figs. 15, 16 and 17, the accuracy of the BTIS, SIDS, △skyline, Sky-iDS and PTKD algorithms decreases as the dimensionality of the data increases. As can be seen from the graphs, the accuracy of the five algorithms decreases slowly in the interval 3-9. The decrease in accuracy of the Sky-iDS algorithm is significant in the interval 12-15. Since the RankList structure is present in the Sky-iDS algorithm, the number of tuples and the number of sorting operations within the structure increase significantly as the dimensionality of the dataset becomes larger; the Sky-iDS algorithm requires traversal and sorting of multiple dimensions in all datasets, which results in poor results and reduced accuracy. The SIDS algorithm also shows a larger reduction in accuracy in the interval 12-15 of the horizontal coordinate. This is because the SIDS algorithm requires tuple sorting and deletion of dominated tuples, and this operation removes a larger amount of data by mistake, resulting in a decrease in accuracy. The △skyline algorithm also deletes data when comparing global best points, and its accuracy likewise decreases as the dimensionality of the data increases. The BTIS algorithm, which removes a large amount of redundant data, has a nearly flat accuracy rate as the dimensionality of the data increases.
Conclusion With the rapid development of computer information technology, skyline queries play an important role in a large number of real-life scenarios. Traditionally, skyline queries have been applied to complete data, i.e., data without missing values. In production environments, however, multidimensional data are constantly growing, and a large amount of this multidimensional data is incomplete. How to efficiently obtain global skyline points from multidimensional incomplete data is therefore a difficult problem, and existing research results still have major limitations for skyline queries over multidimensional incomplete data. This paper proposes a skyline query method for multidimensional incomplete data based on classification trees to address the problem of low query efficiency caused by redundant data in the query process. The proposed method first addresses the problem of low classification efficiency and slow classification speed by using the weighted classification tree for incomplete data. A skyline query is then performed on the classified data, where the optimal virtual points are used to filter out redundant data and reduce the number of comparisons. The algorithm thus improves the performance and efficiency of the skyline query. Experimental results show that the algorithm proposed in this paper has good performance. The index space of the classified structure grows linearly with the data dimensionality; future research could focus on how to further reduce the memory footprint of this structure. Definition 1. Dominance relation [1]. Given a D-dimensional dataset O, for any two objects o_1 and o_2 in O, o_1 dominates o_2 (o_1 ≺ o_2) if and only if two conditions are satisfied: (1) o_1 is not worse than o_2 in any attribute, i.e., ∀i, o_1.[i] ≥ o_2.[i]; (2) there exists at least one attribute j in which o_1 is better than o_2, i.e., ∃j, o_1.[j] > o_2.[j]. Fig. 2 Sample data set. Fig. 3 Classified sample data sets. Fig. 6 Effect of MovieLens dataset size on execution time. Fig. 8 Effect of road network dataset size on execution time. Fig. 9 Impact of MovieLens dataset dimension on execution time. Fig. 15 Influence of MovieLens dataset dimension on accuracy.
2024-05-13T13:16:28.082Z
2024-05-12T00:00:00.000
{ "year": 2024, "sha1": "d1e2912c3ce79869ca0e5b07dbe21f5a4ecf35ef", "oa_license": "CCBY", "oa_url": "https://journalofbigdata.springeropen.com/counter/pdf/10.1186/s40537-024-00923-8", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f43c4e05c201ba9b092dd500148746b34ea1182d", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
250973647
pes2o/s2orc
v3-fos-license
Assessment of underlying topography and forest height inversion based on TomoSAR methods ABSTRACT Due to the strong penetrability, long-wavelength synthetic aperture radar (SAR) can provide an opportunity to reconstruct the three-dimensional structure of the penetrable media. SAR tomography (TomoSAR) technology can resynthesize aperture perpendicular to the slant-range direction and then obtain the tomographic profile consisting of power distribution of different heights, providing a powerful technical tool for reconstructing the three-dimensional structure of the penetrable ground objects. As an emerging technology, it is different from the traditional interferometric SAR (InSAR) technology and has advantages in reconstructing the three-dimensional structure of the illuminated media. Over the past two decades, many TomoSAR methods have been proposed to improve the vertical resolution, aiming to distinguish the locations of different scatters in the unit pixel. In order to cope with the forest mission of European Space Agency (ESA) that is designed to provide P-band SAR measurements to determine the amount of biomass and carbon stored in forests, it is necessary to systematically evaluate the performance of forest height and underlying topography inversion using TomoSAR technology. In this paper, we adopt three typical algorithms, namely, Capon, Multiple Signal Classification (MUSIC), and Compressed Sensing (CS), to evaluate the performance in forest height and underlying topography inversion. The P-band airborne full-polarization (FP) SAR data of Lopè National Park in the AfriSAR campaign implemented by ESA in 2016 is adopted to verify the experiment. Furthermore, we explore the effects of different baseline designs and filter methods on the reconstruction of the tomographic profile. The results show that a better tomographic profile can be obtained by using Hamming window filter and Capon algorithm in uniform baseline distribution and a certain number of acquisitions. Compared with LiDAR results, the root-mean-square error (RMSE) of forest height and underlying topography obtained by Capon algorithm is 2.17 m and 1.58 m, which performs the best among the three algorithms. Introduction Forest is known as the "lungs of the earth", playing an essential role in preventing wind and sand fixation, conserving water and soil, and maintaining the global carbon cycle in human life and production.According to statistics, forests cover about one-third of the land area globally, which is a crucial resource repository for human survival.Therefore, it is vital that keep an eye on forest resource changes to meet the global resource crisis (Bohn and Huth 2017;Mitchard 2018;Spies 1998). The vertical structure directly reveals the growing trend of the forest and is an essential parameter for estimating the Above-Ground Biomass (AGB) and the carbon storage (Spies 1998;Zhang et al. 2014;Ramli and Tahar 2020).As the penetrability of long-wavelength radar, interferometric synthetic aperture radar (InSAR) has become the most potent tool for reconstructing the forest vertical structure (Pardini et al. 2018), especially in L-band and P-band.In terms of many InSAR technical methods, SAR Tomography (TomoSAR) technology can obtain the three-dimensional structure of the penetrable natural medium and has a unique advantage in acquiring forest height and underlying topography (Reigber and Moreira 2000;Yu et al. 2020;Aghababaei et al. 
2020).Polarimetric InSAR (PolInSAR) technology can also obtain highprecision Canopy Height Model (CHM) and Digital Elevation Model (DEM) products, which has been proven in many works (Papathanassiou and Cloude 2001;Cloude and Papathanassiou 2003;Kugler et al. 2015;Neumann, Ferro-Famil and Reigber 2008;Cheng, Pinto, and Gong 2012;Wu et al. 2019).It follows a physical model, namely, random volume over ground (RVoG) model (Treuhaft, Moghaddam, and van Zyl 1996), which establishes a linear relationship with different complex coherence coefficients in the complex unit circle space.The geometrical relationship can solve forest height and underlying topography under the pure volume coherence assumption (Lee and Pottier 2017).However, considering the actual situation, PolInSAR technology can only obtain the phase center height of a single-pixel that is discrete and single through complex coherence coefficients.In addition, PolInSAR technology can only use multi-polarization (MP) or fullpolarization (FP) datasets to invert high-precision forest height and underlying topography based on the RVoG model (Yardibi et al. 2010). Reigber et al. firstly used L-band airborne FP SAR data in 2000 to obtain the three-dimensional structure of the forest using Fast Fourier Transformation (FFT) technology (Reigber and Moreira 2000).It is the first successful experiment of TomoSAR technology in forest applications, motivating many scientists to explore the new TomoSAR methods in forest applications.Inspired by the Direction-Of-Arrival (DOA) estimation technology, the forest structure is assumed to be divided into two layers: the ground and the canopy.The estimation of forest height and underlying topography becomes finding the phase center position of the ground and the canopy, which is similar to the estimation of source locations in the DOA estimation and appearing in the tomographic profile as a peak position (Yardibi et al. 2010;Del Campo, Nannini, and Reigber 2020).After more than 10 years of development, a variety of DOA estimation methods have been developed and introduced in TomoSAR technology.This paper mainly categorizes the TomoSAR methods into four categories: (1) Non-parametric estimation method similar to Capon algorithm.Non-parametric methods mainly depend on the estimation accuracy of the covariance matrix and can be solved without a priori knowledge of the number of scatterers.Based on the theory of statistical regularization, sparse iterative covariance-based estimation (SPICE) method (Stoica, Babu, and Li 2010), maximum likelihood estimation (Del Campo, Nannini, and Reigber 2018), iterative adaptive approach (IAA) method (Yardibi et al. 2010;Peng et al. 2018), regularized IAA (RIAA) method (Roberts et al. 2010), and other algorithms are introduced to improve the vertical resolution.(2) Parametric estimation method based on subspace fitting technique (Viberg 1990).Those methods need to know the number of scattering sources, divide the signal into noise subspace and signal subspace, and reconstruct the tomographic profile of the required signal by using the mathematical relationship of different subspaces, such as multiple signal classification (MUSIC) algorithm (Schmidt and Schmidt 1986), weighted subspace fitting (WSF) algorithm (Huang, Ferro-Famil, and Reigber 2011), etc. (3) Compressed Sensing (CS) (Budillon, Evangelista, and Schirinzi 2010;Li et al. 
2015; Zhu and Bamler 2010; Aguilera, Nannini, and Reigber 2012, 2013). CS algorithms assume that the forest signal is sparse under a suitable sparse basis. After selecting appropriate user parameters, convex optimization tools can be used to solve the problem and reconstruct the tomographic profile. (4) Sum of Kronecker Product Decomposition (SKPD) (Tebaldini 2009). The SKPD algorithm follows the assumption of a two-layer forest and applies principles of algebraic geometry to separate the FP covariance matrix into a ground structure matrix and a volume structure matrix. After that, any covariance-based TomoSAR algorithm can be used to obtain the tomographic profile. Whether PolInSAR or TomoSAR technology is used to obtain the vertical structure of the forest, the forest height and the underlying topography have always been important parameters that are closely related to the AGB. As an emerging SAR technology, TomoSAR can offer resolution in the third dimension and detect ground objects, especially in forest areas. It differs from traditional InSAR technology, which focuses two-dimensionally in the range-azimuth plane. In the case of multiple data stacks, it resynthesizes an aperture in the normal-to-slant-range direction, forming its unique perspective for observing ground objects. Furthermore, it can provide the scattering information of a penetrable natural medium in the form of a continuous profile along the height direction (see Figure 1). Therefore, TomoSAR technology has great potential in obtaining the three-dimensional structure of the forest and provides a powerful technical tool for the upcoming BIOMASS mission. In this paper, we mainly discuss the performance of the first three kinds of TomoSAR methods in obtaining underlying topography and forest height, and we analyze the influence of baseline design and filters on the reconstruction of the tomographic profile. The SKPD algorithm must use FP SAR data, and it is difficult to determine its optimal parameters for different forests, which is very time-consuming; therefore, we do not consider the SKPD algorithm in this paper. The article is arranged as follows: the second part introduces the basic theory of the TomoSAR model and the three kinds of methods; the third part introduces the research area and the data processing; the fourth part presents the results of forest height and underlying topography obtained by the three methods. The final two parts give the discussion and the summary. TomoSAR model The essence of SAR tomography is similar to single-snapshot DOA estimation in signal processing. The model estimated by TomoSAR can be expressed as follows (Fornaro, Lombardini, and Serafino 2005; Tebaldini and Guarnieri 2010): Y = AX + σ, where Y represents the single look complex (SLC) data stack after registration and phase flattening, A represents the known steering matrix, X represents the unknown vertical reflectivity profile, and σ represents the complex random Gaussian zero-mean additive noise vector. To clarify the meaning of the above parameters, we write them in matrix form as Y = [y^1, y^2, ..., y^L]^T, X = [X_1, X_2, ..., X_N]^T, and A = [a(h_1), a(h_2), ..., a(h_N)] with a(h_p) = [exp(j k_z,1 h_p), ..., exp(j k_z,L h_p)]^T, where the subscript p = 1, ..., N indexes the height samples that are set to reconstruct the continuous tomographic profile (i.e.
a height range from −60 to 30 m with an interval of 1 m, so that N equals 91); the superscript y^i (i = 1, ..., L) denotes the ith SLC image; h_p (p = 1, ..., N) represents the potential source location associated with the position of the peaks of the tomographic profile; X_p (p = 1, ..., N) is the vertical reflectivity profile response at the location h_p; and k_z,i is the phase-to-height parameter associated with the pair formed by the ith acquisition and the first acquisition, k_z,i = 4π B⊥,i / (λ R sin θ), where B⊥, λ, θ, and R represent the vertical baseline, radar wavelength, incidence angle, and slant range, respectively. Since the continuous complex reflectivity profile cannot be accurately recovered, we can only use a discrete sampling method to approximate the true reflectivity. In fact, TomoSAR is interested in obtaining the backscattering power profile (i.e., the second-order statistics of the complex reflectivity, also named the tomographic profile; for a consistent definition, it is hereinafter collectively referred to as the tomographic profile) for a certain azimuth-range location, P = E[X X^H] ≈ <X X^H>, where E(·) represents the statistical expectation, <·> indicates temporal or spatial ensemble averaging, and (·)^H represents the Hermitian operator. The tomographic profile consists of the diagonal elements of the P matrix. The TomoSAR methods for obtaining the forest tomographic profile are mostly established on the basis of the covariance matrix; therefore, the accuracy of the tomographic profile estimated by TomoSAR is closely related to the estimation accuracy of the covariance matrix. The following formula indicates the relationship between the tomographic profile and the covariance matrix: R = E[Y Y^H] ≈ <Y Y^H> = A P A^H + Σ, where Σ represents the noise covariance matrix. Capon algorithm The Capon algorithm is a typical non-parametric estimator based on the covariance matrix and can be expressed in the following manner: P_Capon(h_p) = 1 / (a^H(h_p) R^-1 a(h_p)), where P_Capon represents the tomographic profile obtained using the Capon algorithm, a(h_p) is the steering vector associated with the height h_p, and R^-1 represents the inverse of the covariance matrix R. When the sampling height interval Δh = h_(p+1) − h_p is determined, the tomographic profile can be obtained by using the above formula. MUSIC algorithm MUSIC is the short name of the multiple signal classification algorithm. First, the MUSIC algorithm needs to know the number of scattering sources, ns, as an input parameter. Then, the covariance matrix R obtained from the observed SLC data stack is divided into a noise subspace and a signal subspace by eigendecomposition, R = E Λ E^H, where λ_l represents the lth eigenvalue corresponding to the eigenvector e_l; E_s = [e_1, ..., e_ns] contains the eigenvectors spanning the signal subspace; E_n = [e_(ns+1), ..., e_L] contains the eigenvectors spanning the noise subspace; and Λ = diag[λ_1, ..., λ_L] collects the eigenvalues in the form of a diagonal matrix with λ_1 ≥ ... ≥ λ_ns ≥ ... ≥ λ_L. Finally, using the orthogonality between the steering vectors and the eigenvectors of the noise subspace, the tomographic profile P_MUSIC is expressed as P_MUSIC(h_p) = 1 / (a^H(h_p) E_n E_n^H a(h_p)).
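To illustrate how these two estimators are evaluated in practice, the following numpy sketch computes Capon and MUSIC profiles per pixel from a multilook sample covariance matrix and a steering matrix built from the phase-to-height factors. The variable names, the diagonal loading, and the simulated geometry are our own illustrative choices; this is not the processing code used for the AfriSAR data.

import numpy as np

def steering_matrix(kz, heights):
    # A[i, p] = exp(j * kz_i * h_p)
    return np.exp(1j * np.outer(kz, heights))

def capon_profile(R, A, load=1e-3):
    L = R.shape[0]
    Rinv = np.linalg.inv(R + load * np.trace(R).real / L * np.eye(L))
    return 1.0 / np.einsum('ip,ij,jp->p', A.conj(), Rinv, A).real

def music_profile(R, A, n_sources=2):
    _, v = np.linalg.eigh(R)                 # eigenvalues in ascending order
    En = v[:, :R.shape[0] - n_sources]       # noise subspace
    return 1.0 / (np.linalg.norm(En.conj().T @ A, axis=0) ** 2)

# Toy example: 10 acquisitions, scatterers at 0 m (ground) and 25 m (canopy).
L, heights = 10, np.arange(-60.0, 31.0, 1.0)
kz = np.linspace(-0.15, 0.15, L)
A_true = steering_matrix(kz, np.array([0.0, 25.0]))
R = A_true @ np.diag([1.0, 0.6]) @ A_true.conj().T + 0.05 * np.eye(L)
profile = capon_profile(R, steering_matrix(kz, heights))
print(heights[np.argmax(profile)])           # peak close to a simulated phase centre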
CS algorithm In recent years, the theory of CS has been widely applied to sparse signal recovery. CS has also been successfully applied in SAR for building height extraction, four-dimensional urban deformation monitoring, and forest tomographic profile inversion. The forest tomographic profile can be reconstructed by a CS algorithm with a limited number of covariance samples under the following assumptions: (1) the signal must be sparse, indicating that there is a low number of non-zero coefficients in the reconstructed tomographic profile; and (2) the steering matrix A must satisfy the restricted isometry property (RIP) criterion. Based on the representation of CS theory, we rewrite the TomoSAR model into the vector form vec(R) = Bψf + ε, where vec(·) represents the vec-operator, which stacks the columns of a matrix; B = A ⊙ A, with ⊙ denoting the Khatri-Rao product; ψ represents the sparse basis matrix; ε = vec(Σ) represents the complex random Gaussian zero-mean additive noise; and f represents the sparse forest signal associated with the tomographic profile, with ψf = diag(P), where diag(·) denotes taking the diagonal elements of the matrix. According to the above equations, the core of tomographic profile estimation is solving for the sparse signal f. The solution of f can be formulated as a constrained inequality minimization problem, min_f ||f||_1 subject to ||vec(R) − Bψf||_2 ≤ δ, where δ is a hyperparameter determined by the user and related to the noise level. In order to simplify the problem, this paper adopts the nonlinear inequality constraint used in the literature (Cazcarra-Bes et al. 2019) to solve the problem, where ||·||_(2,1) and ||·||_F represent the mixed (2,1) norm and the Frobenius norm, respectively, and τ and μ are weighting parameters set to 2 and 0.5 in this article, respectively. This is a convex optimization problem and can be solved with the CVX toolkit (http://cvxr.com/cvx/download/). Basic overview of SAR data In this paper, airborne FP P-band SAR data acquired over Lopè National Park in Gabon, Africa, are used for validation. This test area is one of the four study areas of the "AfriSAR" campaign implemented by the European Space Agency (ESA) in 2016 in support of the BIOMASS mission. The topography of this area varies greatly, with an average elevation of about 288 m, and the mature stands have canopy heights distributed between 30 and 50 m. Lopè park is characterized by a mosaic of grasslands and (dense) tropical forests, with a biomass ranging from about 50 to 600 t/ha. The dry season is mainly concentrated from mid-June to mid-September, with an average annual rainfall of about 1440 mm/year (from 1984 to 2016) and a temperature range of 20-23°C. The park has an average of 35 abundant tree species per ha, with wide variation between savanna and forest. In the AfriSAR campaign, a total of 10 FP P-band acquisitions were obtained to study TomoSAR technology in the rainforest. The interval of the spatial baselines is uniform, and the impact of temporal decorrelation can be ignored due to the short time interval between acquisitions. Refer to Table 1 for specific SAR data parameters. Considering the purpose of this paper, we selected a part of the area with a topography range between 200 and 300 m and a forest height between 0 and 50 m for the experiment. Figure 2 shows the RGB image based on the Pauli basis in the UTM coordinate system, including the selected experimental area (azimuth: 4500-6499 pixels; range: 2200-3200 pixels).
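Returning to the CS formulation above: the paper solves the constrained mixed-norm problem with the CVX toolbox, which is not reproduced here. As a rough, simplified stand-in, the sketch below recovers a sparse non-negative profile f from the vectorized covariance model vec(R) ≈ B f (taking ψ as the identity) with an l1-regularized least-squares solved by iterative soft thresholding (ISTA). The step size, the regularization weight, and the function name are our own assumptions, not parameters from the paper.

import numpy as np

def ista_sparse_profile(R, A, lam=1e-2, n_iter=500):
    # vec(R) ~ B f, with B the column-wise Khatri-Rao product of A and conj(A).
    L, N = A.shape
    B = np.einsum('ip,jp->ijp', A, A.conj()).reshape(L * L, N)
    y = R.reshape(L * L)
    step = 1.0 / np.linalg.norm(B, 2) ** 2        # 1 / Lipschitz constant of the gradient
    f = np.zeros(N)
    for _ in range(n_iter):
        grad = (B.conj().T @ (B @ f - y)).real
        f = np.maximum(f - step * (grad + lam), 0.0)   # soft threshold + non-negativity
    return f

# Usage (with R, steering_matrix, kz, heights as in the previous sketch):
# f = ista_sparse_profile(R, steering_matrix(kz, heights))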
LiDAR data While acquiring airborne FP P-band SAR data, the LiDAR data are provided by the National Aeronautics and Space Administration (NASA), covering the same region.The lidar footprint on ground is around 20 m wide.The LiDAR data acquired by the LVIS system include level 1 products and level 2 products.Level 2 products include CHM and DEM derived from level 1 products.To be consistent with the selected experimental area, we crop the LiDAR data in the same range as shown in Figure 3.While evaluating TomoSAR DEM and CHM, LiDAR data are geocoded to SAR coordinates for evaluation.For details of more parameters, please refer https://lvis.gsfc.nasa.gov/Data/Maps/Gabon2016Map.html Data processing After registered and phase flattening, the SLC images form the TomoSAR data stack, and the tomographic profile can be obtained using the above three methods.It should be noted that AfriSAR data use TanDEM-X data for phase flattening.Therefore, the topography containing canopy height is removed simultaneously in the phase flattening procedure.Some literature supports that HH polarization is sensitive to double-bounce scattering, which often occurs between the branches and the ground, leading to the corresponding phase center being located on the ground level.The HV polarization is sensitive to volume scattering, which generally acts on the interior of the canopy (Arii, Van Zyl, and Kim 2010;Freeman 2007;Tebaldini and Rocca 2012).Due to the long-wavelength and strong penetrability of P-band SAR, the canopy phase center is usually lower than the real forest height.This phenomenon is represented in the tomographic profile, as shown in Figure 4.Because of that, the HH data stack is used to obtain the underlying topography, while the HV data stack is used to get the forest height with the power loss method proposed in the literature (Tebaldini and Rocca 2012).The criterion of power loss is: where H Canopy max represents the max peak position of the canopy phase center in tomographic profile; H represents the actual forest height; < represents the power loss value obtained by using a small amount of LiDAR sample data for data verification.The primary data processing processes in this paper are summarized in Figure 5.The processing chain of obtaining underlying topography and forest height is summarized as follows: (i) Select the SLC (after registration and phase flattening) images with certain baselines to form the TomoSAR data stack (i.e.Equation ( 2)); (ii) Select a suitable filter, such as hamming window filter, to estimate the covariance matrix R of the TomoSAR data stack (i.e.Equation ( 5)); (iii) Based on the known vertical wave number k z , set the sampling height and calculate the steering matrix A (i.e.Equation (2)); (iv) Pixel-by-pixel calculation can be performed using the Capon (i.e.Equation ( 6)), MUSIC (i.e.Equations ( 7) and ( 8)), and CS (Equations ( 9)-( 11)) algorithm mentioned in the above to obtain a tomographic profile; (v) Determine the underlying topography and the forest height based on the peak positions and power loss method with HH tomographic profile and HV tomographic profile.When estimating the covariance matrix, a large but appropriate filtering window setting can effectively suppress the side lobes.In this paper, a 31 × 31 pixel window is used to evaluate the covariance matrix.In order to verify the results, this paper uses LiDAR DEM and LiDAR CHM as the comparative verification data.We evaluate TomoSAR DEM and TomoSAR CHM by 30 × 30 pixel window to avoid the deviation of 
the pixel-bypixel assessment.When evaluating the TomoSAR CHM, we ignore the forest height less than 10 m, which is not statistically significant in the tropical rainforest. The proposed evaluation index factors are root-meansquare error (RMSE) and relative error (△), and the calculation formulas are as follows: where K denotes the number of the selected samples that are used to participate in the experimental verification. Experimental result As mentioned above, we can obtain the underlying topography from HH data and the forest height from HV data.For this purpose, we firstly focus on the ability of each algorithm to reconstruct the tomographic profile.We use three different algorithms for drawing the same profile corresponding to the selected white line (range index: 200) in Figure 6.As a result, all of the following analyses correspond to the same profile with the top white dashed line is LiDAR CHM, and the bottom is LiDAR DEM.All of the following tomographic profiles are normalized 0 to 1. The performance of tomographic profile reconstruction based on different algorithms Figure 7(a,b) is the tomographic profile of HH data and HV data based on Capon algorithm.The results show that the HV tomographic profile demonstrates the ability to obtain the canopy height, and its power peaks are located in the phase center of the canopy (slightly lower than the top white dashed line).HH data have also shown the potential to invert underlying topography.Although some regions have obvious two-layer power centers, the phase centers of the ground can be found out easily by some criteria (Pardini et al. 2018;D'Alessandro and Tebaldini 2019).Since the experimental area is a tropical rainforest with high forest and density, the double-bounce scattering occurs not only on the surface and branches but also between branches in the canopy.Therefore, there is also a power peak at the phase center of the canopy and the result shows that the HH data also contain a strong volume scattering contribution. The tomographic profile of CS algorithm is similar to Capon's, which can obtain underlying topography by HH data and forest height by HV data, as shown in Figure 7(c, d).The difference is that CS algorithm needs to input hyper-parameter.With different forest types, changeable parameters may lead to different results. The most interesting is the result of MUSIC algorithm.Figure 8 shows that the tomographic profile of HH and HV data obtained by MUSIC algorithm, with different numbers of scattering sources.By comparison, we find that the assumption that there are only two scattering sources is inappropriate in the rainforest.As shown in Figure 8, the location of the maximum peak of the HH tomographic profile cannot be the ground phase center when the number of scattering sources equals two.From quantitative evaluation, we find that the best performance in inverting underlying topography by using MUSIC algorithm when the number of scattering sources equals four.When drawing the HV tomographic profile, the best parameter to obtain the phase center of the canopy equals two.When the number of input scattering sources is larger than two, the HV tomographic profile tends to have a peak at the ground, like the result of scattering sources equal four.Through quantitative analysis, we find that the highest accuracy of TomoSAR CHM is obtained only when the number of input scattering sources equals two. 
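The peak-picking and power-loss steps described in the data-processing section can be written down compactly. In the sketch below, the ground phase centre is taken at the strongest HH peak (a simplification; the paper applies additional criteria when two layers appear in the HH profile), and the forest top is taken as the first height above the canopy peak at which the HV power has fallen by a chosen loss (in dB) relative to that peak. This is our reading of the power-loss criterion, offered as an assumption since the original formula is not reproduced in the text; the function names and the default 2 dB threshold are illustrative.

import numpy as np

def ground_height(hh_profile, heights):
    # Ground phase centre at the strongest peak of the HH tomographic profile.
    return heights[np.argmax(hh_profile)]

def forest_top(hv_profile, heights, loss_db=2.0):
    # First height above the canopy peak where HV power is loss_db below the peak.
    p = np.maximum(np.asarray(hv_profile, dtype=float), 1e-12)
    p_db = 10.0 * np.log10(p / p.max())
    i_peak = np.argmax(p)
    idx = np.where((np.arange(len(heights)) > i_peak) & (p_db <= -loss_db))[0]
    return heights[idx[0]] if idx.size else heights[i_peak]

def canopy_height(hh_profile, hv_profile, heights, loss_db=2.0):
    return forest_top(hv_profile, heights, loss_db) - ground_height(hh_profile, heights)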
Tropical rainforest is complex, penetrable volume layer, and the acquired signal contains not only from the canopy and the ground but also from inside the forest structure.Radar wave penetration through the vegetation canopy to the ground is a power attenuation process, and the scattering process is complex and not easy to interpret.Therefore, the assumption of the forest as a two-layer structure consisting of two point-like scatterers is not universal in the MUSIC algorithm.Second, due to the specificity of the forest scene and the high density of tropical rainforest, even if HH is sensitive to the double-bounce scattering that occurs more often between the ground and the branches, such contributions still exist in the canopy.Therefore, the two scattering sources cannot get the optimal result, which also shows the uncertainty of MUSIC algorithm. The results of underlying topography and forest height-based TomoSAR algorithms The underlying topography and forest height obtained by Capon algorithm using HH data and HV data are shown in Figure 9. Compared with the LiDAR DEM in Figure 3, there is little difference between LiDAR DEM and TomoSAR DEM of Capon algorithm, with an RMSE of 1.58 m and a relative error of 1.1%.The reliability of the underlying topography obtained by using HH data sensitive to the double-bounce scattering is proved. In order to find the optimal power loss value in the tomographic profile by using a small amount of LiDAR CHM samples, we set the power loss as 0-4 dB with an interval is 0.5 dB and quantitatively evaluated the difference between the TomoSAR CHM and the LiDAR CHM.The results are shown in Figure 10.The results show that the maximum peak location (the power loss is equal to 0) of the canopy phase center based on HV data is not the actual forest height, significantly lower than the LiDAR CHM, as shown in Figure 4.That can be interpreted as the strong penetrability of the P-band leading to the canopy phase center being lower than the forest height.As power loss increases, we can see that the forest height is closer to the LiDAR CHM, and when power loss is more than the optimal value, there will be a significant overestimation error.We find that the power loss equals 2 dB, RMSE is the minimum, 2.17 m, and the relative error is 12.1%, as shown in Figure 9(c,d).Figure 11 shows the estimated underlying topography and forest height by MUSIC algorithm.MUSIC algorithm is a parametric method that needs to input the number of scattering sources when estimating the tomographic profile.For forest scenarios, although the "twolayer" assumption of the RVoG model in PolInSAR technique is reasonable, the result is unsatisfactory when using two scattering sources in estimating the underlying topography by MUSIC algorithm.Compared with LiDAR DEM, we find that when the number of scattering sources equals four, the underlying topography is most close to LiDAR DEM, with 2.14 m of RMSE and 1.5% of the relative error.The obtained forest height by MUSIC algorithm is shown in Figure 11(c,d). 
The result shows that the RMSE is 2.79 m, and the relative error is 15.5%, which is very close to Capon's products.It can be seen that both MUSIC algorithm and Capon algorithm can obtain approximately accurate DEM and CHM results compared with LiDAR data.The estimated underlying topography and forest height based on CS algorithm are shown in Figure 12.It can be concluded from the results that, like Capon algorithm and MUSIC algorithm, the TomoSAR DEM is very close to LiDAR DEM based on HH data, with an RMSE is 1.86 m and a relative error is 1.3%.At the same time, the precision of the forest height is slightly lower than that of Capon algorithm, with an RMSE of 2.38 m and a relative error of 13.3%. Although all algorithms can obtain high-precision underlying topography and forest height, the nonparametric Capon algorithm seems to perform the best among the three algorithms and has the advantage of no need for prior parameters.On the other hand, MUSIC algorithm needs to know the number of scattering sources, which may vary with different forest types.Moreover, CS algorithm is a convex optimization process, which is very time-consuming when using the CVX toolbox to solve the problem. Analysis and discussion In this paper, three typical TomoSAR methods are used to discuss and analyze forest height and underlying topography inversion in tropical forest area.It can be found from the results that all three algorithms can obtain precise underlying topography and forest height.Based on the algorithm principle, we know that the number of acquisitions and covariance matrix are crucial for reconstruction of tomographic profile. To further analyze the capability of tomographic profile reconstruction based on TomoSAR methods, we investigate the effects of different processing operations on tomographic inversion, such as baseline designs and filter methods.It can be seen from the above results that the correct canopy tomographic profile can be obtained using HV data among all three algorithms.Therefore, the results of HH data are mainly analyzed in the following.Due to Capon algorithm performs the best among three algorithms, only Capon algorithm is used for analysis and comparison to avoid redundancy and duplication of work in the experiment. The influence of the baseline design A total of ten acquisitions covering the Lopè national park in AfriSAR campaign are adopted in the experiment.Due to the uniform baseline interval, the influence of baseline interpolation on the inversion of forest height and underlying topography is not considered.The spatial baseline of ten acquisitions ranges from −80 to 80 m, supporting exploring the influence of different baseline designs on tomographic profile reconstruction, including the number of acquisitions, baseline order, and regularity of baseline distribution. 
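The trade-offs examined here can also be quantified with the usual rules of thumb that follow from the phase-to-height factor k_z given earlier: under the repeat-pass (two-way) convention, the Rayleigh height resolution is governed by the total perpendicular-baseline aperture and the unambiguous height by the (uniform) baseline spacing. The short sketch below evaluates both for a nominal P-band geometry; the numeric values are illustrative assumptions, not the AfriSAR campaign parameters.

import numpy as np

def tomo_resolution(wavelength, slant_range, incidence_deg, baselines):
    # Rayleigh resolution and unambiguous height for kz = 4*pi*B_perp/(lambda*R*sin(theta)).
    b = np.sort(np.asarray(baselines, dtype=float))
    sin_t = np.sin(np.radians(incidence_deg))
    aperture = b[-1] - b[0]
    spacing = np.mean(np.diff(b))             # assumes roughly uniform sampling
    res = wavelength * slant_range * sin_t / (2.0 * aperture)
    h_amb = wavelength * slant_range * sin_t / (2.0 * spacing)
    return res, h_amb

print(tomo_resolution(0.69, 5000.0, 45.0, np.linspace(-80, 80, 10)))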
Figure 13 shows the results of the tomographic profile reconstruction with different numbers of available acquisitions. As can be seen from the results in the red dashed circle, the tomographic profile reconstructed using ten acquisitions (Figure 13(a)) is significantly better than that using six acquisitions (Figure 13(b)). The more acquisitions, the more redundant observations, which provide a higher vertical resolution. To show the difference more clearly, we draw the evaluated DEMs in the same plot, as shown in Figure 13(c). The black line represents the LiDAR DEM, the green line represents the DEM obtained using ten acquisitions, and the red line represents the DEM obtained using six acquisitions. From the results, we can see many misjudgments in some areas because the smaller number of acquisitions cannot separate the ground contribution from the volume contribution. Figure 14 shows the tomographic profile obtained using six acquisitions. Figure 15 illustrates the influence of the baseline arrangement order on tomographic profile reconstruction with ten acquisitions. The results show that whether the baselines are arranged in order or out of order does not affect the reconstruction of the tomographic profile. The essence of tomographic profile reconstruction is to seek the eigenvalues of the covariance matrix (Stoica, Li, and Tan 2009); the elements of the covariance matrix are unchanged and do not depend on the ordering of the baselines, so the eigenvalues of the covariance matrix eventually remain the same. The effect of filters Some literature has explored the application of different filters to accurately estimate the covariance matrix, considering the heterogeneity of the experimental scenarios, and has successfully applied them to detect the presence of weak scatterers in urban areas (D'Hondt et al. 2017; Aghababaei 2020). In this paper, we consider the heterogeneity of the forest structure and analyze the effect of the filter window size and of different filters on tomographic profile reconstruction. Figure 16 shows the results of the tomographic profile with different Hamming window sizes. The results show that when the filter window is small, the tomographic profile appears noisy with prominent sidelobes. The sidelobes decrease as the filter window size increases, but more details are lost. For a heterogeneous natural scene such as a forest, a large filter window is often required to suppress the misjudgments caused by sidelobes and to obtain better statistical results for forest height and underlying topography. At the same time, we also use different filters to estimate the covariance matrix, including the Boxcar filter, the NL-SAR filter (Deledalle et al. 2014), and the NDSAR-NLM filter (D'Hondt et al.
2017).The window size of the Boxcar filter is set to 31.The experimental results are shown in Figure 17.From the results, the tomographic profile of the Boxcar filter is smooth with many details lost, and the contribution from the ground and canopy is not separated in many areas.The NL-SAR filter result fully considers the heterogeneity of the forest; however, that does not satisfy the assumption that the forest is a two-layer structure.We expect the peak locations to appear on the ground and the canopy.Obviously, the NL-SAR filter does not meet the requirement and is time-consuming.The results between the NDSAR-NLM filter and Hamming window filter are similar and perform the best among these filters.However, due to the need for continuous iterations and search processing, the NDSAR-NLM filter is very time-consuming.When evaluating the covariance matrix, we should consider the consistency of the pixels in the sliding window so that an accurate covariance matrix can be obtained.Whereas tropical forests are typically natural scenes with complex structures, unlike urban buildings, the signal from forests is random.So NDSAR-NLM filter has no outstanding advantages in the forest area of this paper. Conclusion TomoSAR technology is different from traditional two-dimensional SAR technology.By synthesizing aperture in the direction of cross-slant range, TomoSAR technology can obtain the tomographic profile that consists of power distribution with different heights and has real three-dimensional resolution ability.This paper assesses the performance of three typical TomoSAR algorithms for obtaining underlying topography and forest height in tropical forests. The results show that all algorithms can effectively retrieve these two products from the tomographic profile of HH and HV data.According to the scattering characteristics of the forest, we obtained the underlying topography by HH data and the forest height by HV data.We analyzed the performance of three typical super-resolution algorithms to reconstruct the tomographic profile.Furthermore, we discussed the effects of different baseline designs and filters on the tomographic profile reconstruction.The conclusions are summarized as follows: (1) All three algorithms can reconstruct the tomographic profile representing the ground or canopy.Capon algorithm performs well, and the RMSE of the forest height obtained from HV data and the underlying topography obtained from HH data is 2.17 m and 1.58 m, respectively.(2) Under the same conditions, the more acquisitions, the more uniform baselines distribution, and the better performance in reconstructing the tomographic profile.(3) Aim to obtain forest height and underlying topography, it is necessary to select the appropriate filter window size and filters.Smaller window size fails Mingsheng Liao is a professor in Wuhan University.He has published more than 100 peer-reviewed journal papers and four books focused on synthetic aperture radar interferometry techniques and applications.His research interests include algorithms and application for interferometric synthetic aperture radar, remote-sensing image processing and analysis, and the integration and fusion of multisource spatial information. Figure 1 . Figure 1.The simplified local coordinate system within the TomoSAR imaging. Figure 2 . Figure 2. The RGB composite image of Lopè park.The right is the selected experiment area in the red box on the left. Figure 5 . 
Figure 5. The flowchart of forest height and underlying topography estimation using TomoSAR. Figure 4. The power loss criterion for the retrieval of forest height. Figure 6. Test azimuth profile (white solid line) in the Pauli RGB composite image. Figure 8. Normalized HH and HV tomographic profiles obtained by the MUSIC algorithm with different numbers of scattering sources. Figure 10. Differences between TomoSAR CHM and LiDAR CHM with different power loss values. Figure 13. Normalized tomographic profile obtained with different numbers of baselines: (a) ten acquisitions; (b) six acquisitions; (c) the estimated TomoSAR DEM profile compared with the LiDAR DEM (N represents the number of baselines). Figure 16. Normalized tomographic profile obtained with different Hamming window sizes. Figure 17. Normalized tomographic profile obtained with different filters. Table 1. The parameters of the FP P-band datasets. Flight06: 02/03/04/05/06/07/08/09/10/11.

ChuanJun Wu is currently pursuing the PhD degree at Wuhan University. His research interests are SAR tomography for forest applications, including the inversion of forest height, underlying topography, and above-ground biomass. XinWei Yang received his PhD degree from Wuhan University and currently works in the State Key Laboratory of Remote-Sensing Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing. His research interests are SAR remote sensing in forests. YangHai Yu is currently pursuing the PhD degree at Wuhan University. His research interests are high-resolution SAR imaging. Stefano Tebaldini is an IEEE senior member and an associate professor at the Polytechnic University of Milan. His research is mostly focused on remote sensing of the Earth using radar technology. His research activities include the development of new processing techniques for radar imaging and calibration, as well as scientific investigations on the physical properties of natural media based on their interaction with EM waves. Lu Zhang is a professor at Wuhan University. His research interests include synthetic aperture radar interferometry as well as remote-sensing classification and change detection.
2022-07-23T15:03:21.337Z
2022-07-21T00:00:00.000
{ "year": 2024, "sha1": "0de8f791eea2a6164826f8eccdf1294bf3c0dc54", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/10095020.2022.2083985?needAccess=true", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "c077ba38839736ac022c1ec5ee5f4ff700372331", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "extfieldsofstudy": [ "Computer Science" ] }
228880047
pes2o/s2orc
v3-fos-license
Prediction-Based Error Correction for GPU Reliability with Low Overhead

Scientific and simulation applications are continuously gaining importance in many fields of research and industry. These applications require massive amounts of memory and substantial arithmetic computation. Therefore, general-purpose computing on graphics processing units (GPGPU), which combines the computing power of graphics processing units (GPUs) and general CPUs, has been used for computationally intensive scientific and big data processing applications. Because current GPU architectures lack hardware support for error detection in computation logic, GPGPU has low reliability. Unlike graphics applications, errors in GPGPU can lead to serious problems in general-purpose computing applications. These applications are often intertwined with human life, meaning that errors can be life-threatening. Therefore, this paper proposes a novel prediction-based error correction method called Prediction-based Error Correction (PRECOR) for GPU reliability, which detects and corrects errors in GPGPU platforms with a focus on errors in computational elements. The implementation of the proposed architecture needs a small number of checkpoint buffers in order to fix errors in computational logic. The PRECOR architecture has prediction buffers and controller units for predicting erroneous outputs before performing a rollback. Following a rollback, the architecture confirms the accuracy of its predictions. The proposed method effectively reduces the hardware and time overheads required to correct errors. Experimental results confirm that PRECOR efficiently fixes errors with low hardware and time overheads.

Introduction

High-performance computing (HPC) applications typically require massive amounts of memory and a huge number of arithmetic computations. Special accelerators and processors have been proposed to achieve massive parallel computing power [1][2][3][4]. However, these accelerators and processors are very expensive to manufacture and cannot be used for general purposes. On the other hand, graphics processing units (GPUs) contain a huge number of computation and memory units. Due to their highly parallel structure, recent GPU research has focused on general-purpose applications in the high-performance computing (HPC) field [5]. Therefore, GPUs are now widely used as parallel computing accelerators in big data processing applications. Because big data processing applications have become increasingly intertwined with human life, the reliability of GPUs designed for general applications has become increasingly important; such approaches are referred to as general-purpose computing on graphics processing units (GPGPU). However, because current GPUs are designed for creating images to be output to a display device, they lack hardware support for detecting and correcting errors in combinational logic. Therefore, incorrect results can easily be generated by GPUs, particularly in combinational logic. In GPU architectures, memory storage is protected by error correction codes (ECCs), but combinational logic is not. This is because storage structures have regular patterns that can be protected via parity and ECCs, whereas combinational structures exhibit irregular patterns. Therefore, combinational logic outcomes cannot be easily protected via parity and ECC approaches. Furthermore, errors on GPUs are less critical in the control flow than those on CPUs.
Because of the functionality of GPUs, the control flow units occupy a smaller portion of the chip area on GPUs than on CPUs. Moreover, the instruction cache, which can affect the control flow, does not grow on GPUs even when the problem size increases [6]. Therefore, the functional interruption (FI) rate on GPUs is lower than that on CPUs, while the SDC rate on GPUs is much higher than that on CPUs [6,7]. Additionally, a higher SDC rate in a GPGPU system increases the probability of obtaining incorrect outcomes in scientific applications. Therefore, methods for correcting SDC errors in GPGPU architectures are becoming increasingly important. Error detection and correction methods have always been imperative in space and nuclear applications, but they have recently become more important in a wide variety of fields. As technological devices continue to shrink, circuits are becoming more susceptible to radiation effects and electromagnetic emissions owing to reduced node capacitance [8]. Furthermore, to meet low-power requirements, the frequency and voltage settings of modern systems on chips (SoCs) are designed to reach the edges of their performance limits. Because SoCs have become increasingly common in various fields affecting daily life, errors in SoCs have in turn become increasingly dangerous for humans. Thus, appropriate error detection and correction methods are essential for meeting the reliability demands of modern SoCs.

This paper proposes an approach called Prediction-based Error Correction (PRECOR) for GPU reliability, which predicts erroneous outputs, checks the prediction accuracy by re-executing instructions, and corrects faulty prediction results. To test the efficacy of the proposed method, simulated PRECOR solutions are compared with solutions obtained via dual and triple modular redundancy (DMR and TMR, respectively). The area and time overheads required for correcting errors are compared to demonstrate the efficiency of the proposed method. The area overhead is reduced by 7% compared with the DMR methods, and the time overhead is reduced by up to 10%. The remainder of this paper is organized as follows. Our major motivations are discussed in Section 2. Section 3 presents the methodology and hardware details of the PRECOR method. Experimental setups and results are discussed in Section 4. Section 5 summarizes our conclusions.

Motivation

In this section, the demand for soft-error resilience techniques in GPUs is explained. First, the GPU architecture is described in Section 2.1. Then, the present error resilience support in GPGPUs is presented in Section 2.2. In Section 2.3, related works are explained.

GPU Architecture

Figure 1a presents the Fermi GPU architecture [9]. A GPU architecture consists of a scalable number of streaming multi-processors (SMs). An SM consists of streaming processors (SPs) for arithmetic calculations, special function units (SFUs) for sine, cosine, and square root functions, load/store (LD/ST) units for memory operations, and several register file banks for caches. SMs are designed to implement the single instruction, multiple threads (SIMT) execution model. A warp consisting of 32 individual threads is executed within a single SM block. SIMT execution is a lockstep execution mechanism that executes a given set of operations simultaneously. In the pipeline stage, the 32 threads of a warp run simultaneously with the same instructions.
Each warp is scheduled by a two-level scheduler that checks the warp ID, the active mask, and a single program counter. This two-level scheduler deploys ready warps in the fetch stage. Each thread in a warp is executed in lockstep in each SP. There are two types of thread barrier instructions for preventing data race conditions: one is used for individual warps and the other is used for all threads. These instructions ensure that a thread at a barrier will not pass the barrier until all concurrent threads have completed.

Caches consist of hierarchical structures. Figure 1b presents a typical cache structure. The global memory of a GPGPU system is deployed in dynamic random-access memory, and every SM has access to all global memory. The L2 cache is shared memory, which is shared by the SPs in the SMs. Each SM executes read and write instructions at the L2 cache level. The L2 cache size is 768 KB. The register file is accessible to all SPs in an SM. This register file is mapped in the SMs to improve computational performance by caching data for the threads running on each SM.

Resilience Support in GPU Architecture

Soft-error resilience is becoming more important in GPUs than in CPUs. GPUs have a relatively high error rate. For example, 1.8% of commodity GPU devices have at least one permanent fault [10], and some GPUs were determined to have experienced transient memory faults at a rate of 66% during evaluations in an HPC cluster environment [11]. Most analyses of GPGPU failures have focused on memory faults, and combinational errors have not yet been extensively studied. Because a large portion of the total GPGPU silicon area is used for GPU cores [12], a non-negligible number of errors can be expected to occur in the cores of GPUs. However, unlike memory errors, which can be detected via ECCs, it is difficult to find errors that occur in GPU cores (e.g., floating-point units, arithmetic logic units (ALUs), local memory, or registers). Furthermore, miniaturization has increased error rates in hardware, especially those of transient errors [12]. Thus, it is necessary to develop a method to increase hardware reliability and avoid data corruption. In a previous study, a detailed survey of GPU errors in the Titan supercomputer was presented to analyze the reliability of the GPGPU architecture [13]. The authors gathered the error history of Titan using simple event correlators on software management workstations and could highlight many GPU memory errors and GPU software/firmware-related errors, but they were unable to collect silent data corruption (SDC) errors that occurred in the processor. Therefore, analysis of SDC by architecture-level fault injection has been studied [14]. In addition, error propagation and its effects on processing cores and memory have been studied [15]. SDC is widespread in GPGPU kernels and propagates through memory states via data corruption. G. Li et al. [15] measure SDCs and benign errors by comparing the output memory of a fault injection run with that of the golden run. This corresponds to data recorded in output memory (OM) after the programs finish their executions. On average, crashes comprise 17.52%, SDCs comprise 18.98%, and benign errors comprise 63.35% of all injections. Thus, excluding benign injections, more than half of the fault injections result in SDCs. However, on CPUs, the combined ratio of SDCs and crashes is 36.11% (13% SDCs and 23% crashes) [7].
SDC is dependent on the application, but such SDC errors certainly affect the output results. Nearly 50% of the errors that occur in a GPU corrupt output data via SDC. However, GPUs only have resilience support for memory storage. A single-error-correction, double-error-detection (SECDED) ECC is included in the GPU device memory, L2 cache, instruction cache, register files, shared memory, and L1 cache regions [9]. However, not all the blocks in GPUs are protected, because SECDED only tracks memory reads and writes.
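As an aside on the memory protection mentioned above, the sketch below implements a textbook extended Hamming (8,4) code to illustrate the single-error-correction, double-error-detection behavior that SECDED provides. It is a generic example for a 4-bit data word, not the specific ECC logic of the GPUs discussed here.

```python
def secded_encode(nibble):
    """Encode 4 data bits into an 8-bit extended Hamming (SECDED) codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]      # d0..d3
    p1 = d[0] ^ d[1] ^ d[3]                        # parity over positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                        # parity over positions 2,3,6,7
    p4 = d[1] ^ d[2] ^ d[3]                        # parity over positions 4,5,6,7
    bits = [p1, p2, d[0], p4, d[1], d[2], d[3]]    # codeword positions 1..7
    p0 = 0
    for b in bits:
        p0 ^= b                                    # overall parity bit
    return bits + [p0]                             # 8-bit codeword

def secded_decode(code):
    """Return (data, status) with status 'ok', 'corrected', or 'double_error'."""
    bits = list(code[:7])
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]     # recomputed parity checks
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s4 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 + 2 * s2 + 4 * s4                # 1-based position of a single error
    overall = code[7]
    for b in bits:
        overall ^= b                               # 0 if the overall parity still holds
    if syndrome == 0 and overall == 0:
        status = "ok"
    elif overall == 1:                             # single error: correctable
        if syndrome:
            bits[syndrome - 1] ^= 1                # flip the erroneous bit
        status = "corrected"                       # syndrome 0 means the error hit p0 itself
    else:                                          # syndrome != 0 but overall parity holds
        status = "double_error"                    # two errors: detected, not corrected
    data = bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)
    return data, status

# A single bit flip is corrected; two flips are only detected.
word = secded_encode(0b1011)
word[5] ^= 1
assert secded_decode(word) == (0b1011, "corrected")
```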
Other blocks, such as logic blocks, queues, the two-level scheduler, the thread block scheduler, the instruction dispatch unit, and the interconnecting network, are therefore vulnerable to errors. Such errors can have different types of effects on circuits: (1) no effect on the program output (the failure does not affect any of the outputs), (2) a program crash, or (3) an SDC error (incorrect output, but the program does not crash). One of the keys to the resilience of some applications is that soft errors may not affect output states. At the microarchitecture level, there are two factors that determine the soft error rate of a hardware structure [16]: the failures-in-time (FIT) rate and the architecture vulnerability factor (AVF) [17]. The FIT rate is the raw soft error rate (SER) per bit; it depends on process technology and circuit design. The AVF represents the effects of the SER, meaning that a soft error that affects the output data and damages a process can lead to crashes and SDC. Therefore, the AVF depends on the applications and instructions in which errors occur, and thus we will not focus on such errors and their effects. However, the other issues discussed above are crucial. Despite the error resilience problems in GPGPU systems, there is no hardware support for detecting and correcting errors beyond ECCs. However, many recent applications on GPGPU systems are business-critical, long-running, and financially sensitive. Therefore, a single error can have a serious impact on users.

Related Works

Error detection and correction in combinational logic structures are traditionally performed using modular redundancy, by which the same instructions are executed several times and errors are detected by comparing the results. Dual modular redundancy (DMR) is usually based on neighboring dual cores that synchronize, transfer, and compare their results upon error detection; the dual cores are synchronized by a special link and their results are compared to detect errors [18]. DMR only detects errors in GPU lanes and requires correction schemes for recovering from any identified errors. Triple modular redundancy (TMR) is usually based on three neighboring cores that synchronize, transfer, and compare their results for error detection and correction [19]. If an error occurs, the three results are compared with each other and the value produced by at least two of them is selected as the correct result. In a TMR design, each logic element is designed in triplicate and majority voters are inserted after each register stage to remove logic upsets. GPU threads often execute with branch divergence, meaning that some cores remain idle. Warped-DMR [20] is a DMR-based technique that exploits these underutilized GPGPU resources (i.e., the many idle cores) for duplicating threads and detecting errors. Warped-RE [21] is a fault-tolerant technique based on both DMR and TMR. DMR is used opportunistically to check results using the native redundancy of instructions in the GPU lanes. After error detection via DMR, TMR is used for error correction; therefore, both DMR and TMR logic are required. Additionally, opportunistic DMR threads are difficult to detect and can only be obtained by deploying additional control logic, which requires more than 1.5% hardware overhead and incurs a performance penalty of two additional pipeline stages just to search for opportunistic threads among all threads.
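For reference, the baseline redundancy schemes discussed above reduce to a comparison (DMR) and a majority vote (TMR) over redundant results. The minimal sketch below is a generic illustration with hypothetical function names, not code from any of the cited schemes.

```python
def dmr_check(o1, o2):
    """DMR: detect a mismatch between two redundant results; detection only, no correction."""
    return o1 if o1 == o2 else None   # None signals that a correction scheme must take over

def tmr_vote(o1, o2, o3):
    """TMR: bitwise majority vote over three redundant results; masks any single faulty copy."""
    return (o1 & o2) | (o1 & o3) | (o2 & o3)

# A transient bit flip in one copy is outvoted by the other two,
# while DMR can only flag the disagreement.
assert tmr_vote(0b1010, 0b1010, 0b1110) == 0b1010
assert dmr_check(0b1010, 0b1110) is None
```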
The Clover scheme [22] introduced fault detection and correction techniques for sensing the waves created by particle strikes. These techniques detect errors using sensors that react to particle strikes and restart erroneous processes in idempotent regions. However, idempotent checkpoint regions (CRs) have a critical drawback: if an error occurs at the end of such a region, the CR cannot restart at the proper point because of the latency of the sensing process. Clover employs a method called Tail-DMR to compensate for this drawback by resetting the program to the beginning of the code region where the error was detected. However, this method only detects errors related to particle strikes, and the hardware overhead required for the particle strike sensors is unsustainable. Argus-G [23] is another error detection architecture for GPGPU cores. It is an extension of the Argus [24] architecture for CPU cores. Argus detects errors in three basic invariants, namely control flow, computation, and dataflow, based on signatures; Argus-G applies this scheme to GPGPU cores. However, significant area overhead is required for generating the signatures and comparing them with optimal values.

The proposed PRECOR method detects errors by combining partial and temporal redundancies and corrects them using an error prediction buffer. This approach is based on warp-level DMR, which duplicates and verifies instructions. The use of this DMR scheme improves the performance overhead of PRECOR by nearly 100%. Additionally, the controller and checkpoint buffer are much simpler in PRECOR compared with other competitive schemes.

Proposed Methodology

This section presents the main concepts of PRECOR. The proposed method exploits space and time redundancies in GPU systems. Sections 3.1 and 3.2 describe the PRECOR algorithm and present the architecture of PRECOR in detail, respectively.

The PRECOR Approach

In most approaches, erroneous results are corrected via traditional DMR using a checkpoint method. The checkpoint buffers deployed in the pipeline stages store the previous state of the data and recover that state after an error is detected. These checkpoint buffers require massive overhead for duplicating all the data in the pipeline buffers. Furthermore, when restoring a state during the recovery process, the entire system must be halted, which increases execution time. To reduce the execution time and the overhead of the checkpoint buffers, PRECOR implements an error prediction method based on historical data. Figure 2 illustrates the detection and correction flow of PRECOR. A thread is fetched to two cores, an original core and a redundancy core. Both cores decode and execute the thread, and a comparator then compares the two outcomes to check for an error occurrence. If the outcomes are the same, the process simply continues. If not, an error has occurred in one of the two cores; the prediction controller anticipates the outcomes based on the error history of the cores, the process continues with the anticipated correct outcome, and the outcome anticipated to be erroneous is saved in the buffer of the prediction controller. During this process, the instruction that caused the error is re-issued to the core. After the execution of the re-issued instruction, the re-executed outcome is compared with the outcome kept in the prediction controller. If they are the same, the output value is changed to the re-executed outcome; if not, the process continues.
However, since the proposed method corrects an error by comparing values generated from the same two cores, permanent faults that continuously output a specific erroneous value cannot be corrected by the proposed method. Therefore, only transient errors are targeted by the proposed method.

To better explain the PRECOR method and avoid confusion, the following four terminologies are introduced to make the approach easier to understand. Anticipated incorrect output (AIO): the output that is assumed to be incorrect by the prediction controller and kept in the buffer. Anticipated correct output (ACO): the output that is assumed to be correct by the prediction controller and executed on the pipeline. Prediction success: the pipeline executes the next instruction with the anticipated values; whether the values are actually correct does not matter. Prediction failure: the pipeline returns to the process and changes the values; again, whether the final values are correct does not matter.

PRECOR performs error detection on real GPUs with DMR and corrects errors efficiently using rollback. Thus, PRECOR detects an error in the same way as DMR; in correction, it acts like TMR by keeping a result in the buffer. In the first stage, the same instruction is fetched into two cores, an original core and a redundancy core, similar to the initial steps of DMR. After the instruction is executed, the two resulting values from the two cores are compared to check for errors. If the results are mismatched, an error has occurred. To correct an error, PRECOR uses a restart scheme, meaning that the instruction that resulted in the error is re-issued in the pipeline stage. Meanwhile, the controller identifies the core that is likely to have caused the error via historical prediction, and the next instruction continues with the ACO value, which means that the pipeline continues despite the error occurrence. Cores in which errors occur are marked with a latch, and the controller chooses the core based on the state of that latch. However, this ACO must be verified.
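A minimal behavioral sketch of the flow just described is given below, using the AIO/ACO terminology and anticipating the buffer-based verification detailed next. The core model, the fault injection, and the latch handling are simplifying assumptions for illustration only, not the RTL of the proposed architecture.

```python
import random

class Core:
    """Toy execution unit: computes a value and occasionally flips a bit (transient fault)."""
    def __init__(self, fault_rate):
        self.fault_rate = fault_rate
        self.error_latch = False              # set when this core last produced a faulty result

    def execute(self, op):
        result = op()
        if random.random() < self.fault_rate:
            result ^= 1 << random.randrange(32)   # inject a transient single-bit error
        return result

def precor_step(op, core_a, core_b):
    """One PRECOR step: DMR compare, history-based prediction, re-execution to verify."""
    o1, o2 = core_a.execute(op), core_b.execute(op)
    if o1 == o2:
        return o1, "no_error"
    # Mismatch: predict the faulty core from the error-history latches
    # (the redundancy core is suspected the first time, per the description above).
    suspect, trusted = (core_a, core_b) if core_a.error_latch else (core_b, core_a)
    aio = o1 if suspect is core_a else o2    # anticipated incorrect output, kept in the buffer
    aco = o2 if suspect is core_a else o1    # anticipated correct output, pipeline continues with it
    suspect.error_latch = True
    # The pipeline keeps running with the ACO; the instruction is re-issued on the trusted core.
    o_new = trusted.execute(op)
    if o_new == aio:
        return o_new, "prediction_failure"   # prediction was wrong: write back o_new and stall
    return aco, "prediction_success"         # prediction held: the ACO stands

random.seed(0)
core_a, core_b = Core(fault_rate=0.01), Core(fault_rate=0.05)
value, status = precor_step(lambda: 7 * 6, core_a, core_b)
```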
To verify the output, the method must keep an output value in a buffer. In the proposed method, there are two ways to select the output value used for verification. The first way, called the ACO strategy, keeps the ACO value in the buffer while the next instruction continues despite the error occurrence; the next instruction thus continues with the ACO value, chosen by the control unit using the latches of the two modules. In the ACO strategy, if the ACO value and the new output value from the re-executed instruction match, the ACO is confirmed to be correct; if they are mismatched, the ACO value was the incorrect output value. In contrast, the AIO strategy keeps the AIO value in the buffer while the next instruction continues with the ACO value. If the AIO value and the new output value from the re-executed instruction are mismatched, the ACO value is the correct output value. Whenever the ACO value is found to be incorrect after comparison with the new output value from the re-executed instruction, the new output value is written to the register in the WB stage and the pipeline stalls.

In the case of a GPU, instructions can be classified into FP arithmetic instructions, integer operations, and memory operations; the rollback operation is therefore performed differently depending on the type of instruction. Error detection and rollback are performed only for integer and FP instructions, because memory instructions are protected by ECC. When an error occurs in a branch instruction, it affects the process flow, that is, the progress of the next instruction; unlike FP or integer arithmetic instructions, the operation must be stopped and a rollback must be performed. Moreover, a rollback mechanism already exists for branch misprediction, so an error during a branch operation can be handled again as if it were a misprediction.

As mentioned in the explanation of the terminologies, the final output value of a prediction success may also be incorrect. If two of the three results are incorrect, these errors cannot be corrected exactly in the TMR strategy. However, these cases rarely happen. If the probability of an error occurrence in a module is Pm, the probability of two or more erroneous results out of three is 3Pm^2(1 − Pm) + Pm^3. Additionally, it rarely occurs that the erroneous output values have the same value, except in stuck-at fault cases. In addition, the case in which the outputs from both modules have errors rarely happens. As studied in [6], the SDC rate in GPUs is about (1.80 ± 0.39) × 10^1 to (1.04 ± 0.32) × 10^3 FIT (errors per 10^9 h). Therefore, it is highly unlikely to have more than one corruption during a single execution in the natural radioactive environment. Thus, PRECOR chooses the AIO strategy because retaining the AIO is faster than retaining the ACO.

The example of PRECOR is demonstrated in the general single-instruction multiple-data (SIMD) pipeline stage, which consists of five sub-stages, namely instruction fetching, instruction decoding (ID), execution (EX), memory (MEM), and write-back (WB). Figure 3 illustrates the detection and correction example of PRECOR.
In the example, it is assumed that an error occurred 1000 instructions earlier on EX2 and that no error has occurred on EX1. Errors are detected in the EX stage, which outputs values as shown in Figure 3a. In an error-free case, the same instruction is fetched and executed in the two units, and the output values O1 and O2 from execution units EX1 and EX2, respectively, are compared for error detection.
If the two outputs are mismatched, the result is rechecked in one pipeline while the other pipeline executes the next instruction with the predicted correct results, as described in Figure 3b. Therefore, the same instruction is fetched for rechecking and is executed in the execution unit EX1, which generates the ACO. PRECOR assumes that the value on the right side is incorrect the first time this happens; after that, the value that comes from the last error-causing core (EX2) is assumed to be the AIO.

PRECOR chooses the AIO strategy because retaining the AIO is faster than retaining the ACO: the AIO strategy causes fewer pipeline stalls than the ACO strategy without any loss of reliability. All erroneous output cases of each method are shown in Table 1. For the erroneous output values in the first column of Table 1, the subsequent columns show the output, the prediction result, and the correctness of the output in the TMR, ACO, and AIO strategies, respectively. The "Output", "Prediction", and "Result" columns of each strategy show the final output written to the register, the result of the prediction, and the correctness of the final output, respectively. As shown in Table 1, the ACO strategy fails in five cases, whereas the AIO strategy fails only once. Therefore, the AIO strategy is faster than the ACO strategy because it stalls the pipeline less during rollback. Moreover, the reliability of the AIO strategy in the PRECOR method is the same as that of the PRECOR ACO strategy. Therefore, the AIO strategy is more efficient than the ACO strategy.

Table 1 shows all error cases in TMR and PRECOR. EX1, EX2, and EX3 are assumed to be independent and identical execution units. Therefore, each module has the same error occurrence rate p, and the probability of r erroneous outputs can be calculated using the binomial theorem as P(r) = C(n, r) p^r (1 − p)^(n − r), where r is the number of erroneous outputs, n is the total number of outputs, and p is the error occurrence rate. For example, the probability of exactly two erroneous outputs out of three is C(3, 2) p^2 (1 − p) = 3p^2(1 − p). As shown in Table 1, zero erroneous outputs or one erroneous output out of three outputs gives a correct result in TMR.
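To see how the case analysis of Table 1 translates into an overall probability of a correct result, the short script below enumerates the error patterns over three outputs and weights them with the binomial probabilities. The specific two-error cases tolerated by each strategy follow the description above; the numeric error rate is an arbitrary example, not a measured value.

```python
from itertools import product

def reliability(p, tolerated_pairs):
    """Probability of a correct final result, enumerating which of O1..O3 are erroneous.

    tolerated_pairs: set of frozensets of output indices whose simultaneous failure is
    still survived, beyond the zero-error and single-error cases every scheme survives.
    """
    total = 0.0
    for pattern in product([0, 1], repeat=3):          # 1 means that output is erroneous
        prob = 1.0
        for bit in pattern:
            prob *= p if bit else (1 - p)
        errors = frozenset(i for i, bit in enumerate(pattern) if bit)
        if len(errors) <= 1 or errors in tolerated_pairs:
            total += prob
    return total

p = 1e-3
r_tmr        = reliability(p, set())                   # TMR: only 0 or 1 error survived
r_precor_aco = reliability(p, {frozenset({0, 1})})     # additionally survives (O1, O2) erroneous
r_precor_aio = reliability(p, {frozenset({1, 2})})     # additionally survives (O2, O3) erroneous
# r_precor_aco == r_precor_aio: both exceed the TMR module term by exactly p^2 * (1 - p).
```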
Additionally, the reliability of the voter (R_V) must be considered for the correct selection in TMR. It is also assumed that all voters are independent and identical, and that each voter has the same error occurrence rate p_v. Therefore, the reliability of TMR [19] is R_TMR = R_V × [(1 − p)^3 + 3p(1 − p)^2] = R_V × (3R_m^2 − 2R_m^3), where R_m = 1 − p is the reliability of a single module and R_V = 1 − p_v is the reliability of the voter. Compared with TMR, the erroneous output cases (O1, O2) and (O2, O3) are also correct in PRECOR ACO and PRECOR AIO, respectively. In addition, the reliability of the voters is not considered because voters are not required in PRECOR. Therefore, the reliability of PRECOR (ACO and AIO) is R_PRECOR = (1 − p)^3 + 3p(1 − p)^2 + p^2(1 − p), since each strategy additionally tolerates one specific two-error case.

PRECOR assesses whether or not an error occurs in a core based on historical prediction results. As technological devices continue to shrink, cores become more vulnerable to outside influences. Technology scaling has also increased process variation [11], meaning that some cores are more affected by single-particle strikes or other environmental effects than others. Soundararajan et al. [16] demonstrated that errors occur more frequently in more sensitive cores. To avoid writing an erroneous result to a shared cache when using this method, a data-hazard prevention scheme is implemented, because similar situations occur in data hazard cases. By providing microarchitectural and compiler support for data hazards, PRECOR can effectively fix errors. If an error-affected cache is read from or written to, the re-launched instruction is terminated, and the read/write instruction is halted by the detector. Once the correct value is obtained for the re-launched instruction, the value in the cache is changed and the instruction restarts.

Microarchitectural Support

Figure 4 illustrates the microarchitectural support for PRECOR. This SM architecture includes a prediction controller that manages the prediction and correction flows. In conventional instruction pipelining, the throughput of the cores is increased by implementing instruction-level parallelism, and the states of the pipeline stages are maintained in pipeline registers. If an error occurs, the corrupted thread is re-executed to correct the result. In traditional implementations, the state of an instruction is restored from checkpoint pipeline buffers; however, checkpoint buffers incur a high area overhead in the pipeline registers. To overcome this drawback, PRECOR implements a prediction controller that maintains a relatively simple instruction and position list rather than all thread contexts. GPUs issue parallel thread execution (PTX) instructions that execute many threads simultaneously on multiple GPU cores. Each PTX instruction is decomposed into 32 thread contexts for executing 32 threads in each SP. After the ID stage, the thread contexts are scheduled in each SP. Each SP is controlled by the operand collector; the thread context to be executed on each SP is received from the decoder and kept in the buffer of the operand collector until it is executed, as shown in Figure 4. In the general DMR architecture, these thread contexts are maintained in an extra cache for error correction. Rather than the thread contexts themselves, the PRECOR architecture retains only the PTX instructions and the positions of the thread contexts (i.e., pointers to the thread contexts that identify specific threads in decoded PTX instructions). After decoding a PTX instruction, the thread contexts that should be re-executed are selected by specifying the positions of the corrupted threads.
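The bookkeeping described here amounts to storing the faulting PTX instruction together with a per-warp position mask. The following sketch shows that idea in a few lines; the class and field names are illustrative assumptions, not the actual hardware structures.

```python
class PredictionController:
    """Keeps only the faulting PTX instruction and a 32-bit position mask per warp."""
    def __init__(self):
        self.copied_instruction = None   # PTX instruction to re-fetch and re-decode
        self.position_mask = 0           # bit i set -> thread context i must be re-executed

    def record_error(self, ptx_instruction, thread_id):
        self.copied_instruction = ptx_instruction
        self.position_mask |= 1 << thread_id

    def threads_to_reissue(self):
        """Thread contexts selected for re-execution; all other lanes keep running."""
        return [i for i in range(32) if (self.position_mask >> i) & 1]

    def clear(self):
        self.copied_instruction, self.position_mask = None, 0

ctrl = PredictionController()
ctrl.record_error("add.s32 %r1, %r2, %r3", thread_id=17)
assert ctrl.threads_to_reissue() == [17]   # the other 31 of the 32 lanes are masked off
```

Storing an instruction plus a 32-bit mask instead of full thread contexts is what keeps this structure far smaller than a conventional checkpoint buffer.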
Figure 5 presents a decoder and a scheduler with a prediction controller. The prediction controller consists of the copied instruction cache and a position list. The copied instruction cache saves the PTX instructions for re-execution, and the position list identifies the thread contexts containing errors in their PTX instructions. At the marked positions of the corrupted thread contexts, the mask bits in the position list are set to one; therefore, the scheduler can selectively launch the threads whose bits are set to one. The correction flow of the PRECOR architecture thus proceeds as follows.
When an error occurs, the prediction controller turns on the correction flow process. The erroneous instruction that has to be re-executed is re-fetched and decoded by the prediction controller using the fetch and decode units. Before scheduling the thread contexts, the prediction controller matches the mask bits using the position list. The mask bits control whether or not specific thread contexts are scheduled. Accordingly, the prediction controller masks 31 threads to transmit only the previously erroneous thread context to the operand collector. Meanwhile, the other thread contexts in the queues of the operand collectors in the GPU lane continue executing despite the re-execution of the erroneous instruction. The difference from the error-free situation is that only the previously erroneous thread context is added to the queue of the operand collector of the specific SP that generates the ACO, without halting the GPU lane. After the output of the re-executed instruction is compared with the buffer, the selected data is written to the register. During the correction flow, the PRECOR architecture retains the address of the destination register to prevent data hazards.

Figure 6 presents the flow for preventing data hazards in the GPU. The GPU uses the scoreboard algorithm to check for write-after-read and read-after-write dependency hazards [25]. PRECOR uses the scoreboard algorithm to remedy data-hazard dependency problems. The destination register address is obtained from the pipeline by the prediction controller and sent to the scoreboard. With this address, only the specific thread that uses the destination register as its source register needs to be stalled. Specific threads are selected to be re-executed and their destination registers are indicated to the scoreboard, while the other threads continue their pipelining processes. In contrast, existing methods prevent data hazards by simply stalling all threads during the correction flow.

Experimental Results and Analysis

The experimental results are obtained to evaluate the performance overhead, area overhead, and fault coverage of PRECOR. The experimental results are described in this section in comparison to previous methods: (i) DMR: full duplication and re-execution of GPU instructions with correction via checkpoint buffers [18]; (ii) TMR: full triplication and re-execution of GPU instructions with correction via majority-voter rules [19].

Experimental Setup

To evaluate the performance of PRECOR, two simulation setup environments were employed. GPGPU-sim was generally used for evaluating the performance overhead of the methods. This simulator models commercial NVIDIA GPUs of the Fermi architecture; GPGPU-sim has not been updated beyond the Fermi architecture and therefore only supports Fermi. It can easily simulate the benchmarks and analyze their results; however, it cannot check fault coverage. GPGPU-sim v3.2.2 [25] was used to compare PRECOR with other methods. In this simulation, the GPGPU had 30 SMs, each consisting of 32 SIMT lanes. The SIMT lanes were grouped into four SIMT-lane clusters: four SPs, four SFUs, four LD/ST units, and four register banks. Several applications from rodinia_3.1 were selected as benchmarks [26]. As mentioned previously, our target applications (scientific computing and financial applications) demand very high accuracy. To complement the GPGPU-sim simulation, the Nyami open-source architecture was also used to build our experimental setup [27].
The DMR, TMR, and PRECOR architectures were designed using the Nyami architecture to evaluate the efficiency of the PRECOR architecture. Testbenches with the hash, dhrystone, and membench applications were built using this architecture. Figure 7 presents the Nyami GPGPU architecture. The Nyami architecture has four FIFO fetch units and a program counter (PC). Each thread is controlled by the mask signal and the thread-select stage. The 16 float and integer units are grouped and execute the same instruction at the same time as the vector units. The results of each instruction are stored in the same register file. Instructions for loading and storing 8-, 16-, and 32-bit scalars are supported. The Nyami architecture was implemented via SystemVerilog [28]. Several applications in the NyuziToolChain were selected as benchmarks for the testbenches.

Fault Coverage

To evaluate the fault coverage of PRECOR and the previous methods, they were designed to match a physical architecture using Nyami emulators [27]. The fault coverage of the methods cannot be checked on GPGPU-sim; therefore, the Nyami architecture was used for the fault-coverage simulations. The benchmark applications were executed on the fault-coverage test architecture using the Nyami processor and the NyuziToolChain. Then, faults were injected by modifying the emulators. To select instructions in the testbenches for the injection of transient faults, a fault-injection program was designed on the emulator using C++. An error was injected into the compiled thread at the frequency of the mean instructions between failures (MIBF).
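The MIBF-driven injection described above can be modeled as a per-instruction flip probability of 1/MIBF. The sketch below shows such a loop over a list of instruction thunks; the emulator hooks are hypothetical and the single-bit flip is a simplifying assumption rather than the C++ injector used in the experiments.

```python
import random

def run_with_fault_injection(instructions, mibf, seed=0):
    """Execute a list of no-argument instruction thunks, flipping one output bit
    with probability 1/MIBF per instruction (mean instructions between failures)."""
    rng = random.Random(seed)
    outputs, injected = [], 0
    for execute in instructions:
        value = execute()
        if rng.random() < 1.0 / mibf:
            value ^= 1 << rng.randrange(32)   # single-bit transient fault in the result
            injected += 1
        outputs.append(value)
    return outputs, injected

# Example: roughly 10 faults are expected over 10,000 integer instructions with MIBF = 1000.
program = [lambda i=i: (i * 7 + 3) & 0xFFFFFFFF for i in range(10_000)]
outs, n_faults = run_with_fault_injection(program, mibf=1000)
```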
For the simulations, the PTX instructions from the unchanged simulation are parsed twice or three times in the PTX parse file, except for the control flow instructions and the memory operations. Then, the application is executed without faults to obtain the full thread logs of the application; the full execution logs are gathered from this second run. Next, the erroneous threads are selected from the number of threads and the MIBF by the fault-injection random function. Prediction success and failure are determined using a weighted random function during the erroneous thread selection. Then, the rollback operation is executed for each erroneous thread during the next simulation. As shown in Figure 8, the fault coverage of PRECOR was almost the same as that of TMR because the number of masked threads increases when PRECOR re-executes instructions. Moreover, the fault coverage for the dhrystone benchmark using the DMR method was lower than for the other benchmarks. This is because dhrystone contains many more arithmetic instructions than the other benchmarks, which can generate more faults in the simulations. Therefore, more simultaneous transient faults occur in the simulations, and simultaneous transient faults cause errors when using DMR methods. Thus, the dhrystone benchmark resulted in lower fault coverage than the other benchmarks.
Comparison of Performance Overheads

Figure 9a shows the performance overheads of TMR, DMR, and PRECOR. The performance overheads of PRECOR and DMR increase as the MIBF decreases, whereas the performance overhead of TMR does not change regardless of the MIBF, since TMR does not need a rollback operation when errors occur. Therefore, TMR is not affected by the MIBF, whereas DMR and PRECOR are affected by the MIBF since a rollback operation is executed when errors occur. The performance overheads of PRECOR and DMR do not differ much when the MIBF is large, because the difference between DMR and PRECOR is the rollback overhead incurred when an error occurs. Therefore, if the MIBF is large and the number of rollback operations is small, the difference in performance overhead between DMR and PRECOR is also small. However, as the MIBF decreases, the rollback overhead of PRECOR increases less than that of DMR.
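The divergence between DMR and PRECOR at small MIBF can be illustrated with a back-of-the-envelope model: DMR pays a full pipeline stall per error, whereas PRECOR continues with the predicted value and only pays a short verification cost. The per-error cycle counts below are assumed for illustration only and are not the paper's measured numbers.

```python
def relative_overhead(mibf, base_cpi, rollback_stall, verify_cost):
    """Extra cycles per instruction caused by error handling, relative to the fault-free run."""
    errors_per_instruction = 1.0 / mibf
    return errors_per_instruction * (rollback_stall + verify_cost) / base_cpi

# Assumed illustrative costs: DMR stalls ~30 cycles per error, PRECOR only re-verifies (~5 cycles).
for mibf in (10_000, 1_000, 100):
    dmr = relative_overhead(mibf, base_cpi=1.0, rollback_stall=30, verify_cost=0)
    precor = relative_overhead(mibf, base_cpi=1.0, rollback_stall=0, verify_cost=5)
    print(f"MIBF={mibf:>6}:  DMR +{dmr:.2%}   PRECOR +{precor:.2%}")
```

With these assumed costs, both overheads are negligible at large MIBF and grow proportionally to 1/MIBF, with DMR growing several times faster, which mirrors the trend described for Figure 9a.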
Since the pipeline is stalled when a prediction fails, the number of simulation cycles decreases as the probability of prediction success increases. Therefore, the simulation cycle count for a high prediction success rate is lower than that for a low prediction success rate. The performance overhead as a function of forecasting accuracy was evaluated in an error-injection and prediction experiment simulated in C code. The target benchmark was the breadth-first search application on the GPGPU simulator. As shown in Figure 10, as the forecasting accuracy increases, the performance overhead decreases. Therefore, accurate prediction is important for reducing the time overhead of error correction. However, forecasting data cannot be obtained in real-world cases; one can only assume the accuracy of the prediction. In the proposed methodology, a prediction failure only happens in the O1 case, as shown in Table 1. Therefore, the probability of prediction success is (1 − p × (1 − p) × (1 − p)) even if the results are not correct. Thus, the proposed methodology can reduce the performance overhead by increasing the probability of prediction success. Simulations on the Nyami architecture were implemented via the Nyami emulator. PRECOR selects results using history buffers to speed up the correction procedure. Therefore, the prediction accuracy when using history buffers was selected to make a performance comparison. In the simulation, the prediction accuracy was set to the value used in the forecasting-accuracy simulation. Hash benchmark simulations were also performed 3000 times for each MIBF. As shown in Figure 9, the difference between DMR and PRECOR increases as the MIBF increases.
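As a quick illustration of the effect described above, the following Python sketch evaluates the prediction-success expression quoted in the previous paragraph for a few assumed per-execution error probabilities p. The values of p and the number of predictions are hypothetical and are not taken from the paper's experiments; the snippet only shows how a higher prediction success probability translates into fewer stall-causing failures.

```python
def prediction_success_probability(p):
    """Evaluates the expression 1 - p*(1 - p)*(1 - p) quoted in the text."""
    return 1 - p * (1 - p) * (1 - p)

# Hypothetical per-execution error probabilities.
for p in (0.001, 0.01, 0.05):
    q = prediction_success_probability(p)
    print(f"p = {p:.3f}: prediction success = {q:.6f}, "
          f"expected failures per 10,000 predictions = {(1 - q) * 10_000:.1f}")
```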
Comparison of Area Overheads Hardware overheads were calculated based on a one-SM layout in the Synopsys CAD software. To synthesize the additional hardware, the Nyami architecture was implemented using the Synopsys Design Compiler and the saedEDK32.28 nm library (saed32rvt_tt0p85v25c.db). We did not add redundant floating-point units and integer units for duplication with comparison. Instead, DMR and TMR were generated by combining the existing cores with comparators and voters. Therefore, the original, DMR, and TMR cores have 18 lanes, 9 lanes, and 6 lanes per core, respectively. The results are listed in Table 2. The additional area overhead indicates the area that exceeded the available space in the original Nyami architecture. The total area is the entire area synthesized in the behavioral-level Nyami architecture using SystemVerilog code. TMR needs six voters, DMR needs the pipeline state buffer and nine comparators, and PRECOR needs an instruction buffer, a history latch, and nine comparators. Since the pipeline buffer that saves the thread context is large, the DMR architecture requires a large area overhead. On the other hand, the PRECOR architecture needs only a 32-bit register to save the instruction and nine latches to keep the history data. The area overhead was 7% lower for PRECOR than for the DMR method, but larger than that of the TMR method. However, TMR has longer execution times than PRECOR. Conclusions Recently, GPGPU systems have emerged in devices with strong parallel computing power for high-end applications. However, GPGPU systems are very vulnerable to soft errors. Streaming processors, which play an important role in data parallelism, critically determine the reliability of GPUs. This paper proposes PRECOR as a low-cost error correction method for GPGPU architectures. PRECOR accelerates the correction process by choosing a correct value before additional instructions proceed, thereby avoiding additional errors. The area overhead of the proposed method therefore consists of the buffers for saving the outcome and the prediction controller. However, the buffers for the rollback operation are reduced by the position buffers in the proposed method, so the total area overhead of the proposed method is less than that of DMR. Additionally, the rollback performance overhead is lower than that of conventional DMR, because the entire process continues while a rollback operation executes in the proposed method, unlike DMR, which stops the entire process. The experimental results show that PRECOR can correct soft errors more reliably than traditional error correction approaches. It also reduces the required time overhead by improving prediction accuracy, allowing processes to continue instead of halting. Finally, it reduces hardware overhead by 7% compared with the DMR method.
2020-11-12T09:09:06.064Z
2020-11-05T00:00:00.000
{ "year": 2020, "sha1": "8cf41b7907fe29ac0cd4214941680c00e86d5610", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-9292/9/11/1849/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "14a1d18381ff176e303194a188cf2b5a4d34c19b", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
234269304
pes2o/s2orc
v3-fos-license
Induced mutagenesis for improving water stress tolerance in durum wheat (Triticum turgidum L. subsp. durum). Water deficit is considered to be one of the most important limiting factors for crop productivity worldwide. Thus, it is important to use water resources more efficiently. One of the ways to conserve water and respond to climate change is to use appropriate crop species and cultivars, notably those with low water requirements. Chemical mutagens have contributed immensely to the development of a wide range of genetic variability and to the improvement of several crop plants, including durum wheat. This study aims to understand the effect of water stress on some morpho-physiological parameters and to identify lines tolerant to water stress in an EMS-mutated population of durum wheat. The results, under moderate (T1) and severe (T2) water stress conditions, show the positive effect of mutagenesis on the population, resulting in mutated lines tolerant to water deficit. Compared to the non-mutated parent, 32.15% of the lines have a higher specific leaf weight, 57.14% of the lines have a better ability to maintain a high water content, and 75% of all lines demonstrate a very high intensity of chlorophyll fluorescence. In sum, this study has revealed the improvement of water stress tolerance in some induced durum wheat mutants. Introduction Durum wheat (Triticum turgidum L. ssp. durum) is a vital cereal crop that provides substantial economic output in the Mediterranean region [1]. Durum is better adapted than common hexaploid wheat to semi-arid environments, but its production and quality suffer greatly from several abiotic stress constraints, including water deficiency and heat [2], [3]. Water is essential for the smooth running of various metabolic activities inside plants. Water deficit is considered to be one of the most important limiting factors for crop productivity worldwide, and especially in the Mediterranean area [4]. The proposed scenarios for climate change indicate that water availability will be a limiting factor for many countries in the following years [5]. The water deficit is therefore a big challenge for plant breeders around the world, who need to develop cultivars that can sustain such stress conditions without significant yield loss [6], [7]. In drought conditions, plants usually respond in the form of stunted growth, owing to the adverse effects of drought on different molecular, biochemical, physiological, and morphological processes of the plant. Such changes are closely related to the growth stage, and to the timing and severity of the environmental stresses [8], [9]. For the screening and selection of tolerant genotypes that uphold productivity under water stress conditions, an understanding of the physiological mechanisms is essential [10]. It is reported that drought-related physiological parameters were dramatically reduced under water stress conditions as compared to normal conditions [11]. For instance, plant exposure to moisture stress lowers the relative water content (RWC), leaf water potential, and osmotic potential [12]. The foliar photosynthetic rate of higher plants is known to decrease as the RWC and the leaf water potential decrease [13]. The decrease in leaf photosynthesis is usually caused by stomatal limitation under mild to moderate drought conditions and by non-stomatal limitation under severe drought conditions [14]. Moreover, under mild to moderate drought stress, LA decreases and early leaf senescence occurs [15].
In the near future, drought is expected to increase due to climate change in most parts of the world [22]. The objectives of the present work are i) the identification of the effect of water stress on various morphological and physiological characters in mutant lines derived from the CHAM1 variety of durum wheat by the EMS mutagen, under normal and deficit irrigation; and ii) the selection of new mutant durum wheat genotypes presenting a better ability to grow and a satisfactory yield under water stress conditions. This information will help to identify the best cultivars that could be used as genitors in future breeding programs. Study Material This study was carried out on a mutated population of durum wheat (Triticum durum Desf.) derived from the CHAM1 variety. Seeds were previously treated with 0.6% of the chemical mutagenic agent EMS (ethyl methane sulfonate). 3215 seeds of the M1 generation were sown in a greenhouse at the National Institute of Agronomic Research (INRA), Rabat, Morocco, to obtain the M2, M3, and M4 generations. Out of 2505 M5 lines sown at the Marchouch experimental station, Morocco, an annual selection was made according to plant height, vigor, yield, and disease resistance until individuals of the M8 generation were obtained. Then, out of 262 selected M9 lines, 40 lines were isolated to constitute the study material for this work. These lines are characterized by a high grain yield compared to the non-mutated parent CHAM1. Tests organization Two controlled experiments were carried out. The first was carried out under the condition of good water supply (T0), in order to characterize the effect of the mutation on the variation of the morpho-physiological parameters, by comparing the mutated lines (selected for their high grain yield) with the non-mutated CHAM1 durum wheat (control). The second experiment was carried out under two water stress conditions, moderate (T1) and severe (T2), in order to characterize the behavior of the mutated lines under the applied stresses for the studied parameters, in comparison to the control. Preparation of pots and seedlings The grains of the 40 selected mutant lines were sown under controlled conditions in a greenhouse equipped with a computerized electronic system at INRA-Rabat. The microclimatic conditions were a temperature of 22°C, 50% relative air humidity, and a photoperiod of 8 h of darkness / 20 h of light. Sowing was carried out manually at a rate of 7 grains per 5 kg plastic pot filled with a substrate composed of 2/3 soil and 1/3 peat. Determination and application of stress levels The pots were irrigated simultaneously twice a week until the end of the production stage. Afterwards, the pots were divided into two lots constituting, respectively, the control (T0) and the stressed plants. Water stress was applied by stopping watering. The duration of the water stress was one week for moderate water stress (T1) and two weeks for severe water stress (T2). For all morphological and physiological parameters, the measurements were made on the flag leaf. Relative water content (RWC) is one of the criteria for assessing drought tolerance; it decreases when water stress increases. Following the method of Clarke and McCaig (1982), the cut flag leaves are weighed immediately (fresh weight, FW) and immersed in test tubes filled with distilled water. The tubes are then placed in a cool, dark place. After 24 hours, the saturated leaves are reweighed (turgid weight, TW).
Finally, the sample is dried in an oven at 80°C and weighed one last time after 48 hours (dry weight, DW). The relative water content is determined according to the formula: RWC (%) = (FW − DW) / (TW − DW) × 100. ii. Quantum yield of PSII (ΦPSII) Chlorophyll fluorescence is a precise intrinsic indicator of the first stages of photosynthesis, in this case photosystem II (PSII). Its intensity is inversely linked to photosynthetic yield and therefore to the vitality of plants [27]. The chlorophyll fluorescence measurements were carried out on intact, still-attached leaves, using a portable fluorometer (chlorophyll fluorometer, Model OS-30, USA). The measured fluorescence values Fo (minimum fluorescence) and Fm (maximum fluorescence) assess the photochemical efficiency of PSII. Thus, after adaptation to darkness for 3 minutes, the maximum photosynthetic yield of PSII is calculated according to the formula: Qmax PSII = (Fm − Fo) / Fm. c. Statistical analyses The descriptive statistical parameters and variance analyses were processed using the SAS program (Statistical Analysis System, version 9.1). The graphics were produced with MS Excel and Genstat. The statistical analyses concerned only 28 lines; the other 12 lines were excluded from the analysis since they did not resist the applied stresses. Results and Discussion Water stress is considered the most severe of the environmental stresses that affect plant growth and yield. To cope, plants develop adaptation strategies by adjusting leaf growth [28], [29], stomatal conductance [30], photosynthesis [31], [32], and the leaf surface [33], [34]. Here, we report the experimental results together with a discussion of the variation of the morpho-physiological parameters of the mutated lines, both under the condition of good water supply and under water stress conditions, in addition to the correlation between the two variations. Variation of the morpho-physiological parameters of the mutated lines under the condition of good water supply The effect of the mutation on the variation in morpho-physiological parameters between the mutated lines and the control (non-mutated CHAM1) is presented in the form of histograms (Fig. 1). The descriptive analysis shows that the distribution of the variation follows the normal law, with a median grouping a maximum number of lines. Indeed, under the conditions of good water supply, the LA of the mutant lines fluctuates between 8.37 cm² and 27.61 cm², while the control records an average of 22.6 cm² (Fig. 1a). Thus, 27.77% of the lines show a much smaller assimilating surface than that of the non-mutated control, while 30.55% have larger assimilating surfaces. The rest recorded a value close to the control average. In 16.21% of the lines, the recorded values of the SLW are between 20.10 mg/cm² and 31.52 mg/cm²; they are much higher than that of the control, which is 13.64 mg/cm² (Fig. 1b). For the RWC (Fig. 1c), and compared to the control, which records an RWC of 82.74%, the highest contents are noted in 36.11% of the lines, with a maximum value of 99.93%, whereas the other lines record more or less low water contents, with a minimum value of 62.39%. For the Qmax PSII parameter, the values range from 0.66 to 0.78; 47.22% of the lines have a higher ΦPSII, 13.88% of the lines have a lower ΦPSII, while the rest of the lines have values very close to that of the control (Fig. 1d).
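Both indicators used in this study, the RWC and the maximum quantum yield of PSII, follow directly from the formulas given in the Methods above. The short Python sketch below simply evaluates those two formulas; the flag-leaf weights and fluorescence readings are invented for illustration and do not come from the study.

```python
def relative_water_content(fw, tw, dw):
    """RWC (%) = (FW - DW) / (TW - DW) x 100, as defined in the Methods."""
    return (fw - dw) / (tw - dw) * 100

def qmax_psii(fo, fm):
    """Maximum quantum yield of PSII: (Fm - Fo) / Fm."""
    return (fm - fo) / fm

# Hypothetical flag-leaf weights (grams) and fluorescence readings.
print(f"RWC  = {relative_water_content(fw=0.42, tw=0.50, dw=0.12):.1f} %")
print(f"Qmax = {qmax_psii(fo=350, fm=1500):.2f}")
```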
The results of the variance analysis (Table 1) reveal the existence of highly significant variability between the studied lines for all the measured parameters except the SLW. This variability in the phenotypic expression of the mutated lines can be explained by the action of the applied mutagen (EMS) on the genomic material, so the mutagenesis was very successful. Several studies show that the chemical mutagen ethyl methane sulfonate (EMS) has been used successfully on wheat [35]. In fact, it is more efficient in inducing a higher mutation frequency of crop traits than physical mutagens such as gamma radiation [36], [37]. The morpho-physiological variability noted in the studied lines was used to analyze the effect of water stress on the behavior of these lines with regard to the various measured parameters. Variation in the morpho-physiological parameters of the mutated lines under water stress conditions The results of the variance analysis of the various morpho-physiological parameters under water stress conditions (Table 2) reveal the existence of a significant effect of the genotype and of the water stress treatment for the studied parameters. Significant interactions between the genotype and the treatment were noted for the quantum yield (Qmax). ***. Very highly significant effect at the threshold α < 0.1%; **. Highly significant effect at the threshold P < 1%; *. Significant effect at the threshold P < 0.05; Ns: Non-significant effect. Fig. 2: Variation in the leaf area of the control and 28 mutated lines subjected to different water stress levels. Under moderate stress conditions (T1), there is a decrease in LA in all of the studied lines except for the two lines 19M9 and 18M9, which record an increase in this parameter. A clear decrease in LA is noted in the 45M102 line, from 26.595 ± 6.26 cm² under T0 conditions to 14.23 ± 3.38 cm² under T1 conditions. However, the 46M102 line records very close LA values, varying from 19.045 ± 4.29 cm² (T0) to 16.48 ± 0.02 cm² (T1). Accordingly, the regression percentage is 46.49% in the 45M102 line against 13.46% in the 46M102 line. The non-mutated CHAM1 control records LA values between 22.364 ± 2.67 cm² (T0) and 18.223 ± 1.41 cm² (T1), with a regression of 18.52%. Under severe stress conditions (T2), all of the mutated lines maintain their decrease in LA, except the 68M8, 3M102, 6M8, 41M102, 1M8, 3M9, and 46M102 genotypes, where an increase in LA is noted. The percentages of this increase range from 7.62% in the 3M102 line to 32.17% in the 41M102 line. Under all water stress conditions, the 47M102 line shows a slight decrease in LA compared to the non-mutated control CHAM1, while the 1M8 line maintains an increase in LA. Leaf area (LA) plays an important role in plant growth analysis. LA and leaf weight measurements are required to calculate several growth indices, such as specific leaf weight (SLW). Under conditions of water stress, vegetative development is strongly disturbed by a significant decrease in size and LA. This result is in perfect agreement with the findings of [38]. In our experiment, under water stress, 67.85% of the mutated lines present the same evolution as the control, with a regression of the LA. This reduction is considered to be one of the plants' strategies for avoiding water stress. In fact, water stress significantly reduced LA due to reduced cell division.
Water stress may reduce turgor pressure and hence cell expansion, resulting in approximately the same dry mass being contained within a smaller LA, thus raising density [39]. The plant closes its stomata to conserve water resources, which allows it to survive [40], but its productivity decreases because less carbon can be assimilated. For the 32.15% of lines (19M9, 18M9, 68M8, 3M102, 6M8, 41M102, 1M8, 3M9, 46M102, and 47M102), there is an increase in LA under water stress. These lines seem to withstand water restriction better without appreciably modifying their leaf surface. The resistance of this portion of the mutated lines to the water deficit could be explained by an osmotic adjustment of the cells. This tolerance process has been reported by [41]. Thus, the increase in tolerance to dehydration is achieved by the anatomical properties of the water-conducting elements, allowing higher tension on the water column and keeping the stomata open, so that productivity does not decrease. Effect of stress on the specific leaf weight Compared to the non-mutated control (CHAM1), the effect of water stress on the SLW, under different stress levels, is manifested in different behaviors (Fig. 3). The present study shows a significant correlation between the SLW and the LA. This finding was reported by [42], pointing out that the specific leaf weight of durum wheat increases under water stress. This increase, in some varieties under stress, is highly correlated with the reduction in LA [39]. Effect of stress on the relative water content The analysis of the relative water content describes the overall water status in response to water stress and evaluates the ability to achieve good osmoregulation and to maintain cellular turgor pressure [43]. Compared to the control, the evolution of the relative water content under water stress shows a very highly significant decrease in the relative water content in all the studied lines (p < 0.001) as the water deficit increases (Fig. 4). A decrease in RWC is noted in 35.71% of the lines, namely 33M102, 45M102, 47M102, 9M102, 56M8, 68M8, 54M102, 57M102, 72M8, and 41M102; they are more drought tolerant than their non-mutated CHAM1 parent. The recorded decreases in these lines varied from 0.07% in the 45M102 line to 5.97% in the 72M8 line, compared to the control, which has a regression percentage of 7.37%. On the other hand, there is a slight increase in the water content in 10.71% of the lines (51M102, 55M8, and 37M102) despite the water stress conditions. These results are similar to those of [43], [44], confirming that the water content of durum wheat leaves decreases proportionally with the reduction of water contained in the soil. This decrease is faster in susceptible varieties than in resistant varieties. It is reported that a high relative water content is a mechanism of resistance to drought, and that a high relative water content is the result of greater osmotic regulation or lower elasticity of the tissue cell wall [45].
Under severe stress (T2), a clear decrease in water content is observed in all genotypes, except for the 3M102 line, while the 6M102 and 8M102 lines have values identical to those under T1 conditions. According to [46], the maintenance of a relatively high relative water content under stress conditions can result from two adaptation mechanisms: maintenance of a high elasticity of the tissues, or reduction of the osmotic pressure. Incidentally, the lines that remain stable or that show a slight increase in this parameter, despite the stress conditions, are more tolerant to water stress than the control. Effect of stress on the quantum yield of PSII (ΦPSII) Differences in behavior are recorded in the different lines studied at the different levels of stress (Fig. 5). The differences noted between the mutated lines for the fluorescence parameters in this study, especially the quantum yield, indicate that this technique is useful as a tool for screening for tolerance to heat stress in durum wheat. Indeed, several studies have been conducted to assess tolerance to thermal stress using this technique [34]. Under T1 conditions, all the studied lines have a quantum yield of PSII (ΦPSII) comparable to that of the irrigated control. However, in comparison with the CHAM1 control, 14.28% of the studied lines (33M102, 57M102, 37M102, and 72M8) show a slight decrease in ΦPSII, with regression percentages of 0.11%, 0.13%, 0.52%, and 1.96%, respectively, while that of the control is 2.37%. The different performance of the effective quantum yield of PSII (ΦPSII) indicated that the electron transport processes were influenced distinctly by water stress in 75% of the lines. Drought has a significant effect on ΦPSII in these lines: the values of this parameter gradually decreased during the treatment, indicating that electron transport processes were partly down-regulated in these genotypes. For the 6M102, 9M102, 45M102, 33M102, 41M102, 46M102, and 8M102 lines, the results suggest that these lines have an ability to maintain a high intensity of chlorophyll fluorescence. This capacity would be the result of the absence of inhibition of the photochemical activity of chloroplasts under water stress conditions, as described by [47]. Correlation coefficient The results show a very highly significant correlation between all the measured parameters, except for the Qmax PSII. This study shows a significant correlation between the SLW and the LA. This finding was reported by [25], [48], who found that the durum wheat SLW increases under saline stress. The increase in SLW in certain varieties under stress is highly correlated with the reduction in LA. *** Very highly significant effect at the threshold α < 0.1%; ** Highly significant effect at the threshold P < 1%; * Significant effect at the threshold P < 0.05; Ns: Non-significant effect. Conclusions Mutagenesis by EMS has made it possible to create variability in the mutated population, but also to screen new genotypes that are tolerant to water scarcity. This tolerance is explained in 32.15% of the mutated lines by their capacity to maintain leaf water potential, which leads to a limitation of water losses, while in 57.14% of the mutated lines, tolerance to water deficit resulted in a better ability to maintain a high water content in the plant. The significant genotype x treatment interaction indicates that the Qmax PSII parameter could be considered a valid criterion for the selection of drought-tolerant genotypes.
Thus, our results reveal that 75% of the studied lines have a better capacity than the control to maintain the intensity of chlorophyll fluorescence, and therefore to preserve the structure and functioning of the PSII photosynthetic apparatus. On the basis of these results, we confirm that the mutagenesis technique made it possible to select new efficient genotypes. The selected mutant lines of durum wheat could be used for crosses in breeding programs for wheat genetic improvement. More importantly, the obtained results do not need to be confirmed by several years of testing, because the material is a mutagenized population of the eighth generation (M8), whose genome presents genetic stability. As a perspective, screening of the mutations in the set of tolerant and interesting genotypes has been performed using the TILLING technique. Indeed, the selected mutants constitute an important reservoir of genes that are potentially usable in the improvement of wheat. The obtained results will be published soon.
2021-05-11T00:05:12.912Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "5c66b4dccadf73d2167022c594b53aff68f89a1a", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/10/e3sconf_icies2020_00107.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "d6da7c875b2ef10378bbb4cdc80acf1ba15589c5", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
246439798
pes2o/s2orc
v3-fos-license
Plant-Based Alternative Products: Are They Healthy Alternatives? Micro- and Macronutrients and Nutritional Scoring In recent decades, the demand, supply, and consumption of plant-based (pb) alternative products have increased worldwide. The objective of this study was to characterize pb meat and cheese products and compare them with their respective animal-based products. Data were collected in online market analyses (2019/2021). Nutritional data, Nutri-Score, and analysis of micronutrients are presented in this article. The number of products has grown in all categories, with the largest increase of 110% in pb cheese. The main protein sources in pb meat were soy and wheat, followed by an increasing use of peas. Pb meat generally contained less energy and total and saturated fat, but more carbohydrates and sugars than meat. In pb cheese, the protein content was lower than that of cheese. In 3 of 17 food groups, the salt content of pb alternatives was lower than in animal products. The daily requirement for iron could be covered better by pb alternatives than previously anticipated as well as the need for the vitamins E and K. The calculated Nutri-Score was generally lower for pb meat and higher for pb cheese than for the respective animal products. The trend towards consumption of pb alternative products is increasing, but the high level of processing, wide range of nutrients, and high salt content indicate the need for nutritional guidelines for these products. Introduction From start-ups and leading companies to the world's largest meat corporations, food manufacturers are developing fast-growing innovations in plant-based (pb) foods. This new generation of pb meat, fish, cheese, egg, and dairy products is increasingly competitive with animal products. Market research institutes conduct consumer studies to analyze, among other things, the motivations for consuming pb products. A German online survey [1] clarified that age is essential in individual motivation. More than 32% of the over-60 s stated that health reasons were their main motivation for abstaining from meat, while the 40-49 age group cited animal welfare reasons (27%). In the 18-29 age group, on the other hand, environmental and climate reasons (18%) play an important role. The discussion about reducing the consumption of animal-based foods has increased at many levels in recent years. The publication of the EAT-Lancet Commission, about a global planetary healthy diet [2], was decisive in the scientific field. In its latest report, the OECD/FAO [3] assumes that meat consumption in industrialized countries will not continue to rise as environmental and sustainability awareness increases in the population. Younger people, in particular, are increasingly adopting vegan, vegetarian, or flexitarian diets and thus reducing their consumption of animal products. In recent years, pb alternative products have experienced an enormous growth in the German and European food retail sector [4]. In Germany, for example, sales (€) of pb alternative products increased by 97% and sales volume (kg/L) increased by 80% between 2018 and 2020 [4]. Table 1. Classification of pb alternative product categories. 
PBMA-hot, Fillet: Either contains "tenderloin" in the product name or is a meat-free product that appears to imitate beef/pork tenderloin.
PBMA-hot, Steak: Either contains "steak" in the product name or is a meat-free product that appears to imitate beef/pork steak.
PBMA-hot, Schnitzel: Either contains "schnitzel" in the product name or is a meat-free product that appears to imitate breaded meat.
PBMA-hot, Burger: Either contains "burger" and/or "pattie/patty" in the product name or is a meat-free product that appears to imitate beef burger.
PBMA-hot, Strips: Either contains "gyros", "chunks", and/or "strips" in the product name or is a meat-free product that appears as small thin slices or strips.
PBMA-hot, Minced meat: Either contains "mince" in the product name or is a meat-free product that appears to imitate minced meat.
PBMA-hot, Bratwurst: Either contains "bratwurst" or "barbecue sausage" in the product name or is a meat-free product that appears to imitate bratwurst.
PBMA-hot, Sausage: Either contains "Wiener", "Frankfurter", and/or "Hot Dog" in the product name or is a meat-free product that appears to imitate sausage.
PBMA-cold, Meat sausage: Either contains "Lyoner", "Mortadella", and/or "cold cuts" in the product name or is a meat-free product that appears to imitate meat sausage, which can be used in sandwiches.
PBMA-cold, Salami: Either contains "salami" in the product name or is a meat-free product that appears to imitate salami, which can be used in sandwiches.
PBMA-cold, Spreading sausage: Either contains "liver sausage" and/or "pâté" in the product name or is a meat-free product that appears to imitate spreading sausage, which can be used in sandwiches.
PBMA-cold, Meat salad: Meat-free product that appears to imitate meat salad, which can be used in sandwiches.
PBCA, Sliced cheese: Either contains "Gouda" in the product name or is a dairy-free product that appears to imitate sliced cheese, which can be used in sandwiches.
PBCA, Cheddar: Either contains "Cheddar" in the product name or is a dairy-free product that appears to imitate cheddar, which can be used in sandwiches.
PBCA, Cream cheese: Either contains "fresh" and/or "cream" in the product name or is a dairy-free product that appears to imitate cream cheese, which can be used in sandwiches.
PBCA, Mozzarella: Either contains "mozzarella" in the product name or is a dairy-free product that appears to imitate mozzarella, which can be used in sandwiches.
PBCA, Feta: Either contains "Greek style" in the product name or is a dairy-free product that appears to imitate brined cheese or feta, which can be used in sandwiches.
PBMA-hot = plant-based meat alternatives consumed hot; PBMA-cold = plant-based meat alternatives consumed cold; PBCA = plant-based cheese alternatives; product name also refers to the sales description. Data of Animal Foods from Databases To make a comparison between plant- and animal-based products, data were collected from the following four national nutrient databases: Food Standards Australia New Zealand, AUSNUT [24]; Fineli, the Nutrition Unit of the National Institute for Health and Welfare in Finland [25]; the US Department of Agriculture (USDA) FoodData Central Data, Food and Nutrient Database for Dietary Studies 2017-2018 [26]; and the Max Rubner-Institute, Federal Research Institute of Nutrition and Food, Bundeslebensmittelschlüssel (BLS) Version 3.02 [27]. Nutri-Score FOPL were created by a joint initiative of governments, product manufacturers, and retailers to encourage consumers to make healthier food choices by providing product information at a glance and attract attention [15].
The Nutri-Score is a color-coded, graded FOPL first introduced in France in 2017 [28] and has also been used in Germany on a voluntary basis since 2020. Classified foods can be divided into five categories by a nutritional score (from category A = dark green, indicating high nutritional quality, to category E = red, indicating low nutritional quality) [29]. In this study, the evaluation of nutrients and the calculation of the Nutri-Score were based on the calculation table of the Federal Ministry of Food and Agriculture (BMEL, German translation as of 2021). On a scale from −15 points (A) to +40 points (E), the nutrient content per 100 g of food was evaluated. Positive points (0-10) were assigned for dietary energy, total sugars, saturated fatty acids (SFA), and sodium. Negative points (0-5) were scored for fruits, vegetables, and nuts, fiber, protein, and canola, walnut, and olive oil content. Sample Material All products used for vitamin and mineral analysis in this study were purchased from online stores or local supermarkets. Four products with mostly different protein or fat sources were selected from each of the four product categories (meat sausage, salami, burger, sliced cheese) (Table 2). Mineral and Vitamin Analysis Freeze-dried material was used for mineral and vitamin analysis; for this purpose, the products had to be cut into pieces and freeze-dried (EPSILON 2-40; Christ, Osterode am Harz, Germany). Subsequently, the samples were ground with a coffee grinder (KSW 3307, Clatronic International, Kempen, Germany) and stored at +4 °C until analysis. Mineral concentrations were determined using a method adapted from that of Wheal et al. [30]. Approximately 100 mg of each sample was digested in 4 mL of 65% (v/v) nitric acid and 2 mL of 30% (v/v) hydrogen peroxide for 75 min at 200 °C and 40 bar in a microwave oven (Ethos 660; MWT AG, Heerbrugg, Switzerland). Samples were then made up to 25 mL with distilled water. Mineral concentrations were measured by inductively coupled plasma optical emission spectrometry (Vista-PRO CCD Simultaneous ICP-OES; Varian Inc., Palo Alto, CA, United States). Vitamin analysis of freeze-dried samples was conducted by bilacon (bilacon GmbH, Berlin, Germany, Department of Instrumental Analysis) using standardized procedures of the multi-method for determining water- and fat-soluble vitamins in food by LC-MS/MS (methods: PV-SA-158 and 159, 2019-02). One-way and two-way analysis of variance (ANOVA), followed by Tukey's HSD test (p ≤ 0.05), were conducted to show significant differences. Market Analysis In total, data were collected from 150 PBMA-hot products in 2019 and from 236 products in 2021 (+57%). Only a small proportion of this product category was labeled as organic. Compared to the earlier year, there were hardly any changes in organic products in this product category. The largest increase in samples from 2019 to 2021 was seen in the PBMA-cold category (from n = 48 to n = 101, an increase of 110%). Furthermore, in this product category, significantly more than half of the products were produced conventionally. Overall, 65 cheese alternative products were included in the analysis in 2019 and 123 in 2021, representing an 89% increase in the number of products. The majority of the products available on the market were also not labeled as organic, but produced conventionally. However, the number of organic products increased significantly, by 140%, especially for alternative cream cheese products, by 260% (Table 3).
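To make the scoring scheme described in the Methods above concrete, the following Python sketch mirrors its structure: unfavorable nutrients (energy, sugars, SFA, sodium) add points, favorable components subtract points, and the total maps to the A-E categories. The threshold tables and the example product are shortened, illustrative placeholders, not the official BMEL calculation table, so this function should not be used to reproduce the scores reported in Tables 4-6.

```python
def points(value, thresholds):
    """Number of thresholds the value exceeds (0 .. len(thresholds))."""
    return sum(value > t for t in thresholds)

# Illustrative thresholds only; the official table has ten steps per unfavorable nutrient.
UNFAVORABLE = {               # per 100 g of product
    "energy_kj": [335, 670, 1005, 1340, 1675],
    "sugars_g":  [4.5, 9.0, 13.5, 18.0, 22.5],
    "sfa_g":     [1.0, 2.0, 3.0, 4.0, 5.0],
    "sodium_mg": [90, 180, 270, 360, 450],
}
FAVORABLE = {
    "fruit_veg_nuts_pct": [40, 60, 80],
    "fiber_g":            [0.9, 1.9, 2.8],
    "protein_g":          [1.6, 3.2, 4.8],
}

def nutri_score(product):
    bad = sum(points(product[k], t) for k, t in UNFAVORABLE.items())
    good = sum(points(product[k], t) for k, t in FAVORABLE.items())
    score = bad - good
    category = ("A" if score <= -1 else "B" if score <= 2 else
                "C" if score <= 10 else "D" if score <= 18 else "E")
    return score, category

# Hypothetical pb burger, values per 100 g.
print(nutri_score({"energy_kj": 800, "sugars_g": 1.5, "sfa_g": 2.5,
                   "sodium_mg": 520, "fruit_veg_nuts_pct": 0,
                   "fiber_g": 4.0, "protein_g": 16.0}))
```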
Table 3. Development of the number and type of products of pb cheese and meat alternatives, and the distribution (%) of products with organic labels between 2019 and 2021. The main protein sources of the investigated meat alternatives differed significantly depending on the product (Figures 1 and 2). Soy was the most common protein source, and there was a year-to-year increase from 36.7% to 37.7% in PBMA-hot products and from 27.1% to 38.3% in PBMA-cold products. Wheat protein was the second most commonly used protein source. Overall, there was a clear increase in the use of pea protein, especially in PBMA-hot products, from 6.0% to 19.1%, and a significant decrease in animal proteins (milk protein and egg) in both pb product groups (PBMA-hot and PBMA-cold). Protein combinations were frequently found in the products. As shown in Figure 3, coconut oil was a common fat ingredient in the cheese alternatives. The use of palm oil was significantly reduced in 2021, from 10.8% to 4.1%. There was an increase in the use of cashew nuts, from 7.7% to 10.6%, and almonds, which accounted for 4.1% (Others) in 2021. Nutrients and Nutri-Score Tables 4-6 give an overview of the main nutrients in PBMA-hot, PBMA-cold, and PBCA products that were available as information for each of the categories studied. In addition, the Nutri-Score was calculated for all products. Since consumers can directly substitute animal meat and cheese products with pb alternative products, a comparison of nutrients and Nutri-Score was performed and included in the tables. In the PBMA-hot product category (Table 4), the average energy value ranged from 152.5 to 244.4 kcal/100 g in 2019 and from 146.5 to 240.7 kcal/100 g in 2021. Fat content varied from 5.05 to 15.98 g/100 g, with sausage having the highest value in both years. The SFA content was 0.92 to 3.40 g/100 g, with the highest values for the bratwurst (2019) and burger (2021) categories. Carbohydrate content was in the range from 4.34 to 15.48 g/100 g (2019) and from 4.91 to 16.88 g/100 g (2021), with the highest values in both years for schnitzel. The sugar content was 0.77-2.07 g/100 g in 2019 and 0.88-2.28 g/100 g in 2021, with the highest amounts in burger and steak, respectively. On average, a relatively high protein content, between 14.68 and 21.33 g/100 g, was observed in all product categories. Although the average salt content was generally less than 1.5 g/100 g, there was a wide range in the means of the product groups as well as from year to year, from 1.16 to 1.82 g/100 g, such that several products (2019 n = 7, 2021 n = 9) also contained more than 2.5 g of salt. In six out of a total of seven nutritional categories, fillet and strips had the lowest scores in both years. In the Nutri-Score calculation, fillet scored best, from −2.17 (2019) to −1.17 (2021), thus being in category A (dark green). Bratwurst and steak, on the other hand, achieved the highest scores. Pb meat tends to have a lower kilocalorie content than animal-based meat (Table 4). In three out of eight food groups, namely burger, bratwurst, and sausage, the pb products showed significantly lower values than the animal-based products. A similar difference was found for total fat and SFA: their proportion was significantly higher in the animal-based products, except for minced meat (2021), where the SFA content in the pb products did not differ from that in animal-based products.
Although, as expected, pb products contained more carbohydrates and sugars, there were no significant differences in PBMA-hot products compared to the animal variant except for burger. The salt content of pb products was significantly higher than for animal-based products in six out of eight food groups. The reverse was true for bratwurst, and there was no significant difference for meat sausage. The Nutri-Score was significantly better for the pb products in six categories. As with the pb sausage products, the animal-based sausage products received the highest scores and were in category E (red). The results for PBMA-cold products (Table 5) were similar, with the content of fat and SFA being significantly higher in animal-based products in two of the four product groups. The carbohydrate and sugar contents were significantly higher in all categories, twice as high for carbohydrates and four times as high for sugar, compared to animal products. The calculated Nutri-Score was overall lower in the pb products, from 7.50 to 14.92 in 2019 and 8.35 to 13.56 in 2021, corresponding to categories C and D (yellow/orange). The scores for meat products are significantly higher here in three out of four food groups. Table 4. Nutrients ("Big 7") and Nutri-Score per 100 g of pb meat alternatives and meat (mean and standard deviation) in eight different food groups, compared to animal-based products *. Table 5. Nutrients ("Big 7") and Nutri-Score per 100 g of pb meat cold cuts alternatives and cold cuts meat (mean and standard deviation) in four different food groups, compared to animal-based products *. Table 6. Nutrients ("Big 7") and Nutri-Score per 100 g of pb cheese alternatives and cheese (mean and standard deviation) in five different food groups, compared to animal-based products *. Table 6 shows that for all product groups, the pb cheese alternatives had an energy content ranging from 249.0 to 288.2 kcal/100 g in a year-to-year comparison. In both years, the fat content was highest in the pb cream cheese products (24.72-24.78 g/100 g), and the SFA content was highest in the pb sliced cheese (16.27 to 18.14 g/100 g). Carbohydrate content ranged from 4.45 to 5.02 g/100 g in 2019 and from 20.19 to 21.09 g/100 g in 2021. The highest average protein values were found in pb feta in 2019 (6.84 g/100 g) and pb cream cheese in 2021 (5.72 g/100 g). The maximum salt content was calculated as 2.06 g/100 g for pb cheddar in 2019 and 2.02 g/100 g for sliced cheese in 2021. All cheese alternatives performed poorly in the calculated Nutri-Score, with cream cheese having the lowest score (11.62 to 14.13; category D (orange)); 20.13 points for pb cheddar (2019) and 21.07 points for sliced cheese (2021) would place both in category E (red). The Nutri-Score was significantly better for the animal products in four out of five product groups, and the protein content of the animal products was significantly higher in all product groups.
As expected, the carbohydrate content was also significantly higher in pb products, except for cream cheese. The fat and SFA content did not differ significantly in the four food groups, regardless of whether the product was plant- or animal-based. Therefore, a significantly lower energy content in pb products could only be shown for sliced cheese and cheddar. Table 7 shows the mean percentage of the recommended daily intake according to the D-A-CH (Germany, Austria, Switzerland) reference values for the determined minerals, separated by gender and different age groups (19-25 years and ≥65 years). If no other recommendations are given for the male gender, the recommendations also apply to the female gender. The different age groups were chosen because the motivation to consume plant products may differ among age groups. The data for the animal products are based on literature values. For the 19-25 age group (AG1+), zinc and iron in the alternative products were recalculated with a safety factor, since it can be assumed that requirements up to 50% higher for zinc and even 80% higher for iron have to be met by a vegetarian diet compared with a non-vegetarian diet [31]. The daily requirement for iron, magnesium, copper, and sodium was better covered by pb products than by animal-based products. The same was true for calcium and phosphorus in the meat alternatives but not in the cheese alternatives. In the case of zinc content, the difference between product groups was noticeable. The calcium content was increased by supplementation with calcium citrate only in the product "Bedda"; with an addition of 700 mg/100 g, the content is as high as in animal-based cheese products. This was confirmed by the analysis. None of the other analyzed products contained any additional minerals. Vitamins The information on the water- and fat-soluble vitamins in Table 8 is to be read analogously to that for the minerals (Table 7). The requirement for vitamins B1, B3, and pantothenic acid is covered by all pb alternative products, to a lesser or equal extent compared with animal-based products. In contrast, the need for vitamin B6 is met to a higher or equal extent. Pb cheese cannot meet the requirements for vitamin B2, folate, or biotin as well as pb meat alternatives. On the other hand, the cheese alternatives can cover up to 65.55% of the daily requirement for vitamin B12 and even up to 95.09% of that for vitamin C in females and 82.12% in males. According to the product declaration, only two alternative cheese products (Violife, Bedda) were supplemented with vitamin B12. The enrichment with vitamin B12 may explain the high value found. Vitamin C was added to the products in the form of ascorbic acid to extend shelf life. For the fat-soluble vitamins E and K, pb alternative products can cover the daily requirement equally well or significantly better than the animal-based alternatives, while vitamin A was only present in small amounts. The detection of vitamin D was not possible in the group of pb products. Table 7. Coverage of the daily mineral requirements by pb alternative products compared to animal-based products *. Percentage related to the recommendations of the D-A-CH reference values, given in mg per 100 g of foods, and related to gender and age groups (19-25 Y (years) and ≥65 Y). Discussion Various studies have extensively proved that meat consumption has an impact on human and environmental health [2,[32][33][34][35][36][37][38]].
However, a conversion of Western nutrition habits, with a high portion of meat, on a global level would be possible over a longer period of time. Though the meat consumption per capita in Germany decreased overall by 750 g in 2020 compared to the previous year, especially for pork and beef, it still remains at a high level of 57.3 kg [39]. Different strategies for change are conceivable, such as smaller portions of meat from sustainable farming ("less, but better") as well as greater consumption of vegetable proteins, where alternative products are relevant. Venti and Johnston [40] presented a vegetarian food pyramid with a subheading "Beans & Protein Foods", in which they also mention meatless burgers and chicken and "nondairy" foods. In addition to soy milk and yogurt, soy cheese was also named [40]. The Giessen Vegan Food Pyramid also includes milk and yogurt alternatives, as well as "legumes and other protein sources", which include tofu, lupine, and pea protein products [41]. Although pb foods such as tofu have been available for many years, they are currently not considered explicitly in country-specific guidelines [42] or documents such as that produced by the EAT-Lancet Commission [2]. Market Analysis The data from the market analysis showed that the availability of pb alternative products has increased, and consumers are presented with a new variety of products when choosing their food. Especially with the rapid increase in new pb meat and cheese alternative products available, as shown in the market analysis (Table 3), the importance of these products in consumers' daily food choices has increased. Recent studies have shown that shifting to a diet with a reduction or elimination of animal-based products to more whole-grain foods and pb foods has been one of the most important dietary strategies on a global scale for both the planet and human health [32][33][34][35]43]. A study by Kemper and White [44] showed that the pb product category's dietary implications and individual diversity have gained importance as more people adopt flexitarian, vegetarian, and vegan dietary styles. Sundar and Kardes [45] described the "health halo effect", through which consumers automatically perceive the variety of pb alternative products as healthier. The results of the present study, though, cannot confirm such theory, as the products studied here and most of the foods found in the market analysis have to be classified as ultraprocessed foods according to the NOVA classification (Group 4) of Monteiro et al. [23]. These food products contain various ingredients, particularly additives such as dyes and other colors, flavors, flavor enhancers, and emulsifiers, and are produced through a number of different industrial processes. They are usually ready to heat and eat and are enticingly packaged and intensively marketed [23]. Gehring et al. [46] found in their study that a significant avoidance of animal foods was associated with greater consumption of ultraprocessed foods. Thus, vegetarian/vegan diets are not necessarily beneficial to health, as studies have shown that consumption of ultra-processed foods can potentially negatively affect the nutrient quality and, thus, health outcomes [23,47]. The most important criteria for consumers in their purchasing and consumption decisions for meat alternatives were their sensory characteristics and the availability of the products, and only secondarily do animal welfare and environmental as well as health aspects influence consumers [48]. 
The number of alternative products has increased significantly from 2019 to 2021: by 57% for PBMA-hot, by 110% for PBMA-cold, and by 89% for PBCA. During this time, the products have evolved from niche products to mainstream products in German supermarkets, which often place them close to animal products. Dutch [49] and Australian [50] consumer studies investigated motivations for eating pb meat substitutes. They found that for switching to a pb sustainable diet, alternative products can be a valuable aid. Considering cultural and social factors, especially at family gatherings or other occasions where animal-based products are consumed, pb foods can provide a good alternative for consumers who prefer them [49,50]. Interestingly, consumer studies have shown that meat alternatives primarily appeal to consumers who want to replace meat in their meal, rather than consumers who identify themselves as vegetarians and vegans and are more likely to question the purpose of eating meat-like foods [50,51]. The "Big 7" of the Plant-Based Alternative Products Considering the categorization of foods by their energy content (low: ≤150 kcal/100 g; medium: 160-240 kcal/100 g; high: ≥250 kcal/100 g [52]), only the product group "fillet" had a low energy content in 2021. In the PBMA-hot category (Table 4), all other product groups had a medium energy content (160-240 kcal/100 g). A high energy content (≥250 kcal/100 g) was observed in two of the four PBMA-cold product groups (Table 5). For the PBCA category (Table 6), all products had a high energy content. However, compared to the animal-based products, in the three categories (PBMA-hot, PBMA-cold, PBCA), the energy content of the alternative products was significantly lower in burger, bratwurst, sausage, salami, sliced cheese, and cheddar. Since the global obesity epidemic is linked to excessive daily energy intake [53], low energy density foods should also be consumed to prevent secondary diseases such as cardiovascular diseases (CVDs) or cancer [36,54]. The total fat and SFA content in the meat alternatives was not substantial or significantly lower than in the animal products. It was noticeable that the amount of fat can vary considerably within all product categories. A high proportion of ultra-processed foods with a high energy density can also be associated with excessive fat intake in the long term [23]. In both years, the market analysis for pb cheese in Figure 3 shows that the main fat source for all products was coconut oil: 76.7% in 2019 and 71.5% in 2021. The high content of SFA in coconut oil may have implications for elevated blood concentrations of total and LDL cholesterol. Associations between coconut oil consumption and the risk of CVDs are controversially discussed in studies [55,56]. The present study shows that the main protein sources in meat alternatives are soy, wheat, and pea (Figures 1 and 2). The quality of dietary proteins can be determined by the protein digestibility corrected amino acid score (PDCAAS). The PDCAAS for milk and whey protein concentrate, soy protein isolate, and egg were rated at the highest possible score of 1.0, whereas pea protein concentrate was rated at 0.89. Wheat and wheat gluten had the lowest scores with 0.51 and 0.25, respectively. In comparison, red meat was rated at 0.92 [57,58]. Proteins of plant origin are often deficient in one or more essential amino acids. They are less digestible in their natural form than animal proteins due to antinutritional compounds, such as phytic acid [59]. 
Nevertheless, dietary intake of plant protein may be more positively evaluated than that of animal protein due to its potential health benefits. The studies by Shang et al. [37] and Song et al. [38] showed that a higher intake of plant protein tends to be associated with a low risk of type 2 diabetes and of all-cause and cardiovascular mortality. Accordingly, the protein quality of pea-, soy-, milk-, and/or eggbased meat alternatives was comparable to that of beef in terms of essential amino acids, but wheat-or gluten-based products had a lower protein quality than comparable meat products. In addition, many products contain combinations of multiple protein sources, and thus protein quality can be improved. The protein content of each meat alternative product, as well as of each category, sometimes differs significantly. For example, two PBMA-hot categories (bratwurst, sausage) had a considerably higher protein content than the respective animal category. On the other hand, the protein content in all five cheese groups was substantially lower than in the animal-based products (see Table 6), so that pb cheese products currently do not offer an adequate protein alternative. The results of the market analysis showed that the salt content in pb meat alternatives was significantly higher than in the respective meat product. Recommendations for salt reduction explicitly for meat alternatives were presented by Public Health England in 2020 [60]. Three subgroups were classified from that study [60] and recommendations for the salt content were made based on 100 g of product as follows: "plain meat alternatives" (e.g., fillets, mince) with a low salt content of 0.63 g/100 g, "meat-free products" (sausage, burger) with a medium salt content of 1.19 g/100 g, and "meat-free bacon" (cold cuts) with a high salt content of 1.78 g/100 g. Comparison of the product categories showed that all PBMA-hot products were above the maximum recommended value. In the PBMAcold category, there were three products in 2019 and one in 2021 that did not exceed the value of 1.78 g/100 g ( Table 5). The mineral analysis (Table 7) found significantly higher sodium levels in all the pb products analyzed here, compared to the data from the food databases [24][25][26][27] for the animal-based products. Especially in the products pb salami and pb sausage, the recommended daily intake was covered or even exceeded with 100 g of food. However, it should be noted that the consumer adds salt or sodium-containing seasonings to the less processed animal-based product (e.g., fillet, steak, minced meat) but not to the pb convenience product. Because of such additions, similar high sodium levels could be achieved as in the vegetable alternatives in this study. These results indicate the need for industry guidelines. Nutri-Score On calculating the Nutri-Score for alternative and animal-based products, significant differences were found. In PBMA-hot (Table 4) and PBMA-cold (Table 5), significantly higher scores were calculated for the meat products in nine of the twelve product groups, mainly due to the high proportion of SFA and salt. The results for cheese alternatives were the opposite; in four out of five product groups (Table 6), the pb alternatives had a higher Nutri-Score than the animal products. This can be explained by the very low protein content of pb products. That the Nutri-Score can be an effective tool to inform consumers and make healthier purchasing decisions has been shown in recent studies [15,16]. 
Therefore, manufacturers should increasingly declare this FOPL on their products. The Nutri-Score is colored like a traffic light and makes it easy for consumers to understand which product is the "healthier" one. However, it is not able to reflect the degree to which the products are processed. As shown by Romero Ferreiro et al. [61], there were ultra-processed foods in each Nutri-Score category (A-E), oriented according to the NOVA classification. According to their results, in category B, more than half (51.5%) of the products were highly processed foods. The prospective French cohort study, NutriNet-Santé, was able to show that ultraprocessed foods have negative effects on various diseases, such as associations with a higher risk of CVD [62] and depressive symptoms [63], and Schnabel et al. [47,64] observed effects with gastrointestinal disease and an overall higher risk of mortality. Given the negative impact that consumption of ultra-processed foods has on various aspects of health, FOPL with the Nutri-Score should be followed with at least additional labeling indicating the degree of processing, such as the NOVA classification. In this way, it becomes clear that each pb alternative product has individually different product characteristics and these different aspects have to be evaluated in order to classify a food as "healthy". Micronutrients of the Plant-Based Alternative Products The enrichment of pb foods with micronutrients is also becoming increasingly important. The more products are offered and the greater their acceptance among consumers, the more important it becomes to have a uniform European regulation to achieve nutritional equivalence compared to animal-based products in order to avoid possible deficits in calcium, iron, zinc, and vitamin B 12 in certain population groups [65]. The mineral and vitamin analyses in Tables 7 and 8 show that most of the pb cheese and meat alternatives meet the daily nutritional recommendations for single micronutrients. Iron in plant foods has generally a lower bioavailability than iron in meat, because some of the iron in meat is bound to hemoglobin (heme iron), which is a more bioavailable form of iron than non-heme iron [31,66]. In the present study, the iron content of the plant alternatives was higher than the reference data from the literature [24][25][26][27] for animal products. Even after calculating a safety factor of +80%, the results are the same or higher. The same results can be seen in Table 7 for the male gender and the age groups (AG1, AG1+). As the D-A-CH reference values for iron in older women (≥65 years) are lowered, the daily recommendations can be reached faster (AG2). Thus, the products selected here may provide a good alternative. In contrast, except for the salami alternatives, the zinc content of the products was significantly lower than the average literature references for animal-based products from the databases [24][25][26][27] and, thus, did not represent an adequate zinc alternative, especially when the safety factor was taken into account. In addition, antinutritional substances such as phytic acid may reduce zinc absorption in pb alternatives [67,68]. In an omnivorous diet, milk and dairy products, especially hard cheese, are an essential source of calcium [68]. The results of the present study showed that on average 100 g of sliced cheese covers 76% of the requirement, whereas the vegetable alternatives provide only 32% (Table 7). 
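The coverage figures quoted here are simple ratios of the analyzed content per 100 g of product to the respective reference value for the daily intake. A minimal sketch of this calculation; the reference value of 1000 mg/day for calcium and the contents used below are illustrative assumptions chosen to reproduce the quoted percentages, not analytical results from Table 7.

def daily_coverage_percent(content_per_100g_mg, reference_mg_per_day):
    """Percentage of the daily reference intake covered by 100 g of product."""
    return 100.0 * content_per_100g_mg / reference_mg_per_day

CALCIUM_REFERENCE_MG = 1000.0  # assumed illustrative reference value for adults

# Illustrative calcium contents (mg/100 g):
print(round(daily_coverage_percent(760.0, CALCIUM_REFERENCE_MG)))  # 76
print(round(daily_coverage_percent(320.0, CALCIUM_REFERENCE_MG)))  # 32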
Such a high content can only be achieved if the manufacturer fortifies the product with calcium. On the other hand, meat alternatives also appear to be a source of calcium, as they can cover the requirement much better than meat-based products. Other important sources of calcium are green leafy vegetables and calcium-rich mineral water [68]. Vitamin B 12 was found exclusively in animal products. Vegetarians who consume milk, cheese, and eggs can get a sufficient supply. Vegans rely on enriched foods or supplements such as toothpaste containing vitamin B 12 to meet their needs [69,70]. However, only conventional foods can be supplemented with vitamins because it is not legally permitted for organic products in Europe [22]. Therefore, it would be beneficial if pb alternative products contained this nutrient. In the present study, only the cheese alternatives were able to meet the recommended daily intake to the same extent as the animal-based products, as the meat alternatives contain only very low levels of B 12 . The reason for this was the producer adding vitamin B 12 to the PBCA. To our knowledge, the present study was the first to focus solely on pb products available in online stores and to compare them with animal-based products and calculate the Nutri-Score for these products. However, due to the large number and variety of "modern" pb alternative products on the market, the results obtained in this study cannot be generalized in principle. Nevertheless, the selection of products from online stores represents a broad part of the market for meat and cheese alternatives, so that information and recommendations can be derived from it. Conclusions The trend towards pb alternative products is rising, especially in the Western world. These alternative products mainly provide high-quality vegetable protein. The present study showed that the content of fat, SFA, and salt in the pb products varied considerably. These nutrients are related to the most important dietary factor in the global burden of disease. The results of the micronutrient assessment show that the relationship to the reference values for the recommended daily intake is meaningful. In this way, deficiencies and surpluses of vitamins and minerals can be made clear. Due to the high degree of processing of the foods studied here, they can nevertheless not be recommended for the daily diet, even if they have a low Nutri-Score. However, consumers should also pay attention to the nutrition labeling of the individual alternative products, as these do not automatically represent a healthier alternative to an animal-based product. Therefore, intervention studies would be of particular interest to clarify whether substituting animalbased foods with pb alternatives has an impact on health. Furthermore, consumers need guidance on how to compose a balanced pb diet. Author Contributions: M.P. and E.P. planned and designed the experimental setup and wrote the manuscript. M.P. performed the experiments and analyzed the data. All authors have read and agreed to the published version of the manuscript. Funding: The study was part of the NES project (Pflanzlich orientierte Ernährungsstile als Schlüssel zur Nachhaltigkeit-Nachhaltige Ernährungsstile), which was financially supported by the Volkswagenstiftung, grant number ZN3382 and the Ministry for Science and Culture of Lower Saxony, grant number VWZN3255. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. 
Data Availability Statement: Data from measurements are available upon request from the corresponding author.
Effective Mori-Zwanzig equation for the reduced-order modeling of stochastic systems

Building on the hypoelliptic analysis of the effective Mori-Zwanzig (EMZ) equation for observables of stochastic dynamical systems, we show that the obtained semigroup estimates for the EMZ equation can be used to derive prior estimates of the observable statistics for systems in equilibrium and non-equilibrium states. In addition, we introduce both first-principle and data-driven methods to approximate the EMZ memory kernel, and prove the convergence of the data-driven parametrization schemes using the regularity estimate of the memory kernel. The analysis results are validated numerically via the Monte-Carlo simulation of the Langevin dynamics for a Fermi-Pasta-Ulam chain model. With the same example, we also show the effectiveness of the proposed memory kernel approximation methods.

Introduction

The projection operator method, which is also known as the Mori-Zwanzig (MZ) formulation [24,39], is a widely used dimension-reduction framework in statistical mechanics. The key feature of such a formulation is that it allows us to formally derive the generalized Langevin equations (GLEs) [40,3,32,14] for coarse-grained quantities of interest based on microscopic equations of motion. Such GLEs can be found in a variety of applications, including molecular dynamics [20,33,11,10], fluid mechanics [26,12], and, more generally, systems described by nonlinear partial differential equations (PDEs) [31,29,23,22]. Although the MZ equation has been used in the physics and applied mathematics communities for a rather long time, a systematic study of it within a rigorous analytical framework is still lacking. This is closely related to the well-known difficulty of quantifying the orthogonal dynamics in the MZ equation. Since the orthogonal dynamics is a high-dimensional flow generated by an integro-differential operator, its mathematical properties, such as regularity and ergodicity, are not well understood. Hence, from a theoretical point of view, there is no prior estimate available that helps to determine the properties of the MZ memory integral and the fluctuation force. As a result, the numerical approximation of these terms has to be done in a rather ad hoc manner. Some recent works have shed light in this direction. In particular, Kupferman, Givon and Hald proved [5] the existence and uniqueness of the orthogonal dynamics for a classical dynamical system with Mori's projection operator. More recently, Zhu and Venturi [34] were able to obtain the uniform boundedness of the orthogonal dynamics propagator for Hamiltonian systems using semigroup estimates [34]. The theoretical result obtained therein was later extended and greatly improved for the analysis of the effective Mori-Zwanzig (EMZ) equation corresponding to stochastic differential equations (SDEs) [38]. In particular, they developed a thorough mathematical analysis of the EMZ equation using the hypoelliptic technique developed mainly by Hérau, Nier, Eckmann, Hairer and Helffer [16,8,15]. The key finding is that the ergodicity and regularity of the stochastic flow generated by the Markovian semigroup e −tK , where K is the Kolmogorov operator corresponding to the SDE, imply the ergodicity and regularity of the stochastic flow generated by the EMZ orthogonal semigroup e −tQKQ , provided that P = I − Q is a Mori-type projection operator.
This connection enables us to get a clear understanding on the dynamical properties of the orthogonal dynamics generated by e −tQKQ . In this work, we continue Zhu and Venturi's hypoelliptic study of the EMZ equation for stochastic dynamical systems. The main objective of the paper is twofold. First, we apply the semigroup estimate obtained in [38] to different stochastic systems and show that it enables us to derive useful prior estimates for the statistics of observables. In particular, we prove that the reduced-order observables in some commonly used stochastic models have exponentially decaying time autocorrelation function and EMZ memory kernel. This fact verifies the frequently used exponentially decaying assumption for the memory kernel from a theoretical point of view. Secondly, we will demonstrate the effectiveness of the series expansion approximation method for the memory kernel reconstruction of the EMZ equation. To this end, we will focus on the first-principle parametrization method [37] and the data-driven methods [1,2,4,21] developed over the years. For the numerical examples we considered, these two methods are proven to yield accurate simulation result within the range of their applicability. Moreover, we will prove the convergence of the commonly used data-driven method using the regularity estimate for the orthogonal dynamics. For the reduced-order modeling problem of a large-scale stochastic system, the proposed analysis for the EMZ equation shows the potential usage of the hypoelliptic method in analyzing the dynamical behavior of the reduced-order model. The numerical methodology provides a practical way to solve it. This paper is organized as follows. Section 2 briefly reviews the derivation of the effective Mori Zwanzig (EMZ) equation for the stochastic dynamical system driven by white noise. In Section 3, we focus on the equilibrium and nonequilibrium dynamics of the interacting anharmonic chains and derive prior estimates for various observable statistics such as the time autocorrelation function, the nonequilibrium mean, the EMZ memory kernel and the fluctuation force. In Section 4, we introduce different parametrization methods to approximate the EMZ memory kernel and prove their convergence. All these theoretical results are verified numerically in Section 5 via the simulation of the Langevin dynamics for a Fermi-Pasta-Ulam chain model. The main findings of this paper are summarized in Section 6. Effective Mori-Zwanzig equation for stochastic system The starting point of our work is the Mori-Zwanzig equation for the stochastic dynamical systems. Such a equation has been derived by different researchers [25,10,17,38]. Here we adopt the formulation introduced in [38]. To this end, we consider a d-dimensional stochastic differential equation in R d : where F : R d → R d and σ : R d → R d×m are smooth functions. ξ(t) is a m-dimensional Gaussian white noise with independent components, and x 0 = x(0) is a random initial state characterized in terms of a probability density function ρ 0 (x). It is well known that the system of SDEs (1) induces a d-dimensional Markovian process in R d . 
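For reference, given the description of F, σ, ξ(t) and the random initial state above, the system (1) presumably takes the standard form

\[
\frac{d x(t)}{dt} \;=\; F(x(t)) \;+\; \sigma(x(t))\,\xi(t), \qquad x(0) = x_0 \sim \rho_0(x).
\]

Under the Itô interpretation this is the SDE dx(t) = F(x(t)) dt + σ(x(t)) dW(t) for an m-dimensional Wiener process W(t), and its solution x(t) is a Markov process.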
This allows us to define a composition operator M(t, 0) that pushes forward in time the average of the observable u(t) = u(x(t)) over the noise, i.e., Using Itô's interpretation for the stochastic integral, we note that M(t, 0) is a Markovian semigroup generated by the following (backward) Kolmogorov operator [28,18]: With the evolution operator M(t, 0) available, we can now derive the Mori-Zwanzig equation for noiseaveraged quantity E ξ(t) [u(x(t))|x 0 ]. To this end, we introduce a projection operator P and the complementary projection Q = I − P. By differentiating Dyson's identity [37,36,6] for the Markovian semigroup M(t, 0), we can obtain the exact evolution equation governing the evolution of (2): where u(0) = u(x 0 ). Note that in (4), e tQK QK is replaced by a another operator e tQKQ QK which makes it slightly different from the commonly used MZ equation [17,10]. Such a modification is needed for the semigroup estimation we are going to present. It is possible because Q is a projection operator, e tQK and e tQKQ are equivalent in the range of Q. The three terms on the right hand side of (4) are called streaming term, fluctuation (or noise) term, and memory term respectively. It is often useful to compute the evolution of the observable u(t) within a closed linear space such as the image of the projection operator P. Hence we apply the projection operator P to (4) and get the projected equation: Eqn (4) and its projected form (5) only describe the noise-averaged dynamics of the observable u(x(t)), hence they are called as the effective Mori-Zwanzig (EMZ) equations for the stochastic system. The EMZ equation and the classical MZ equation for deterministic (autonomous) systems [37,34,36] have the same structure. The only difference is that the Liouville operator L is replaced by a Kolmogorov operator K. In this paper, we mainly focus on the EMZ equation corresponding to Mori-type linear projection operator. To derive such a equation, we consider the weighted Hilbert space be the inner product in H. The Mori-type projection operator P is a finite-rank operator in H with the canonical form: where G ij = u i (0), u j (0) ρ and u i (0) = u i (x(0)) (i = 1, ..., M ) are M linearly independent functions with respect to inner product ·, · ρ . Since P is a finite rank operator, we can rewrite the EMZ equations (4)-(5) equivalently as: where f i (t) = e tQKQ QKu i (0) (fluctuation term). To be noticed that here we allow a slight abuse of notation and use u(x(t)) to represent its noise average E ξ(t) [u(x(t))|x 0 ]. This applies to all EMZ equations in the following sections. In statistical mechanics, the EMZ equation (8) and (9) are often called as the generalized Langevin equations (GLEs). The projection operator method provides a systematic way to derive such closed equations of motion for reduced-order observables u(x(t)) from the first principle. Depending on the choice of the Hilbert space weight function ρ, the EMZ equations (8)-(9) yield evolution equations for different dynamical quantities. When considering SDE (1) in the context of statistical physics, the most common setting of ρ is ρ = ρ 0 = ρ S , where ρ 0 = ρ 0 (x) is the distribution of the random initial condition (see (1)), and ρ S = ρ S (x) is the steady state distribution of the stochastic system. For such a case, GLE (8) yields the full dynamics of the noise-averaged quantity E ξ(t) [u(x(t))|x 0 ], which is a stochastic process since the initial condition x 0 ∼ ρ 0 is random. 
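For concreteness, a sketch of the standard Mori-type expressions consistent with the quantities Ω, K(t) and f(t) used below (scalar case; the sign convention in front of the memory integral varies between references and is not asserted here):

\[
P(\cdot) \;=\; \frac{\langle\,\cdot\,,\,u_0\rangle_{\rho}}{\langle u_0,\,u_0\rangle_{\rho}}\;u_0,
\qquad
\frac{d}{dt}\,u(t) \;=\; \Omega\,u(t) \;+\; \int_0^t K(t-s)\,u(s)\,ds \;+\; f(t),
\]
\[
\Omega \;=\; \frac{\langle K u_0,\,u_0\rangle_{\rho}}{\langle u_0,\,u_0\rangle_{\rho}},
\qquad
f(t) \;=\; e^{tQKQ}\,QK\,u_0 .
\]

Since f(t) lies in the range of Q, applying P to the full equation removes the fluctuation term and yields the projected form, in which u(t) is replaced by Pu(t).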
On the other hand, the projected GLE (9) yields the evolution equation of the steady-state time-autocorrelation function C ij (t) of u(x(t)), defined as in (11) [27,38]. Using the projected EMZ equation (9) to derive the evolution equation for the time auto-correlation function (11) is the main technical difference between our EMZ framework and the ones used in [17,10]. As we will see in Section 5, approximating this projected equation is the key step of our reduced-order modeling. Other projection operators, such as the Zwanzig-type projection, are also used in the literature [17] to derive nonlinear GLEs for reduced-order quantities. This, however, is not the main focus of the current paper. Lastly, we emphasize that the GLEs for deterministic Hamiltonian systems in the Gibbs equilibrium state ρ = ρ eq = e −βH /Z satisfy the second fluctuation-dissipation theorem (12) because of the idempotence of the symmetric operator Q and the skew-adjointness of the Liouville operator L with respect to the inner product ·, · ρeq . However, since the Kolmogorov backward operator K in the EMZ equation is not skew-adjoint, the second fluctuation-dissipation theorem of the form (12) is no longer valid and needs to be generalized. We refer to our recent work [35] for a more detailed exploration in this regard.

Applications of the hypoelliptic analysis for the EMZ equation

In the previous section, we demonstrated how the EMZ equation is derived from the evolution operator e tK and the orthogonal semigroup e tQKQ . In this section, we focus on the prior estimation of these two semigroups and apply the established analytical results to various physical models. To be consistent with the literature on hypoelliptic analysis, we will use the negative of K and QKQ as semigroup generators and write the semigroups appearing in the EMZ equation (4) as e −tK and e −tQKQ . Moreover, all estimates are obtained first in the "flat" Hilbert space L 2 (R d ) and then transformed back into the weighted Hilbert space L 2 (R d ; ρ). The relationship between L 2 (R d ), L 2 (R d ; ρ) and the operators defined therein can be summarized using a commutative diagram, where U and its inverse U −1 are unitary transformations which will be specified later for different stochastic models. More detailed explanations can be found in [38]. Throughout this paper, we denote the standard L 2 (R d ) norm as · . The inner product in L 2 (R d ; ρ) is defined as in (6), with the induced weighted norm · L 2 ρ defined accordingly. Unless otherwise stated, we only consider scalar quantities of interest. The following theoretical results were proved in [38]: Proposition 1 (Zhu and Venturi [38]). Assume that a Kolmogorov operator K of the form (3) is a maximal-accretive operator in L 2 (R d ) which satisfies the hypoelliptic conditions listed in Theorem 1 of [38]. If the spectrum of K in L 2 (R n ) is such that σ(K) ∩ iR = {0}, then there exist positive constants α and C = C(α) such that, for ũ 0 ∈ L 2 (R n ) and all t > 0, the exponential decay estimate (13) holds, where π 0 is the spectral projection onto the kernel of K. Moreover, the n-th order derivatives of the semigroup e −tK satisfy the bound (14) for some positive constants C and M . In [38], it is further shown that similar semigroup estimates hold for the orthogonal semigroup e tQKQ if P = I − Q is a finite-rank, symmetric projection operator such as Mori's projection. In particular, we have Proposition 2 (Zhu and Venturi [38]). Assume that K satisfies all conditions listed in Proposition 1.
If P : L 2 (R n ) → L 2 (R n ) is a symmetric, finite-rank projection operator, and the spectrum of QKQ in L 2 (R n ) is such that σ(QKQ) ∩ iR = {0}, then there exist positive constants αQ and C = C(αQ) such that, for all ũ 0 ∈ L 2 (R n ) and t > 0, the exponential decay estimate (15) holds, where πQ 0 is the spectral projection onto the kernel of QKQ. Moreover, the n-th order derivatives of the semigroup e −tQKQ satisfy the bound (16) for some positive constants C and M Q . The proof of Propositions 1-2 mainly uses the spectrum estimates for the operators K and QKQ and the functional calculus. The analysis is rather technical and hence will not be repeated here. In the following subsections, we focus on applying these theoretical results to specific stochastic dynamical systems.

Application to Langevin dynamics

Consider the Langevin dynamics of an interacting particle system, described by the system of SDEs (17) in R 2d , where m is the mass of each particle, V (q) is the interaction potential and ξ(t) is a d-dimensional Gaussian white noise process modeling the physical Brownian motion. The parameters γ and σ are linked by the fluctuation-dissipation relation σ = (2γ/β) 1/2 , where β is proportional to the inverse of the thermodynamic temperature. The (negative) Kolmogorov operator (3) associated with the SDE (17) is given by (18), where "·" denotes the standard dot product. If the interaction potential V (q) is strictly positive at infinity and satisfies the weak ellipticity assumption (Hypothesis 1 in [38]), then the Langevin equation (17) admits a unique invariant Gibbs distribution given by ρ eq (p, q) = e −βH /Z, where H = |p| 2 /(2m) + V (q) is the Hamiltonian and Z is the partition function. In [38], it is further proved that Proposition 1 holds for any ũ 0 ∈ L 2 (R 2d ) and t > 0 with π 0 (·) = (·), e −βH/2 e −βH/2 . Now we choose ρ eq as the weight of the Hilbert space L 2 (R 2d ; ρ eq ); then the L 2 -estimate (13) can be unitarily transformed [38] into the semigroup estimate (19) in L 2 (R 2d ; ρ eq ), where π 0 (·) = (·) eq = E[(·)]. Similarly, for the orthogonal semigroup e tQKQ we have the estimate (20). Different from the estimate for e −tK , the explicit expression of the kernel projection operator π Q 0 depends on the specific form of P. For the Mori-type projection operator P we consider, if there exists a unique observable set {w j } m j=1 such that w j , u i eq = 0 and Kw j = K * w j = u j , then π Q 0 admits the analytical form (21); otherwise π Q 0 (·) = π 0 + P. With the semigroup estimates (19) and (20), we can derive prior estimates for different observable statistics. Equilibrium state. The equilibrium Langevin dynamics was studied thoroughly in [38]. Here we only review the key estimation results, while the derivations are omitted. If the initial condition of the Langevin dynamics (17) is set to be ρ 0 = ρ(t = 0) = ρ eq , then the system is in a statistical equilibrium state, and the corresponding dynamics is called the equilibrium Langevin dynamics. For an equilibrium system, the time autocorrelation function C(t) of a scalar observable u(x(t)) = u(p(t), q(t)) is a stationary quantity satisfying C(t, s) = C(|t − s|, 0). Following the definition (11) and using the Cauchy-Schwarz inequality together with the semigroup estimate (19), for u 0 ∈ L 2 (R 2d ; ρ eq ) it is easy to get the asymptotic estimate (23) for C(t). This implies that the equilibrium correlation function C(t) approaches the equilibrium value u 0 2 eq = E 2 [u 0 ] exponentially fast. To get the EMZ equation for the observable u(t), we introduce the Mori-type projection P = ·, u 0 eq u 0 .
Substituting this into the EMZ equations (8) and (9) yields the corresponding GLE for u(t) and its projected form. Here we note again that u(t) is actually the white noise-averaged quantity E ξ(t) [u(x(t))|x(0)] and Ω = u 0 , Ku 0 eq / u 2 0 eq . By using the Cauchy-Schwarz inequality and the semigroup estimate (20), we can get the exponential convergence estimates (26) for the EMZ memory kernel K(t) and the fluctuation force f (t), where K * eq is the adjoint operator of K in L 2 (R 2d ; ρ eq ) and the specific form of the kernel projection operator π Q 0 depends on P and the observable u 0 , as we explained in (21). Nonequilibrium nonsteady state. Semigroup estimate (19) can also be used to get prior estimates for nonequilibrium Langevin dynamics. If the initial condition of (17) is set to be ρ 0 = ρ(t = 0) ≠ ρ eq , the system evolves from a nonequilibrium nonsteady state. We now study the dynamics of the nonequilibrium mean function M (t), which encodes the statistical moment information for a scalar observable u(x(t)). Using the Cauchy-Schwarz inequality, the substitution ρ 0 = ρ 0 √ ρ eq / √ ρ eq and the estimate (19), we obtain an asymptotic estimate for M (t). Different from the equilibrium case, the convergence of the nonequilibrium mean M (t) requires the finiteness of the L 2 (R 2d ) norm ρ 2 0 /ρ eq , which imposes an additional constraint on the initial probability distribution ρ 0 . For instance, if the initial probability density is set to be the Gibbs distribution ρ 0 = e −βH/4 /Z β/4 at high temperature T ∝ 4/β, we have ρ 2 0 /ρ eq = +∞, and therefore the above estimate is not sufficient to guarantee the exponential convergence of M (t) towards the equilibrium value u 0 eq . A similar conclusion can be obtained from the return-to-equilibrium estimate for the probability density function ρ(t, p, q) (see [15], Section 6.5). This estimate is a dual of (13) and holds only for suitable initial densities; hence, there is no theoretical guarantee that the marginal distribution ρ u (t) would converge to the equilibrium marginal distribution.

Application to a heat conduction model

Consider a chain of nearest-neighbor interacting anharmonic oscillators coupled to two heat baths, one at each end of the chain. Without adding external forces, the chain dynamics is determined by the system Hamiltonian. Now we attach the boundary oscillators to two thermostats with temperatures T L and T R ; the dynamics of the resulting heat conduction model [9,7,8] is then described by the system of stochastic differential equations (28), where λ L , λ R are the coupling constants between the boundary oscillators and the heat baths, and ξ L (t) and ξ R (t) are standard Gaussian white noises. The Kolmogorov backward operator K corresponding to the system of SDEs (28) is obtained accordingly from the general form (3).
Following the procedure outlined in Section 3.1, it is easy to obtain corresponding exponentially decaying estimates for the equilibrium correlation function C(t), EMZ memory kernel K(t) and the fluctuation force f (t). For the sake of brevity, the derivation details are omitted. Nonquilibrium steady state. When T L = T R , it is proved in [7] that the system admits an unique invariant measure µ. Its density ρ S is an smooth function on R 2N +4 and can be represented as ρ S =h(p, q, r)e −β0G(p,q,r) . In (33), β 0 < min{β L , β R },h(p, q, r) ∈ γ>0 L 2 (R 2N +4 ; G 2γ (p, q, r)) is a function decays faster than any polynomial as x → ∞. ρ S characterises a nonequilibrium steady state of the system. General speaking, it is hard to get an explicit expression ofh(p, q, r), hence of the probability density (33). However, we can still use Gibbs form equilibrium probability density e −βG /Z as a reference state to derive prior estimates. To this end, we consider a weighted Hilbert space L 2 (R 2N +4 ; ρ r ), where ρ r = e −2β0G /Z and 1/β 0 = T 0 > max{T L , T R }. For the nonequilibrium case, the spectrum estimate obtained by Eckmann et al in [8] still hold, which implies the following exponentially decay estimate for scalar observable u(x(t)): At the steady state, the correlation function C(t) is stationary which can be defined as (11) if the initial condition of (28) satisfies ρ(0) = ρ S . Using Cauchy-Schwarz inequality and the formal expression of the steady state density (33), we obtain Sinceh(p, q, r) ∈ γ>0 L 2 (R 2N +4 ; G 2γ (p, q, r)), for any observable u(x(t)) ∈ S (R 2N +4 ), e.g. polynomial functions, we have h (p, q, x)u 0 u 0 L 2 r < +∞. The above estimate implies the steady state correlation function C(t) decays to u 0 2 ρ S exponentially fast. We emphasize that all estimates in Section 3 can be readily generalized to the N -dimensional EMZ equation (8) and (9) where the observable u(x(t)) is a N -dimensional vector [38]. Memory kernel parametrization and the reduced-order modelling From the previous discussion, we see that the prior estimation for the EMZ equation memory kernel implies that K(t) is bounded by an exponentially decaying function. However, it does not answer what K(t) exactly is, which is an important problem for the application the EMZ equation. In this section, we turn to focus on the numerical approximation of the EMZ memory kernel. The main method we will consider is the series expansion approach. Over the years, various basis functions have been used to construct approximation schemes of the classical system Mori-Zwanzig memory kernel [1,2,20,21,19,4,36], where the expansion coefficients (parameters) are obtained through first-principle or data-driven methods. We will show that for the EMZ equation corresponding to the SDEs, similar approaches can be used to parametrize the memory kernel. In particular, we will prove that many commonly used date-driven methods are convergent due to the regularity of the orthogonal flow. The first-principle method of parametrization A first-principle method to approximate the memory kernel was considered in [36,37]. It is shown that a series expansion of the memory kernel can be derived exactly from the semigroup expansion of orthogonal semigroup e tQKQ . Following the derivation given therein, we consider the series expansion of the orthogonal semigroup: where Φ n (QKQ) is the n-th order polynomial function of operator QKQ and g n (t) the corresponding temporal basis. 
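For reference, the expansion (35) described here presumably has the form

\[
e^{tQKQ} \;=\; \sum_{n=0}^{\infty} g_n(t)\,\Phi_n(QKQ),
\]

with the pairing of temporal basis functions g_n(t) and operator polynomials Φ_n(QKQ) chosen as described next.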
The simplest choice is the Taylor expansion, where Φ n and g n are Φ n (QKQ) = (QKQ) n and g n (t) = t n /n!. Other possible choices of Φ n (n = 0, . . . , N ) are, e.g., the Faber polynomials [36], with the corresponding g n (t) = e −at J n (bt), where J n (bt) is the Bessel function of the first kind. The semigroup expansion (35) leads to function series expansions of the memory kernel. For a one-dimensional EMZ equation, a substitution of (35) into (10c) leads to the series (36), where k n is the n-th expansion coefficient, which can be understood as an operator cumulant averaged with respect to the probability density ρ. Naturally, a truncation of the expansion series (36) yields an approximation of the exact memory kernel. From a theoretical point of view, it is hard to prove the convergence of expansion (36) for nonlinear SDEs due to the unboundedness of the operator QKQ. However, the validity of this approximation method has been verified numerically for linear and nonlinear Hamiltonian systems in statistical equilibrium [36,37]. First-principle method to calculate k n . The first-principle method calculates k n via the evaluation of the operator cumulants in (36). This can be realized using a recursive scheme and the associated combinatorial algorithm introduced in [37]. The original method was developed for the MZ equation of deterministic Hamiltonian systems. However, it can be readily generalized to the EMZ equation of stochastic systems with some slight modifications of the derivation. Here we only briefly review the main idea of the algorithm and refer to [37] for detailed explanations. Without loss of generality, it is convenient to consider a one-dimensional Mori projection (37) and to introduce the notation (38). Clearly, if we are given {µ 1 , . . . , µ n+2 }, then we can easily compute {k 1 , . . . , k n } in (36), and therefore the n-th order approximation of the memory kernel K(t) for any given polynomial function Φ n . For example, if Φ n (QKQ) = (QKQ) n then k q = µ q+2 /q! (q = 0, . . . , n). Directly evaluating µ i is a daunting task, since it involves taking operator powers and averages of the operator QKQ, which is an integro-differential operator by definition. However, the recursive formula (39) indicates that µ i can be constructed iteratively from γ i . The proof of (39) is provided in Appendix A. Recurrence relation (39) shifts the problem of computing {µ 1 , . . . , µ n } to the problem of evaluating the coefficients {γ 1 , . . . , γ n } defined in (38). This can be done iteratively using the enumerative combinatorial algorithm introduced in [37], with the Liouville operator L used therein replaced by the Kolmogorov operator K. For the sake of brevity, we omit technical details, which can be found in [37]. In Appendix B, we provide the derivation of the combinatorial algorithm for the Langevin dynamics (17) of the Fermi-Pasta-Ulam (FPU) chain.

The data-driven method of parametrization

Different from the first-principle method, there are many established data-driven methods which can be used to parametrize the memory kernel. Generally speaking, these methods use data collected by simulating the stochastic dynamics (1) to approximate the expansion coefficients k n . The expansion series can be formulated in the time domain as well as in the frequency domain [19]. In this section, we are only concerned with the time-domain expansion and use the following ansatz to approximate K(t): K(t) ≈ ∑_{n=0}^{N} k n φ n (t) (40). In (40), {φ n (t)} is a set of basis functions defined in some open interval I ⊂ R + .
The common choice of which are the orthogonal functions in a weighted Hilbert space L 2 (I, ω). Under this setting, (40) becomes a generalized Fourier series. Hence, we can apply established results in approximation theory, say [13], to prove the convergence of the series expansion (40) as N → ∞. As a preparation, we first use the Cauchy-Schwartz inequality and semigroup estimate (16) to obtain the upper bounds of the n-th order derivative 1 of the memory kernel: According to the definition of B Q (t) in (16), estimate (42) implies that K (n) (t) is bounded by a continuous function of time in domain I = (T 1 , T 2 ), where 0 ≤ T 1 ≤ T 2 < +∞. Hence for suitable weight function ω, in the open interval I and H k ω (I) is the weighted Sobolev space defined in I. This regularity provides sufficient conditions for the convergence of expansion (40). If {φ n (t)} is chosen to be, say the shifted Jacobi-type polynomials defined in I, then 1 The definition of the n-th order derivative of K(t) is a rather technical problem. In (42), it is formally expressed using the time derivative of e tQKQ , i.e. K (n) (t) = K * e tQKQ (QKQ) n QKu 0 , u 0 ρ. Mathematically, K (n) (t) is actually a weak derivative defined via the Dunford functional integral: where R(λ, QKQ) = (λ − QKQ) −1 is the resolvent of operator QKQ and ∂U is the boundary of the cusp U which contains the spectrum of QKQ. Note that the right hand side of (41) is a smooth function of t, hence differentiable to an arbitrary order. More details on the weak convergence of the functional integral can be found in [15,38]. according to Theorem 6.2.4 in [13], the following convergence estimate holds for any 0 < m < N : Since K(t) is naturally defined in domain I = (0, +∞), we can also set {φ n (t)} to be the standard Laguerre polynomial with the weight function ω = e −t/2 . For fixed n ∈ N + , using (42) we can get which yields |K (m) (t)|t m/2 ∈ L 2 ω (I). According to Theorem 6.2.5 in [13], this leads to the following convergence estimate for any 0 < m < N : The semigroup estimate (16) we used in the above derivation holds in the uniform topology for any scalar observable u ∈ L 2 (R n ; ρ). As a consequence, the convergence rate we obtained on (43) and (45) are not optimal. But we already know the convergence is spectral, i.e. faster than any polynomials. As far as we are concerned, this is the first convergence result for data-driven methods used in the Mori-Zwanzig framework. We also note that error estimate (43) only implies the approximation of K(t) within (T 1 , T 2 ) is accurate. In order to maintain low extrapolation error, in applications we will use basis functions defined in I = (0, +∞) to approximate the memory kernel. Data-driven method to calculate k n . A substitution of the truncated expansion (40) into the projected EMZ equation (5) leads to the approximation scheme for Pu(t). Since for Mori's projection, we have Pu(t) = C(t) according to the definition (11), the scheme reads: where the stationary correlation function C(t) can be constructed from Monte-Carlo (MC) simulation data of the numerical solution to SDE (1), and the expansion coefficients k n can be obtained by solving numerically the following regression problem: In Section 5, we will use the LASSO regression [30] to solve (46) and get the approximated parameter set {k n } N n=1 . 
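To make the procedure concrete, here is a minimal numerical sketch of the regression step (46). It assumes that the correlation function C(t) is available on a uniform grid starting at 0 (e.g. from MC simulation), takes Laguerre functions φ_n(t) = L_n(t) e^{-t/2} as one concrete basis choice, and assumes the projected EMZ equation in the form dC/dt = Ω C(t) + ∫_0^t K(t−s) C(s) ds; a different sign convention for the memory integral only flips the sign of the fitted coefficients. The truncation order, grid and LASSO penalty are illustrative, and scikit-learn's Lasso is used as the regression solver.

import numpy as np
from numpy.polynomial.laguerre import lagval
from sklearn.linear_model import Lasso

def fit_memory_kernel(t, C, Omega, N=20, alpha=1e-4):
    """Fit K(t) ~ sum_n k_n L_n(t) exp(-t/2) from C(t) via LASSO on the
    discretized Volterra equation dC/dt = Omega*C + int_0^t K(t-s) C(s) ds.
    Assumes t is a uniform grid starting at 0."""
    dt = t[1] - t[0]
    dCdt = np.gradient(C, dt)                              # target derivative
    # Basis on the grid: phi[n, i] = L_n(t_i) * exp(-t_i / 2)
    phi = np.array([lagval(t, np.eye(N + 1)[n]) for n in range(N + 1)]) * np.exp(-t / 2)
    # Design matrix: A[i, n] = sum_{j<=i} phi_n(t_i - t_j) C(t_j) dt  (Riemann sum)
    A = np.zeros((len(t), N + 1))
    for i in range(len(t)):
        A[i, :] = phi[:, : i + 1][:, ::-1] @ C[: i + 1] * dt
    y = dCdt - Omega * C
    reg = Lasso(alpha=alpha, fit_intercept=False, max_iter=50000).fit(A, y)
    return reg.coef_                                       # approximated {k_n}

def eval_kernel(t, k):
    """Evaluate the fitted kernel K(t) = sum_n k_n L_n(t) exp(-t/2)."""
    return lagval(t, k) * np.exp(-t / 2)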
When compared with the first-principle method, the data-driven method in general has wider range of applicability but also demands more computational power because it requires the MC simulation data of the full dynamics. Reduced-order modeling With the memory kernel K(t) obtained using the first-principle or the data-driven parametrization method, we can now work on the reduced-order modeling for any low-dimensional observables u(x(t)) of the stochastic system. Under the Mori-type projection, one may see that the projected EMZ equation (9) and the full dynamics (8) (with random initial condition ρ 0 = ρ S ) shares the memory kernel K(t). In order to build a reduced-order model (ROM) for u(x(t)) using the EMZ equation (8), it therefore boils down to the approximation of the fluctuation force f (t). In the Mori-Zwanzig framework, f (t) is formally given by e tQKQ QKu 0 which is also a stochastic process since the initial condition u 0 is random. Due to the randomness, it is hard to use techniques such as the operator series expansion (35) to approximate f (t). However, since u(t) in the steady state is a stationary stochastic process, f (t) is also stationary and one may use the truncated Karhunen-Loéve (KL) expansion series to approximate it. Without loss of generality, we assume f (t) ρ = 0, then the KL expansion for f (t) can be written as: where {η k } K k=1 are the random coefficients and {λ k , e k } K k=1 are, respectively, eigenvalues and eigenfunctions of the homogeneous Fredholm integral equation of the second kind: where T is a certain numerical integration time and f (t), f (s) ρ is the time autocorrelation function of f (t). In this paper, we only consider a specific case which allows us to determine the random coefficients {η k } K k=1 and the correlation function f (t), f (s) ρ uniquely. To this end, we assume that the observable u(t) is a Gaussian process and satisfies the second fluctuation-dissipation theorem: f (t), f (s) ρ = K(|t − s|). It can be further verified [37] that f (t) is also a Gaussian processes and its KL expansion random coefficients {η k } K k=1 are necessarily i.i.d Gaussian random variables satisfying η i η j = δ ij . As a result, we obtain the following ROM for u(t): By sampling the random coefficients {η k } K k=1 and then solving numerically (49) with a proper numerical integrator, we obtain a ensemble of sample trajectories which, in principle, would imitate the dynamics of u(x(t)) in the steady state. In Section 5, we will also calculate the statistics from these simulated sample trajectories and compare them with the exact ones obtained from the molecular dynamics (MD) simulations to assess the effectiveness of the ROM. The modeling of f (t) is harder when the observable u(t) is a non-Gaussian process. In fact, this is a topic which is worth independent investigations. Here we only note some developed methods to address this problem. Specifically, Chu and Li [4] used a multiplicative noise to approximate f (t). Zhu and Venturi [37] introduced a sample-based, transformated KL expansion to approximate the fluctuation force. In our recent work [35], a modified Sakamoto-Graham algorithm were proposed to do the modelling. On the other hand, as we briefly mentioned at the end of Section 2, the second fluctuation-dissipation theorem is not generally valid for stochastic system observables. Further explorations reveal that there exists a generalized second fluctuation-dissipation theorem for stochastic systems which can be used in reduced-order modelling. 
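For the Gaussian case just described, a minimal numerical sketch of the construction (47)-(49): the covariance of f(t) is taken from the second fluctuation-dissipation relation ⟨f(t)f(s)⟩ = K(|t−s|) with K positive semidefinite, the Fredholm problem (48) is discretized on a uniform grid so that the KL modes come from an eigendecomposition of the covariance matrix, and the resulting GLE is integrated with forward Euler only to keep the sketch short (the paper uses a 3rd-order Adams-Bashforth scheme). The GLE is written here with the convention du/dt = Ω u − ∫_0^t K(t−s) u(s) ds + f(t); with the opposite convention the sign of the memory term should be flipped. The kernel, grid and truncation order in the example call are placeholders.

import numpy as np

def kl_fluctuation_force(kernel, t, n_modes, rng):
    """Sample f(t) on the grid t via a truncated KL expansion, assuming
    <f(t)f(s)> = K(|t-s|) (second fluctuation-dissipation relation)."""
    dt = t[1] - t[0]
    cov = kernel(np.abs(t[:, None] - t[None, :]))      # covariance matrix K(|t_i - t_j|)
    lam, vec = np.linalg.eigh(cov * dt)                # discretized Fredholm eigenpairs (48)
    lam, vec = lam[::-1], vec[:, ::-1]                 # sort eigenvalues in descending order
    lam = np.clip(lam[:n_modes], 0.0, None)            # guard against round-off negatives
    eta = rng.standard_normal(n_modes)                 # i.i.d. N(0,1) KL coefficients
    return (vec[:, :n_modes] / np.sqrt(dt)) @ (np.sqrt(lam) * eta)

def rom_trajectory(kernel, Omega, u0, t, n_modes=50, seed=0):
    """Integrate du/dt = Omega*u - int_0^t K(t-s) u(s) ds + f(t) with forward Euler."""
    rng = np.random.default_rng(seed)
    dt = t[1] - t[0]
    f = kl_fluctuation_force(kernel, t, n_modes, rng)
    u = np.empty_like(t)
    u[0] = u0
    for i in range(len(t) - 1):
        mem = np.sum(kernel(t[i] - t[: i + 1]) * u[: i + 1]) * dt   # memory integral
        u[i + 1] = u[i] + dt * (Omega * u[i] - mem + f[i])
    return u

# Example with a placeholder exponentially decaying kernel and Omega = 0:
t = np.linspace(0.0, 20.0, 1001)
u = rom_trajectory(lambda s: 0.5 * np.exp(-np.abs(s)), Omega=0.0, u0=1.0, t=t)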
We refer to [35] for more technical details.

Applications

In this section, we will use the Langevin dynamics of a Fermi-Pasta-Ulam (FPU) chain model to numerically verify the theoretical results obtained in the previous sections and to validate the parametrization methods for the EMZ memory kernel. To this end, we consider the Hamiltonian (50) of the FPU chain, where the potential energy is given by (51) and {q j , p j } are, respectively, the generalized coordinate and momentum of the j-th oscillator. In addition, the periodic boundary conditions q 0 = q N and p 0 = p N are imposed, and the total number of oscillators is set to N = 100. For such a system, it is convenient to work in new, non-canonical coordinates {r, p}, where r j = q j − q j−1 is the distance between two neighboring oscillators. In the new coordinates, the Langevin dynamics (17) for the stochastic FPU model takes the form (52), and the corresponding Kolmogorov backward operator is explicitly given by K = L(p, r) + S(p), where L(p, r) is the Liouville operator in the new coordinates {r, p} and S(p) is an advection-diffusion operator involving p. In Figure 1, we display the sample paths of the momentum p 50 (t) of a tagged oscillator for the SDE (52) with different parameters.

Figure 1: Sample path of the tagged oscillator momentum p 50 (t). We display the result for the stochastic FPU system (52) with weak (θ = 0.1) and strong nonlinearity (θ = 1) at high (β = 1) and low (β = 20) temperature.

Memory kernel parametrization

Stochastic FPU chain with weak nonlinearity. We first consider the equilibrium dynamics of the FPU chain with weak nonlinearity. To this end, we set the modeling parameters ν = m = 1, γ j = γ = 1 and θ = 0.1. The initial condition of (17) is set to be x(0) ∼ ρ eq , where ρ eq = e −βH is the equilibrium Gibbs distribution. For the weakly nonlinear system, we aim to verify the following claims: i) the observable statistics, in particular the auto-correlation function C(t) and the corresponding memory kernel K(t) defined in the projected EMZ equation (25), decay exponentially to their equilibrium values; ii) the first-principle method introduced in Section 4.1 yields an accurate approximation of the memory kernel K(t), and therefore of C(t). For claim i), we note that the FPU potential energy defined in (51) satisfies the weak ellipticity condition (Hypothesis 1) in [38]. Therefore, the theoretical results in Section 3.1 hold for any polynomial-type observable. Now we choose the momentum p j (t) of a tagged oscillator as the quantity of interest and use Mori's projection P(·) = (·), p j (0) eq / p 2 j (0) eq to derive the projected EMZ equation (25). Some simple calculation implies Ω = −1; hence, the projected EMZ equation for the momentum yields the evolution equation for the time correlation function C(t) = p j (t), p j (0) eq . According to estimate (23), the auto-correlation function C(t) decays to the equilibrium value p j (0) 2 eq = 0 exponentially fast. In the non-canonical coordinates {p j , r j }, we have Kq j = K * eq q j = p j .

Figure 2: Temporal auto-correlation function of the tagged oscillator momentum p j (t) for the weakly nonlinear FPU system at different temperatures T ∝ 1/β. We compare the results obtained by calculating the EMZ memory kernel from first principles using 14-th order Faber polynomials with results from MC simulation (10 6 sample paths). In the subplots, we display |C(t)/C(0)| and the exponentially decaying upper bound ce −αt with an estimated decay rate α.
Moreover, since the periodic boundary conditions are imposed, q j cannot be written as a function of {p j , r j } (the linear transformation {q j } N j=1 → {r j } N j=1 is not invertible). Hence, the kernel projection operator π Q 0 of QKQ admits the explicit form π Q 0 (·) = E[(·)] + P(·), and the memory kernel estimate is given by (26). Then we obtain K * eq u 0 , π Q 0 QKu 0 eq = 0 and hence an exponentially decaying bound for the memory kernel, |K(t)| ≤ Ce −α Q t , where C = C(p i (0)). In Figure 2, we plot the auto-correlation function C(t) obtained by Monte-Carlo (MC) simulation (10 5 sample paths) for FPU systems with mild nonlinearities (θ = 0.1) at different temperatures (β = 1 and β = 20). The corresponding memory kernel K(t) is shown in Figure 3, which is obtained by the first-principle parametrization method. In both plots, we can see that C(t) and K(t) approach the predicted asymptotic values C(t = ∞) = 0 and K(t = ∞) = 0 exponentially fast. For claim ii), we adopt the MZ-Faber expansion of e −tQKQ [36] to approximate the memory kernel, where e −at J n (bt) is the basis function. The simulation result is displayed in Figure 2. It can be seen that the MZ-Faber approximation of the EMZ memory kernel yields relatively accurate results for FPU systems with mild nonlinearities at both low (β = 20) and high temperature (β = 1). Stochastic FPU chain with strong nonlinearity. When the modeling parameters of the stochastic FPU chain are set to ν = m = 1 and θ = 1, we get a strongly nonlinear FPU chain. The first-principle method introduced in Section 4.1 can still be applied here to approximate the EMZ memory kernel. However, a large θ will lead to significant numerical instabilities at large t when calculating K(t) and C(t) [37]. Hence, in this paragraph, we adopt the data-driven method to approximate the memory kernel. To this end, we use the standard Laguerre polynomials [13] and the Faber series [36] to construct the data-driven approximation scheme for the EMZ memory kernel. In particular, the LASSO regression is used to solve (46) numerically to get the approximated parameters {k n } N n=1 . The data-driven method is used to verify the following claims: iii) the auto-correlation function C(t) defined in the projected EMZ equation (25) decays exponentially to its equilibrium value; iv) the data-driven method introduced in Section 4.1 yields effective approximations of the memory kernel K(t), and therefore of C(t). To demonstrate iii), we use MC simulation (10 5 sample paths) to calculate the momentum auto-correlation function. It is shown in the subplots of Figure 4 that C(t) defined in the projected EMZ equation (25) decays to 0 exponentially fast. To validate iv), we adopt the Faber series and the standard Laguerre polynomials as basis functions to construct the data-driven approximation schemes for K(t). These calculation results, along with the one obtained by the established rational approximation method [19], are presented in Figure 4. We can see that the data-driven method leads to accurate predictions of C(t).

Reduced-order modeling

In this subsection, we consider the equilibrium dynamics of the FPU chain with strong nonlinearity. However, we set the modeling parameters slightly differently from above, with ν = m = θ = 1, γ 50 = 1 and γ j = 0 for j ≠ 50. It is easy to verify that with this setting, the equilibrium Gibbs distribution ρ eq = e −βH is still the stationary distribution of (52) with ∂ t ρ eq = K * ρ eq = 0, where K * is the adjoint of K. Hence, (52) yields an equilibrium dynamics.
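Before turning to the ROM itself, a minimal sketch of how the MC reference data used throughout this section can be generated: an Euler-Maruyama discretization of the Langevin FPU dynamics, written here in the original (q, p) coordinates, with the FPU potential assumed to have the standard form V(r) = ν r²/2 + θ r⁴/4 and σ = (2γ/β)^{1/2} as stated above, followed by an empirical estimate of the tagged-momentum autocorrelation. Step size, integration length and sample count are purely illustrative and much smaller than in the actual study, and no burn-in stage is included here.

import numpy as np

def fpu_force(q, nu=1.0, theta=1.0):
    """Force -dH/dq_j for the FPU chain with V(r) = nu*r**2/2 + theta*r**4/4 (assumed form)."""
    r = q - np.roll(q, 1, axis=-1)            # r_j = q_j - q_{j-1}, periodic chain
    dV = nu * r + theta * r**3                # V'(r_j)
    return np.roll(dV, -1, axis=-1) - dV      # -dH/dq_j = V'(r_{j+1}) - V'(r_j)

def simulate_tagged_momentum(n_paths=200, N=100, beta=1.0, gamma=1.0, m=1.0,
                             theta=1.0, dt=1e-3, n_steps=5000, tag=50, seed=0):
    """Euler-Maruyama integration of the Langevin FPU chain; returns p_tag(t) for all paths."""
    rng = np.random.default_rng(seed)
    q = np.zeros((n_paths, N))                                   # not an equilibrium draw;
    p = rng.standard_normal((n_paths, N)) * np.sqrt(m / beta)    # burn-in needed in practice
    sigma = np.sqrt(2.0 * gamma / beta)                          # fluctuation-dissipation relation
    out = np.empty((n_paths, n_steps))
    for k in range(n_steps):
        out[:, k] = p[:, tag]
        dW = rng.standard_normal((n_paths, N)) * np.sqrt(dt)
        q, p = q + dt * p / m, p + dt * (fpu_force(q, theta=theta) - gamma * p) + sigma * dW
    return out

def autocorrelation(x):
    """Empirical stationary autocorrelation C(k*dt) = <x(t+k*dt) x(t)>, averaged over paths and time."""
    x = x - x.mean()
    T = x.shape[1]
    return np.array([np.mean(x[:, :T - k] * x[:, k:]) for k in range(T)])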
Since in equilibrium p 50 (t) is obviously a Gaussian process, we can directly apply the ROM (49) to simulate the dynamics of p 50 (t). Specifically, we have the reduced-order model (55), where by simple calculations we get Ω = 0. By sampling η k in (55) and then solving it numerically using the 3rd-order Adams-Bashforth time integration scheme, we can get the solution of the ROM, which can be regarded as a realization of p 50 (t) in equilibrium. Figure 5 compares the sample trajectories of the ROM and the paths of p 50 (t) obtained by MC simulations. One can see that they are pretty much comparable with each other. We also calculate the time autocorrelation functions C(t)/C(0) and the stationary marginal distributions ρ p50 of the stochastic process from the simulated sample paths. The correlation time of p 50 (t) is obviously longer than what was obtained for the previous example. This difference is also reflected in the sample trajectories displayed in Figure 1 and Figure 5, because the former ones are rougher. The obtained results indicate that the ROM (55) imitates the equilibrium dynamics of p 50 (t). We emphasize that the methodology also applies to nonequilibrium systems in the steady state.

Figure 4: Temporal auto-correlation function of the tagged oscillator momentum p j (t) for the strongly nonlinear FPU system at different temperatures T ∝ 1/β. The MC simulation results (10 6 sample paths) of the correlation function are compared with the ones obtained by the data-driven memory kernel using the Faber series (20th order) and the standard Laguerre polynomials (20th order). In the subplots, we display |C(t)/C(0)| and the exponentially decaying upper bound ce −αt with an estimated decay rate α.

Summary

In this paper, we mainly focus on the application of the effective Mori-Zwanzig (EMZ) equation to the reduced-order modeling of stochastic systems. In particular, we showed that the semigroup estimates for e −tK and e −tQKQ can be used to derive exponentially decaying upper bounds for various observable statistics associated with the EMZ equation, including the auto-correlation function C(t), the EMZ memory kernel K(t) and the fluctuation force. The results are presented for the Langevin dynamics of an anharmonic oscillator chain and for the heat conduction model, in and out of statistical equilibrium. In addition, we introduced both first-principle and data-driven methods to parametrize the EMZ memory kernel, and demonstrated that the regularity of K(t) enables us to prove the convergence of frequently used data-driven approximation schemes. To the best of our knowledge, this is the first theoretical convergence result regarding the approximation of the memory kernel. All these theoretical findings are verified numerically by simulating the Langevin dynamics for a Fermi-Pasta-Ulam (FPU) chain model. With the same example, we also demonstrated the effectiveness of the numerical methods within their range of applicability. We conclude by emphasizing that the analytical results obtained in this paper can be generalized and applied to the EMZ equation of other hypoelliptic stochastic systems. The numerical methodology we considered can be used to build effective reduced-order models for nonequilibrium systems in the steady state.

Figure 5: Comparison of the dynamics of the particle momentum p 50 (t) generated by the MC simulation and the ROM (55). The displayed results are for a stochastic FPU system with strong nonlinearity (θ = 1) at high temperature β = 1 (first row) and low temperature β = 20 (second row).
In the first column, we compare the simulated sample paths. The time autocorrelation functions C(t)/C(0) (second column) are obtained by averaging a cluster of the sample trajectories. The third column compares the stationary distributions of the stochastic process ρ p50 , which are obtained via kernel density estimation. By mathematical induction, the statement (A.1) holds for all integers n ≥ 0. Applying the operator identity (A.1) to the observable u(0) and then using the definitions (37) and (38), we obtain the recurrence relation (39). For the M-dimensional finite-rank projection (7), using the same trick we get the matrix form of the recurrence relation, where M n and Γ n are M × M matrices, defined as PK(QK) (n−1) u(0) = M n u(0) and PK n u(0) = Γ n u(0), and polynomial coefficients a (n) bi . With such maps available, we can transform the combined index set I (n) (representing K n r j ) to I (n+1) (representing K n+1 r j ). On the other hand, since K * eq = −L(p, r) + S(p), it is easy to obtain the updating rule for the corresponding index set I * (n) from the formal expression (B.6). With these results available, we can immediately determine the coefficients γ n in (38) by averaging over the probability density ρ eq as

\[
\gamma_n \;=\; \frac{\langle K^n r_j,\, r_j\rangle_{eq}}{\langle r_j,\, r_j\rangle_{eq}}
\;=\;
\begin{cases}
\dfrac{\big\langle K^{n/2} r_j,\; (K^{*}_{eq})^{n/2} r_j\big\rangle_{eq}}{\langle r_j,\, r_j\rangle_{eq}}, & n\ \text{even},\\[2ex]
\dfrac{\big\langle K^{(n+1)/2} r_j,\; (K^{*}_{eq})^{(n-1)/2} r_j\big\rangle_{eq}}{\langle r_j,\, r_j\rangle_{eq}}, & n\ \text{odd}.
\end{cases}
\tag{B.7}
\]

Using formulas (38) and (39) and the exact expression of the polynomial Φ n (QKQ), we can get the expansion coefficients k n in (36).
The Opportunities and Dilemmas of the Transformation of Traditional Agricultural Towns into Modernized Agriculture — Based on the Experience of Shilong Town Since the reform and opening up, Chinese grain has grown continuously and agricultural construction has made great achievements. However, with Chinese economic development and industrial structure upgrading and transformation, the traditional agricultural development model has been unable to keep up with the needs of social development, so there is an urgent need for development to adapt to modern productivity. Traditional agricultural towns have inherent advantages but also face huge challenges to achieve agricultural transformation. This article collects information through interviews, and analyzes and summarizes the problems encountered in the transformation of Shilong Town, where traditional agriculture is the leading industry, in the process of transforming into a new type of agriculture. In response to these problems, solutions were proposed to provide the "stone dragon experience" for the transformation of traditional agricultural towns in Sichuan. INTRODUCTION Chinese modern agriculture has been developed for more than 30 years, and it has accumulated certain experience and achieved certain achievements in the development of rural modern agriculture. At present, China is accelerating the pace of rural revitalization. Agricultural modernization is an important part of rural revitalization. It can be said that the construction of agricultural modernization in traditional agricultural towns will push the rural revitalization strategy forward a big step. However, we must also realize that traditional agricultural towns also have many problems in the process of developing modern agriculture due to factors such as geographic location, economic status, and demographic structure. Unless these problems faced by traditional agricultural towns are solved, agricultural transformation cannot be truly achieved. If achieved, rural development cannot be achieved. Therefore, how to formulate a prescription to solve the problem of the transformation of traditional agricultural towns is a problem that must be solved at present. This article takes Shilong Town, Liangshan Prefecture as an example, discusses the difficulties faced by traditional agricultural towns in the process of developing modern agriculture, and proposes solutions to the difficulties, which can provide references for the modernization of other similar traditional agricultural towns. RESEARCH METHODS This study uses comparative analysis, literature analysis, and field investigation methods to analyze the current situation of the development of modern agriculture in Shilong Town, Liangshan Prefecture and the problems encountered in the development process, and study the countermeasures to solve the plight of the development of modern agriculture in traditional agricultural towns in China. The Concept of Modern Agriculture Modern agriculture is a new form of agriculture developed on the basis of primitive agriculture and traditional agriculture. The basis of modern agriculture is social progress and technological progress. After the new development concept of "innovation, coordination, green, openness and sharing" was put forward, Chinese modern agriculture has been endowed with new connotations. We pursue a modern agricultural production method in China based on the principles of ecological economy and green development as the goal. 
Modern agriculture has become an effective way to achieve sustainable agricultural development in China and to speed up the construction of beautiful rural areas, as well as an important means of upgrading China's traditional agriculture to achieve high-quality rural development. Characteristics of Modern Agriculture The development of traditional Chinese agriculture from a farming society to the present day is showing new characteristics and new problems. Under the joint impetus of capital, the market, and industry, on the one hand, China's traditional agriculture is rapidly being replaced by modern agriculture; on the other hand, traditional agricultural towns have exposed some new problems, arising from the characteristics of modern agriculture, in the process of agricultural upgrading. Sustainable Agriculture In order to avoid the negative impact of industrial modernization on human development, and to take into account future food security, human development, environmental protection and other issues, modern agriculture is a sustainable agricultural production model. On the one hand, modern agriculture pursues the convenience that industrial clusters bring to agricultural development and pursues economic benefits; on the other hand, it also pursues the sustainability of development and the pursuit of social benefits. High-quality Agriculture Modern agriculture is a technology-intensive and capital-intensive agricultural production method. Modern agriculture is agriculture that takes product quality as its lifeline, that realizes the specialization of the agricultural production structure, that aims at high economic returns, and that is open, driven by technological innovation and management innovation. Modern high-quality agriculture ensures quality assurance throughout the whole process "from field to table". THE STATUS QUO OF MODERN AGRICULTURAL DEVELOPMENT IN SHILONG TOWN Shilong Town is located in Mianning, Sichuan. Its agriculture is dominated by rice, barley, and corn, alongside flue-cured tobacco, vegetables and sericulture; silkworm rearing has reached a considerable scale and sericulture cooperatives have been established. Animal husbandry is dominated by pig breeding, and the town is a pollution-free pig production base of Sichuan Province, with a large-scale pig breeding base built in the town. The town has 1 junior high school, 5 village primary schools, and 1 health center. At present, Shilong Town is vigorously developing new types of planting and breeding, building more than 1,000 acres of vegetable and fruit greenhouses and dozens of new-style family farms. Compared with traditional agriculture, Shilong Town has carried out a new round of land reform in recent years through land rectification and land transfer. In the past, each household's land was relatively scattered, management was difficult, and the harvest of the crops depended on the weather. Nowadays, cooperatives have been built through land contracting, new types of fruit and vegetable cultivation have moved into greenhouses, and the output and value of vegetables and fruits have increased. Through field surveys and visits, we also found that, in the process of modern agricultural construction, the farmers of Shilong Town are facing great pressure.
The Lack of Government Management Functions During the interviews, it was discovered that local farmers are mainly engaged in production and operation in a spontaneous manner. Inquiries at the local cooperative found that the cooperative was established on the basis of kinship ties. The government's participation in the establishment and operation of the cooperative and in the production and sale of its products is relatively small, and government guidance is even lacking, which has left many hidden dangers for the modernization of local agriculture. Backward Rural Infrastructure The investigation found that the area lacks a large-scale trading market. At present, the only trading market has inconvenient transportation and imperfect management. The lack of a good trading market keeps the volume of local trade in agricultural and sideline products low. Local residents can only ship their products to markets in other townships, which increases the overall cost of the products and reduces profits. Large Constraints on Agricultural Financing The current planting is mainly divided into three forms: cooperatives, large-scale contracting, and individual contracting. Cooperatives and large households account for 80% of all planting. Although the scale of cooperatives and large households is large, the survey found that the first two years of greenhouse planting require continuous capital investment without return, and it is difficult for farmers engaged in spontaneous family-style planting and breeding to obtain financing without third-party guarantees. Meanwhile, the small scale of individual planting makes it even more difficult to obtain loans and even harder to manage. Single Industry Development An investigation of the area's industrial structure found that local industry is dominated by the primary industry, the secondary industry has only just started, and the tertiary industry is almost nonexistent. The Government Strengthens Guidance and Improves Supporting Policies The local government should strengthen guidance and help villagers establish cooperatives to carry out larger-scale agricultural production. Only in this way can a new round of economies of scale form in rural areas and help farmers increase production and income. At the same time, the government urgently needs to improve the supporting policies for agricultural development, strengthen infrastructure construction, and provide greater policy support and economic subsidies to help growers and farmers establish more scientific management teams, so that the rural primary industry can continue to develop healthily. Guiding Industrial and Commercial Capital to Invest in Agriculture For agriculture to develop, the funding problem must be resolved. Farmers' low risk-bearing capacity and lack of third-party credit guarantees make it difficult to finance agricultural development. To relieve the constraints on agricultural financing, the government should first introduce loan policies to guide banks and credit cooperatives to lend to farmers; second, the government can conduct investment promotion activities to attract outside capital. At the same time, the government should also pay attention to the risks brought by the influx of industrial and commercial capital, and prevent the disorderly entry of capital from creating huge financial risks for farmers.
Development of Diversified Industries Only by taking the road of diversified development can agricultural modernization go further. A single industry often cannot withstand the test of the market; once the market becomes slightly turbulent, it may deal a devastating blow to the local economy. In the context of the current upgrading of the industrial structure, agriculture should also follow closely and extend its industrial chains. Shilong Town should develop agricultural and sideline product processing, agricultural leisure tourism and other related industries on the basis of planting and breeding, adopt multiple strategies to build a characteristic industrial chain, create a new development model of integrated operation and socialized services, and explore a new model for the development of traditional agricultural towns. CONCLUSION In China, agriculture has played a very important role in economic development. For a long time, agriculture served other industries, but with the country's rural revitalization strategy, rural agricultural development has once again taken the spotlight. How should Chinese agriculture develop in the context of economic transformation and industrial restructuring? This is a major undertaking that the whole of society needs to work on. The development of the country cannot be separated from the revitalization of agriculture, and the people's livelihood cannot be separated from the development of agriculture. The development of traditional agriculture to this day requires a modern transformation. Modern agriculture is an important part of China's rural revitalization and a new engine of China's rural economic growth. AUTHORS' CONTRIBUTIONS Yaping Jiang organized the research work and was responsible for collating the survey data and writing the paper. Binyu Hu participated in the field research and was in charge of keeping records during the field investigation.
2021-08-27T17:01:11.737Z
2021-08-04T00:00:00.000
{ "year": 2021, "sha1": "20e4ef63306c4d2c3a9a0e9e105e284c8ca5e1dc", "oa_license": "CCBYNC", "oa_url": "https://www.atlantis-press.com/article/125959268.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a4e62b698b8f4eb3faa9a1994468d385c79946de", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Economics" ], "extfieldsofstudy": [ "Business" ] }
202717928
pes2o/s2orc
v3-fos-license
Prevalence and associated risk factors of anaemia among women attending antenatal and post-natal clinics at a public health facility in Ghana Background Anaemia among pregnant women and post-partum mothers is a public health challenge in Ghana, especially in the Volta Region. While literature abounds on anaemia among pregnant women, the same cannot be said for anaemia among post-partum mothers in the region. This study, therefore, examined the prevalence and associated risk factors of anaemia among women attending antenatal care and post-natal care. Methods This descriptive cross-sectional survey recruited 409 pregnant women and 194 post-natal mothers attending antenatal and post-natal care at the Hohoe Municipal Hospital. Background characteristics were collected using a semi-structured questionnaire, blood samples were analysed for the presence of anaemia and malaria parasitaemia, and folders were reviewed for estimated blood loss. Results We found the prevalence of anaemia among pregnant women and post-partum mothers to be 33 and 16% respectively. Higher malaria parasitaemia (2%) was found in pregnant women compared with postpartum mothers (1%). We found that 4% of post-partum mothers had abnormal blood loss (301 ml to 500 ml) whereas 5% of them had postpartum haemorrhage (> 500 ml) during childbirth. A univariate logistic regression of anaemia status on some risk factors in pregnant women showed no significant association between anaemia and any of the risk factors. Among post-partum mothers, only mothers' age was statistically significant in the univariate analysis [COR = 0.27 (95% CI: 0.103, 0.72); p = 0.008]. Mothers aged 20–29 were 73% less likely to be anaemic. Conclusion The prevalence of anaemia among pregnant women found in this study points to a moderate public health problem according to WHO cut-off values for the public health significance of anaemia. Strategies should therefore be put in place to encourage thorough health education and promotion programmes among both pregnant and post-partum women. Background Anaemia, defined as a low level of haemoglobin (Hb) in the blood, is a global public health problem that affects low-, middle-, and high-income countries, with adverse effects on the health of populations [1]. Anaemia is defined as an Hb level lower than 11.0 g/dl (Hb < 11.0 g/dl) in pregnant women [2], and an Hb level lower than 10.0 g/dl (Hb < 10.0 g/dl) in postpartum mothers [3]. Though the condition affects males, pregnant and non-pregnant women as well as children are the most vulnerable [4,5]. Anaemia is multifactorial in aetiology but is mainly caused by iron deficiency [6,7]. Anaemia can be dangerous to the health of both a pregnant woman and her baby if left untreated [8], as it increases the risk of maternal and child mortality and also has a significant negative effect on both the cognitive and physical development of the child [4,9]. Global anaemia prevalence is estimated by the WHO to be 38% in pregnant women and 29% in all women of reproductive age [1]. According to the WHO cut-off points for the significance of anaemia, a prevalence of ≥40% is considered a severe public health problem [1]. In Africa, a 2013 cross-sectional study conducted among 384 pregnant women in Northwest Ethiopia found the prevalence of anaemia to be 22% [10]. A 2016 study conducted by Bekele, Tilahun and Mekuria [11] among 332 pregnant women in the same country, however, found anaemia prevalence to be 33%, an indication that the problem was on the ascendancy.
Furthermore, a cross-sectional secondary data analysis of anaemia prevalence among post-partum mothers in the same country found a prevalence rate of 22% [12], lower than the prevalence rates among pregnant women [10,11]. In Ghana, a report by the Family Health Division (FHD) of the Ghana Health Service (GHS) showed that anaemia among pregnant women at the first antenatal clinic visit increased marginally by 1% in the year 2015 compared with the previous year [13]. The report further stated that the prevalence of anaemia in pregnant women at 36 weeks of pregnancy also increased marginally from 2015 to 2016. The Volta Region has been identified as the region with the highest prevalence (49%) of anaemia among women of reproductive age (15-49 years) in the country [14]. As such, it is the region with the highest proportion of antenatal clients with anaemia over the period 2014 to 2016, with prevalence rates of 46, 46, and 47% in 2014, 2015 and 2016 respectively [13]. The FHD reported anaemia in the region at the time of antenatal clinic registration and at 36 weeks of pregnancy to be 46 and 32% respectively in 2014; 46 and 26% respectively in 2015; and 47 and 27% respectively in 2016 [13]. Although iron and folic acid are provided to postnatal mothers from the day of delivery up to the sixth week in the country [13], anaemia remains the leading cause of hospital admissions and maternal deaths in the Volta Region [15,16,17]. In the Hohoe Municipality, where the present study was conducted, the prevalence of anaemia in pregnant women attending antenatal care was reported to be 60.3% [18], with 64% among new registrants and 58% among those with multiple visits. This prevalence is higher than the regional prevalence of 49%. Despite the available evidence on anaemia and its consequences, there has not been any study on postpartum anaemia in the Volta Region of Ghana. The postpartum period, characterized by physiological losses due to pregnancy and labour, is a very critical period that needs a lot of attention. The paucity of information on the anaemia status of this vulnerable group in the region is a weakness of the health system, as little or no information is available to guide health professionals in ensuring the good health of postpartum mothers. We, therefore, examined the prevalence and associated risk factors of anaemia among women attending antenatal and post-natal clinics at the Hohoe Municipal Hospital so that a holistic approach to addressing maternal anaemia in the municipality, the region, and the country as a whole could be adopted. Setting The Hohoe Municipality is one of the twenty-five administrative districts/municipalities in the Volta Region [19]. The Municipality lies within the middle zone of the region and shares borders on the east with the Republic of Togo, on the southwest with Kpando Municipality, on the northwest with Biakoye District, on the north with Jasikan District, and on the south with Afadzato South District [19]. The Municipality consists of 102 communities with a population of 167,016 people and a population density of 196.0 persons per square kilometre [19]. Study design This descriptive cross-sectional study recruited 409 pregnant women (6 weeks to 36 weeks of gestation) and 194 postnatal mothers (6 weeks post-delivery) from the ANC and PNC centres of the Hohoe Municipal Hospital in March 2017.
Procedures We estimated the sample sizes using the regional prevalence of 47% for pregnant women [13] and 14% for post-partum mothers [20], using the Cochrane formula [21]. Assuming a z-statistic for a 95% level of confidence and a 5% margin of error, the appropriate minimum sample size was estimated for the study. Adjusting for a non-response rate of 5%, a total sample size of 402 was reached for pregnant women and 194 for post-partum mothers. During data collection, thirty (30) pregnant women were randomly selected each day by a balloting-without-replacement method. The balloting method allowed consenting participants to pick either "Yes" or "No" from folded pieces of paper that were placed in a container and thoroughly shaken to ensure randomization. Data were collected from participants who picked "Yes". This was repeated until the desired sample size was attained. The same sampling procedure was used for the postnatal mothers. Data were collected at the PNC clinic and post-delivery wards of the Hospital. With the aid of trained research assistants, data on socio-demographic characteristics and risk factors associated with anaemia (independent variables) in both pregnant and postpartum mothers were collected using a pre-tested semi-structured questionnaire. Haemoglobin concentration (Hb) was determined from finger-prick blood samples of participants using a URIT-12 haemoglobin photometer (URIT Medical Electronics Co., LTD, China). Anaemia was defined as an Hb level lower than 11.0 g/dl (Hb < 11.0 g/dl) in pregnant women [2] and an Hb level lower than 10.0 g/dl in postpartum mothers (Hb < 10.0 g/dl) [3]. Capillary blood sampling from the finger was used because it provides a reliable and suitable alternative for sampling blood in clinical and field settings [22][23][24]. Estimated blood loss data for postpartum mothers were obtained from the maternal delivery records. A blood loss volume of ≤300 ml was considered normal and > 300 ml to 500 ml was considered abnormal. A blood loss volume > 500 ml was considered postpartum haemorrhage (PPH) [21]. Parasitaemia in blood samples was detected using standardized blood film and staining procedures [25]. Three drops of blood were placed on a clean, dust-free and dry frosted microscope slide for the thick blood film. A further drop of blood was placed beside the thick film for the preparation of a thin film. A unique identification number (ID) for each participant and the date were written on each slide for easy identification. The slides were air-dried, packed into slide boxes and transported to the SPH, UHAS laboratory. The dried slides were stained with 1% Giemsa stain for about 25-30 min. Buffered water (pH = 7) was used to rinse the stained slides. The prepared slides were examined under oil immersion with a light microscope (× 100 magnification). The thick film was used for the quantification of the malaria parasites while the thin film was used for identifying the malaria species. Parasite densities were estimated by counting the number of parasites per 200 white blood cells (WBCs) in the thick film by two microscopists. Gametocyte counts were taken against 500 white blood cells to determine gametocyte density per microlitre of blood. A light microscope was used to read the slides, and a sample was considered negative only after 200 high-power fields had been read. Parasite counts were converted to parasites per μl, with the assumption that there is an average of 8000 leucocytes per μl of blood.
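The sampling and microscopy arithmetic described above can be made concrete with a minimal Python sketch. This is not the authors' code; it only reproduces the Cochrane sample-size formula with the stated assumptions (z = 1.96 for 95% confidence, 5% margin of error, a 5% non-response inflation) and the standard thick-film conversion of parasite counts per counted WBCs to parasites per microlitre, assuming 8,000 leucocytes per μl as in the text.

```python
# Illustrative sketch only: Cochrane sample size n = z^2 * p * (1 - p) / e^2,
# inflated by 5% for non-response, plus the thick-film density conversion.
import math

def cochrane_sample_size(prevalence, z=1.96, margin=0.05, inflation=1.05):
    """Minimum sample size for estimating a prevalence, adjusted for non-response."""
    n = (z ** 2) * prevalence * (1 - prevalence) / (margin ** 2)
    return round(n * inflation)

def parasites_per_microlitre(parasites_counted, wbc_counted=200, wbc_per_ul=8000):
    """Convert a count of parasites per counted WBCs into parasites per microlitre."""
    return parasites_counted * wbc_per_ul / wbc_counted

print(cochrane_sample_size(0.47))    # pregnant women: ~402 (as reported)
print(cochrane_sample_size(0.14))    # post-partum mothers: ~194 (as reported)
print(parasites_per_microlitre(35))  # e.g. 35 parasites per 200 WBC -> 1400 per uL
# Gametocyte densities use 500 WBCs: parasites_per_microlitre(g, wbc_counted=500)
```

The same helper covers the gametocyte count, since only the number of reference WBCs changes.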
In cases where there was a 50% discrepancy between parasite counts or when there was a qualitative discrepancy (negative versus positive), a third microscopist read the slide; his reading was final and was used in the analysis of parasite density. All slides were stored in appropriately labelled slide boxes and kept at the laboratory. As part of quality control monitoring, randomly selected stained slides from each batch were given to an independent microscopist at the Municipal Hospital for the determination of the sensitivity and specificity of the readers. Hb readings were quality-controlled by trained laboratory scientists from the School of Public Health (SPH), University of Health and Allied Sciences (UHAS), research laboratory throughout the study period. Ethical issues This study was conducted in accordance with accepted principles of ethics in human experimentation and the International Conference on Harmonisation/Good Clinical Practice (ICH/GCP) guidelines. Ethical approval for the study was obtained from the Ethics Review Committee (ERC) of the University of Health and Allied Sciences (UHAS) with Ethical Approval number UHAS-ERC A.6 [6] 17-18. Permission was sought from the Hohoe Municipal Hospital before the commencement of the study. Written informed consent was obtained from participants on a standard consent form before they were included in the study. Data analysis The data were entered into EpiData version 3.0 and exported to Stata version 14.1 for analysis. Descriptive statistics, frequencies and percentages were used for categorical variables. Normality was assessed for continuous variables such as age, Hb, gravidity, parity and family size. Means ± SD were determined for continuous variables and compared using the t-test. The chi-square test was used to determine associations between the independent variables (socio-demographic characteristics and risk factors) and the dependent variable (anaemia status). Univariate logistic regression was used to determine the strength of the association between the independent variables and the dependent variable; the dependent variable considered in the univariate logistic regression was anaemia status. A multivariable logistic regression was not used because only one variable in the univariate logistic regression showed a statistically significant association with the dependent variable (anaemia status), so there was no need for it. A p-value of less than 0.05 was considered statistically significant at a 95% Confidence Interval (CI). Table 1 shows the background characteristics of the respondents, for both ANC and PNC attendees. The continuous variables age and Hb were normally distributed. Of the 409 pregnant women surveyed, the largest proportion (49%) were aged 20-29 years. The mean age of the pregnant women was 28 ± 7 years. Most (83%) of the pregnant women were married and had at least Junior Secondary education (65%). With regard to occupation, about two-fifths of the pregnant women (38%) were involved in trading at the time of the study. The majority (71%) of them were Ewes, with Christianity being the dominant religion (85%). More than half (54%) of the pregnant women were pregnant for at least the second time (gravida ≥2) at the time of the survey. A little less than half (41%) of the respondents had one or two children. More than three-quarters (77%) of the pregnant women were in their 3rd trimester.
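The crude (univariate) odds ratios described in the Data analysis section above can be mirrored in a few lines of code. The study used Stata 14.1, so the following Python/statsmodels sketch is only illustrative; the column names, the simulated data, and the direction of the simulated effect are hypothetical and are not taken from the study dataset.

```python
# Hedged sketch of a univariate logistic regression producing a crude odds ratio
# (COR), 95% CI and p-value for one predictor; not the authors' analysis code.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def crude_odds_ratio(df, outcome, predictor):
    """Regress a binary outcome on a single predictor and report OR, CI, p-value."""
    X = sm.add_constant(df[[predictor]].astype(float))
    model = sm.Logit(df[outcome].astype(float), X).fit(disp=0)
    odds_ratio = np.exp(model.params[predictor])
    ci_low, ci_high = np.exp(model.conf_int().loc[predictor])
    return odds_ratio, (ci_low, ci_high), model.pvalues[predictor]

# Simulated example: anaemia (1/0) versus a young-age indicator (1 = aged 20-29),
# with the younger group given a lower simulated risk for illustration only.
rng = np.random.default_rng(0)
df = pd.DataFrame({"age_20_29": rng.integers(0, 2, 194)})
df["anaemic"] = rng.binomial(1, np.where(df["age_20_29"] == 1, 0.10, 0.25))
print(crude_odds_ratio(df, "anaemic", "age_20_29"))
```

An odds ratio below 1 for the young-age indicator would correspond to the protective association reported for mothers aged 20-29 in the abstract.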
Prevalence of anaemia and malaria among pregnant and postpartum women attending ANC and PNC clinics Of the total number (409) of pregnant women whose Hb levels were measured, 33% were found to be anaemic (Hb < 11.0 g/dl), with a mean Hb of 9.72 ± 0.97. Of the 194 mothers attending PNC, 16% were anaemic (Hb < 10 g/dl). The mean Hb recorded for the postpartum mothers was 7.76 ± 1.68 (Fig. 1). The prevalence of malaria parasitaemia by microscopy was 2 and 1% for pregnant women and postpartum mothers respectively (Fig. 1). Prevalence of anaemia and malaria among pregnant women by gestational period The prevalence of anaemia among pregnant women by gestational period was highest (38%) among women in their 2nd trimester and lowest (31%) among those in their 1st trimester (Fig. 2). Malaria prevalence followed a pattern similar to anaemia: 5% for pregnant women in their 2nd trimester, 1.6% in the 3rd trimester and none in the 1st trimester. Association between anaemia and some risk factors among pregnant women Table 2 shows the results of the univariate logistic regression of anaemia status (dependent variable) on some risk factors, namely age group, marital status, religion, ethnicity, occupation, gravidity, parity, blood film and gestational age of the mother. There was no significant association between anaemia status and any of the risk factors in the univariate analysis, so no further multivariable logistic regression was necessary. Association between anaemia and some risk factors among post-partum mothers Discussion This study examined the prevalence of anaemia and its risk factors among women attending ANC and PNC at the Hohoe Municipal Hospital, Ghana. The prevalence rate of anaemia among the ANC cohort was 33%. This is slightly lower than the 38% global anaemia prevalence in pregnant women reported by [1]. A 2018 study by Kweku et al. found anaemia prevalence among women attending ANC at the same facility to be 60% [18]. The prevalence found in the present study points to a moderate public health problem according to the WHO cut-off values for the public health significance of anaemia. Based on their findings, it was recommended that measures be instituted to address anaemia among pregnant women in the Municipality. As a result, routine iron and folic acid supplementation for pregnant women was intensified at the facility, which could have played a significant role in the lower prevalence of anaemia recorded in this study. Interestingly, a similar study in 2016 by Bekele, Tilahun, & Mekuria [11] reported the same anaemia prevalence (33%) among pregnant women in Ethiopia; however, the prevalences recorded in South Africa (43%) and Nigeria (55%) among similar cohorts were higher [26,27]. Our study further revealed that anaemia prevalence among post-partum mothers attending the postnatal care clinic was 16%. This is lower than what was found in a study conducted among post-partum mothers in Ethiopia, 43% [28], and also lower than that of a study among post-partum mothers in the southern region of Madrid, 29% [29], an indication of the role played by iron and folic acid supplementation, as 73% of the post-partum cohort received it during pregnancy. This could account for the stark difference between the current study and that of [28], in which no mention was made of iron and folic acid supplementation. In addition, it could be because a greater proportion of the postpartum women in our study were employed, as a study by Lakew et al.
in Ethiopia showed that working lactating mothers had lower odds (AOR: 0.71; 95% CI 0.63 to 0.80) of being anaemic. In the absence of other risk factors, postnatal mothers' age was associated with anaemia status in our study. However, a similar study by [30] found no significant association between mothers' age and anaemia status. The current study provides information on anaemia among post-partum mothers, as there is currently a paucity of literature on post-natal anaemia in the Volta Region and Ghana as a whole. This information may help in future trend analyses of anaemia among postpartum mothers to identify the burden of this condition on this vulnerable population. In the year 2003, the Ghana Health Service (GHS), in collaboration with some stakeholders, rolled out a five-year integrated strategy for anaemia control in Ghana which targeted pregnant women, pre-school and school-aged children [31]. However, the vulnerable group of post-natal mothers was not targeted in this strategy for anaemia control. Post-partum mothers, who undergo physiological losses during childbirth, were not targeted in the programme, and this could have been a result of the paucity of evidence on post-partum anaemia at the time the programme was rolled out by the GHS. Malaria, on the other hand, was less prevalent (1%) among post-partum mothers and was not significantly linked with anaemia in post-partum mothers. Given that only 62% of mothers slept under insecticide-treated mosquito nets yet few malaria cases were recorded, this can be attributed to some form of immunity against malaria developed by postpartum women. This is consistent with the study by McLean and colleagues, which showed a strong link between gravidity and antibody development that improves women's chances of not being anaemic [32]. Therefore, the strong immunity against malaria could have been developed during pregnancy and maintained in the post-partum period, fending off possible malaria attacks. Limitations Since data could not be collected throughout the year, the seasonality of anaemia could not be ascertained. Moreover, data on the HIV/AIDS status and the dietary diversity of respondents, which could have had an effect on the outcomes of the study, were not collected. Conclusion Although the prevalence of anaemia among pregnant women found in this study was lower than the regional prevalence rate among women attending ANC (47%), it still remains unacceptably high, as it points to a moderate public health problem according to the WHO cut-off values for the public health significance of anaemia. The age of the mother has an association with anaemia in postpartum mothers in the Hohoe Municipality, as younger mothers are more likely to be anaemic than older mothers. This could be due to insufficient interaction with health care providers as a result of infrequent visits by these women because of social stigma against young mothers who are unmarried. Strategies should therefore be put in place to encourage frequent postnatal visits by women in the younger age group. Measures must also be put in place to adopt programmes to address abnormal blood loss and PPH. This could be achieved through health education and promotion programmes. Further studies, however, need to be done to establish the cause-effect relationship between anaemia and these risk factors in the Municipality.
2019-09-23T13:23:09.401Z
2019-09-23T00:00:00.000
{ "year": 2019, "sha1": "6c088682a34f6281bcc012d41db8591c5a75b942", "oa_license": "CCBY", "oa_url": "https://bmcnutr.biomedcentral.com/track/pdf/10.1186/s40795-019-0303-x", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9283bc516c5ae64976fb410acc2d2b2ea51aeaa2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
224812662
pes2o/s2orc
v3-fos-license
Association between sites and severity of eczema and the onset of cow’s milk and egg allergy in children Background Cow’s milk allergy (CMA) and egg allergy (EA) are common and can reduce quality of life in children. Infantile eczema is a well-established risk factor for the onset of food allergy via transdermal sensitization; however, the various types of infantile eczema have not yet been evaluated. Therefore, we assessed the association between CMA and EA and the sites and severity of infantile eczema. Methods This retrospective study was based on data from patients aged 2–19 years with atopic disease who were treated between July 2015 and March 2019 in a pediatric allergy clinic in Japan. Data regarding the history of IgE-mediated symptoms, eczema in the first year of life, parental history of atopic diseases, and infantile nutrition were collected. Results A total of 289 patients were included in the study, of whom 81 and 111 children had IgE-mediated CMA and EA, respectively. The rates of CMA and EA were higher in children with infantile eczema than in those without (30% vs. 9% and 42% vs. 21%). The rate of CMA was also higher in children with eczema on the face. Significant differences were noted in the rate of CMA among children with facial eczema with exudation (adjusted odds ratio 2.398; P = 0.017) and papules (adjusted odds ratio 2.787; P = 0.008) in the multivariate analysis. Conclusion The rate of IgE-mediated CMA was high among children with atopic disease who had severe facial eczema during infancy. Introduction Food allergy is a frequently occurring disease in children in numerous countries [1]. Cow's milk and egg are the most frequently reported antigens involved in food allergy, and the incidences of immunoglobulin E (IgE)-mediated cow's milk allergy (CMA) and egg allergy (EA) are 2.3% and 2.5% in Europe, respectively [2]. Given that cow's milk is the primary source of infantile nutrition after breast milk, the presence of CMA greatly impairs the quality of life of children and their caregivers [3]. Egg is also a key food that provides fatty acids, vitamins and proteins and benefits child nutrition and brain development [4]. The reported incidence of food allergy ranges from 6% to 10% in Europe, the United States, and Japan [5][6][7]. The incidence of food allergy has increased to 8% over the past two decades, according to a survey conducted in 98 countries worldwide in the third phase of the International Study of Asthma and Allergies in Childhood [1]. Transdermal sensitization is a factor extensively involved in the development of food allergy [8], and interventions aimed at preventing eczema onset are expected to prevent transdermal sensitization [9,10]. The establishment of effective prevention methods could limit the development of subsequent allergic diseases, which is termed the "allergic march" [11]. Notably, early-onset and severe eczema in infancy have been reported to be strongly associated with food allergy [8,12]. However, to the best of our knowledge, no studies have evaluated the relationship between infantile eczema, classified by the sites involved, and specific types of food allergy. Therefore, the present study aimed to investigate the characteristics of infantile eczema in the first year of life that were associated with CMA onset among atopic children. Additionally, we also investigated the association between infantile eczema and EA. Materials and methods This study was retrospective in nature.
We collected clinical data from pediatric patients 1) who had single or multiple atopic diseases; 2) who were aged 2-19 years; and 3) who visited the Pediatric Allergy Clinic of the National Hospital Organization Nagoya Medical Center, Japan, between July 2015 and March 2019. In accordance with the institutional review board, patients meeting these criteria were enrolled in the study unless their caregivers opted out of data sharing. Patients and caregivers did not provide written informed consent, but they could access detailed information about this study on the website. The study data were collected from electronic medical records with patient identifiers and included infantile eczema, previous immediate allergic symptoms to cow's milk or egg, history of other atopic diseases, parental history of atopic diseases, parental history of smoking during pregnancy, ingestion of cow's milk or its products during the breastfeeding period and the first year of life, and age at solid food introduction. The cow's milk-specific and egg white-specific serum IgE antibody levels were measured using the ImmunoCAP test (Phadia AB, Uppsala, Sweden) to assess sensitization. In the current study, CMA was defined as a positive cow's milk-specific IgE antibody level (≥0.35 IU/ml) accompanied by a history of immediate allergic symptoms after the intake of cow's milk or its products. EA was likewise defined as immediate allergic symptoms after the intake of egg products together with a positive egg white-specific IgE antibody. We did not include any non-IgE-mediated symptoms to cow's milk or egg. Caregivers of all patients were asked whether their children had presented with eczema in the first year of life and, if so, to indicate the affected sites, including the head, face, trunk, and upper and lower limbs. In particular, data on the presence of symptoms including redness, dryness, exudation and/or papules on the face were collected for children with facial eczema. The study analyses included a comparison of the differences in skin conditions and other factors between children with and without each specific food allergy. Pearson's chi-squared test was used for between-group comparisons, and logistic regression analysis was used to calculate odds ratios. Spearman's correlation coefficient was calculated for the correlation between different sites of infantile eczema. The multiplicity of tests was adjusted for by Holm-Bonferroni correction. P-values of less than 0.05 were considered statistically significant. All statistical analyses were performed using SAS statistical software version 9.4 (SAS Institute, Cary, NC, USA). The study protocol was approved by the institutional review board of the National Hospital Organization Nagoya Medical Center (reference number 2019-008). Results A total of 289 children aged 2-19 years were included in the present study. In the study cohort, there were 81 (28%) children with CMA and 111 (38%) with EA, and infantile eczema was confirmed in 84% of the children. The demographic data of the study cohort are shown in Table 1. No factors related to infantile nutrition, sex, parental atopic disease history or parental smoking during pregnancy were significantly associated with CMA or EA. Although there was no relationship between cow's milk intake in infancy and CMA (P = 1.000), the proportion of children with cow's milk intake after 7 months of age was significantly lower in patients with CMA than in those without CMA (11% vs. 37%, P < 0.001).
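Two of the statistical steps named in the Materials and methods above, the Spearman correlation between eczema-site indicators and the Holm-Bonferroni multiplicity adjustment, can be sketched briefly. The study used SAS 9.4, so the Python version below is only illustrative, and the toy site indicators are made up.

```python
# Minimal, illustrative sketch (not the authors' SAS code) of Holm's step-down
# adjustment and of a Spearman correlation between two binary site indicators.
import numpy as np
from scipy.stats import spearmanr

def holm_bonferroni(pvals, alpha=0.05):
    """Boolean reject/keep decisions for each p-value under Holm's step-down rule."""
    pvals = np.asarray(pvals, dtype=float)
    order = np.argsort(pvals)
    reject = np.zeros(len(pvals), dtype=bool)
    m = len(pvals)
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # once one test fails, all larger p-values also fail
    return reject

# Toy 0/1 indicators of eczema on the upper and lower limbs for eight children.
upper_limb = np.array([1, 1, 0, 1, 0, 0, 1, 1])
lower_limb = np.array([1, 1, 0, 1, 0, 1, 1, 1])
rho, p = spearmanr(upper_limb, lower_limb)
print(rho, p)
print(holm_bonferroni([0.01, 0.04, 0.03, 0.20]))  # only the smallest survives
```

Holm's rule is uniformly more powerful than a plain Bonferroni correction while controlling the same family-wise error rate, which is presumably why it was chosen for the multiple site comparisons.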
The incidence of CMA was significantly higher in the group with longer breastfeeding periods. However, there was no statistical relationship with cow's milk intake or breastfeeding period in patients with EA. More than 80% of the children were introduced to solid food between 5 and 7 months of age. None of the children with EA had been introduced to solid food before four months of age. There was no significantly higher prevalence of CMA or EA among the age groups of solid food introduction. More than 80% of the children had episodes of infantile eczema. Positive correlations were observed between all pairs of eczema sites. A strong correlation was found between the upper and lower limbs (correlation coefficient r = 0.83). Moderate correlations were found between the trunk and the upper limb (r = 0.69) and between the trunk and the lower limb (r = 0.65). The other pairs were weakly correlated (all r < 0.40). CMA and EA incidence were higher in children with infantile eczema than in those without infantile eczema (30% vs. 9% and 42% vs. 21%, respectively) (Table 2). In the univariate analysis of the affected sites, CMA incidence was significantly higher in children with eczema on the face than in those without it. However, in the multivariate analysis this factor was not significant for CMA or EA (adjusted odds ratio 3.496, P = 0.078). The analysis of the characteristics of facial eczema lesions revealed that the incidence of CMA was significantly higher in children with erythema than in those without erythema (33% vs. 20%, P = 0.043); in children with exudation than in those without exudation (40% vs. 22%, P = 0.011); and in those with papules than in those without papules (41% vs. 23%, P = 0.017). These differences in the proportion of CMA based on the presence of exudation and papules remained significant in the multivariate analysis (adjusted odds ratios, 2.398 and 2.787; P = 0.017 and P = 0.008, respectively) (Table 3). In the multivariate analysis, children with any facial lesions did not have a significantly different proportion of EA compared with those without them. Discussion The current retrospective study revealed that the presence of severe facial eczema in children with atopic disease was associated with the development of CMA, although there was no association between CMA and eczema at other sites. The immature skin barrier at this age might underlie the relationship between infantile eczema and food allergy, because an immature skin barrier has been shown to enhance transdermal allergic sensitization and to contribute to the onset of food allergy [13]. In the current study, the proportions of CMA and EA were higher in children with infantile eczema. The maturity of the skin barrier is determined by the number of horny cells and by transepidermal water loss (TEWL); higher TEWL is observed in infancy [14]. Conversely, TEWL declines and the stratum corneum becomes thicker until the age of four years [14]; small and immature keratinocytes are associated with higher TEWL. We also found that CMA was significantly more frequent in children with infantile eczema on the face. Next to the skin of the genital areas, the face has the smallest horny cells and is one of the locations with the highest TEWL in the entire body. In addition, the serine proteases kallikrein 5 and kallikrein 7, which are most abundant in cheek keratinocytes [15], are highly expressed in epidermal keratinocytes; these proteases promote the desquamation of the horny layer, thereby lowering the skin barrier.
These factors might underlie the greater promotion of CMA by severe eczema with exudation or papules. Cow's milk and egg are the most common antigens involved in IgE-mediated food allergy in Japan [7]. Early intake of egg white and peanuts is recommended to prevent the development of IgE-mediated allergy [16]. In addition, the introduction of milk between four and six months of age has been reported to prevent the onset of CMA [17]. Delaying the introduction of solid foods is not recommended for any children, even those at high risk of allergy [18]. Most children in the present study had already been introduced to solid foods during the recommended periods before their first visit. Prior to the introduction of these recommendations, many caregivers had already started introducing milk proteins before introducing solid foods, which gave infants the opportunity to have contact with cow's milk on their skin in early life. Severe eczema on the face associated with damaged keratinocytes might have enhanced sensitisation to cow's milk during infancy through contact with the lesions. The results of previous studies suggest that earlier and more severe eczema in infancy might lead to more frequent food allergy [8,12]. The current study showed that the rate of CMA was significantly lower in children who were initiated on cow's milk intake after 7 months of age; however, the rate of EA was not significantly different in these children. Children with CMA were expected to have fewer opportunities for cow's milk intake, suggesting that most of the children developed CMA within 6 months of age, because there was no correlation with cow's milk intake in the first year of life. The presence of CMA might also have prolonged the breastfeeding period; therefore, the initiation of cow's milk intake up to six months of age was included as a covariate in the multivariate analysis. The current study has several limitations. First, the retrospective study design did not allow us to determine a potential causal relationship between severe facial eczema and food allergy. We asked caregivers about the presence of eczema in the first year of life at the first visit, when their children were 2 years of age or older. Therefore, most children were considered to have developed CMA after infantile eczema. Second, we did not include the age at introduction of egg as a covariate because this information was not included in the electronic medical records. Although most children in the current study had started solid food after 5 months of age, few children are introduced to egg products in early infancy. We cannot deny that the EA results could have been different with direct information about egg intake. However, we believe that this did not have a major effect on the CMA results. Third, we did not analyze details regarding the duration of breastfeeding or cow's milk intake. The results revealed that the proportion of CMA was not dependent on the duration of breastfeeding or on cow's milk intake at the age of six months, suggesting that the exclusive intake of breast milk, while not preventive against CMA, might prevent the overall development of atopic diseases. Finally, the current study cohort was limited to atopic children. It remains unclear whether the study findings can be extended to the general population. However, we could observe a specific relationship of eczema with CMA, but not with EA, in atopic children.
Conclusions In conclusion, the present retrospective study revealed that the incidence of IgE-mediated CMA was higher in atopic children with severe facial eczema during the first year of life. The cause-effect relationship between severe facial eczema and CMA and the relationship with other food allergies require evidence from future birth cohort studies.
2020-10-21T13:06:17.919Z
2020-10-19T00:00:00.000
{ "year": 2020, "sha1": "29cc9706fcd305e19dcb9266c4b3ac64be3c751c", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0240980&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "54c62e9a0b136b0c467f7e9650e792c8e5fba879", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247026132
pes2o/s2orc
v3-fos-license
HighDist Framework: Algorithms and Applications We introduce the problem of determining if the mode of the output distribution of a quantum circuit (given as a black-box) is larger than a given threshold, named HighDist, and a similar problem based on the absolute values of the amplitudes, named HighAmp. We design quantum algorithms for promised versions of these problems whose space complexities are logarithmic in the size of the domain of the distribution, but the query complexities are independent. Using these, we further design algorithms to estimate the largest probability and the largest amplitude among the output distribution of a quantum black-box. All of these allow us to improve the query complexity of a few recently studied problems, namely, $k$-distinctness and its gapped version, estimating the largest frequency in an array, estimating the min-entropy of a distribution, and the non-linearity of a Boolean function, in the $\tilde{O}(1)$-qubits scenario. The time-complexities of almost all of our algorithms have a small overhead over their query complexities, making them efficiently implementable on currently available quantum backends. Introduction A quantum circuit is always associated with a distribution, say $D$, over the observation outcomes that can, in principle, encode complex information. Given a threshold $\tau$, and a black-box to run the circuit, it may be useful to know if there is any outcome with probability at least $\tau$. We denote this problem HighDist. We also introduce HighAmp, which determines if the absolute value of the amplitude of any outcome is above a given threshold; even though this problem appears equivalent to HighDist, an annoying difference crawls in if we allow absolute or relative errors with respect to the threshold. The most interesting takeaways from this work are $\tilde{O}(1)$-qubit algorithms for the above problems whose query complexities and time complexities are independent of the size of the domain of $D$. The framework offered by these problems supports interesting tasks. For example, a binary search over $\tau$ (tweaked to handle the above annoyance) can be a way to compute the largest probability among all the outcomes; we call this the $P_{max}$ problem. Similarly, the non-linearity of a Boolean function can be computed by finding the largest amplitude in the output of the Deutsch-Jozsa quantum circuit [5]. Going further, we observed surprising connections of HighDist and $P_{max}$ to a few other problems that have been recently studied in the realm of quantum algorithms, viz., $k$-Distinctness [2,3], Gapped $k$-Distinctness [16], Min-Entropy [14], and $F_\infty$ [16,9]. Using the above framework we designed query- and time-efficient quantum algorithms for those problems that require very few qubits, often exponentially fewer than the existing algorithms. HighDist, HighAmp and $P_{max}$ being fundamental questions about black-boxes that generate a probability distribution, we are hopeful that space-bounded quantum algorithms with low query complexities could be designed for more problems by reducing to them. An interesting outcome of this work is a unified study of the problems given above, each of which has received separate attention. For example, Li et al. [14] recently considered the min-entropy estimation problem of a multiset, which is equivalent to computing its $F_\infty$, a problem studied just a few years ago by Montanaro [16] and Bun et al. [9]. We illustrate the reductions in Figure 1. (See Appendix J for details.)
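The non-linearity connection mentioned above can be checked classically on small functions. The sketch below is illustrative and is not taken from the paper; it assumes the standard phase-oracle convention, under which the Deutsch-Jozsa circuit applied to a Boolean function $f$ on $n$ bits outputs amplitude $\hat{f}(a)=2^{-n}\sum_x(-1)^{f(x)\oplus a\cdot x}$ on basis state $|a\rangle$, and the non-linearity equals $2^{n-1}(1-\max_a|\hat{f}(a)|)$, so the largest output amplitude determines the non-linearity.

```python
# Illustrative classical cross-check of the Deutsch-Jozsa / non-linearity link:
# the largest (normalized) Walsh coefficient of f is the largest DJ output
# amplitude, and nonlinearity(f) = 2^{n-1} * (1 - max_a |fhat(a)|).
import numpy as np

def walsh_spectrum(f_table):
    """Normalized Walsh coefficients of a Boolean function given as a truth table."""
    n = int(np.log2(len(f_table)))
    signs = (-1.0) ** np.array(f_table)
    for bit in range(n):  # fast Walsh-Hadamard transform, one butterfly per bit
        signs = signs.reshape(-1, 2, 2 ** bit)
        signs = np.concatenate((signs[:, 0] + signs[:, 1],
                                signs[:, 0] - signs[:, 1]), axis=1).reshape(-1)
    return signs / len(f_table)

def nonlinearity(f_table):
    fhat = walsh_spectrum(f_table)
    return int(round(len(f_table) / 2 * (1 - np.max(np.abs(fhat)))))

# Example: f(x2, x1, x0) = x2*x1 XOR x0, a simple quadratic function on 3 bits.
f = [((x >> 2) & 1) * ((x >> 1) & 1) ^ (x & 1) for x in range(8)]
print(nonlinearity(f))  # prints 2
```

Estimating $\max_a|\hat{f}(a)|$ is exactly the kind of largest-amplitude question that HighAmp and $P_{max}$ address when the truth table is only available through an oracle.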
The main contributions of this work can be summarized as follows. 1. We introduce the HighDist and the HighAmp framework which allows us to answer interesting questions about the output distribution of a quantum circuit, like the largest probability, denoted $P_{max}$ (a similar algorithm can also be designed for the largest absolute value among the amplitudes). 2. We present space- and query-efficient algorithms for the absolute- and the relative-error versions of the above problems. The algorithms for HighDist and $P_{max}$ are adapted from a recently published algorithm, and while they can be used to solve HighAmp, we bettered their query complexities by designing a novel algorithm to run multiple amplitude estimations "in parallel", and by using a variant of the Hadamard test algorithm to estimate the inner product of two output states. 3. We show how to employ the above algorithms to improve the upper bounds on the query complexities of $k$-Distinctness, Gapped $k$-Distinctness, Min-Entropy, $F_\infty$, and non-linearity estimation, all of which are now possible with a logarithmic number of qubits, often exponentially fewer compared to the existing approaches and leading to better space-time complexities. The reductions are mostly trivial, but the implications are interesting, as discussed below. • Our algorithm for $k$-Distinctness makes an optimal number of queries (up to logarithmic factors) when $k=\Omega(n)$, and that too using $\tilde{O}(1)$ qubits. Previous quantum algorithms for large $k$ have an exponential query complexity and require a larger number of qubits [2]. • Our algorithm for HighDist can be used to identify the presence of high-frequency items in an array (above a given threshold, also known as "heavy hitters") using $\tilde{O}(\log\frac{1}{\epsilon})$ qubits; it also generates a superposition of such items along with estimates of their frequencies. The best low-space classical algorithms for identifying heavy hitters are of a streaming nature but require $\tilde{O}(\frac{1}{\epsilon})$ space [12]. Here $\epsilon\in(0,1]$ indicates the inaccuracy in frequency estimation. • Watson established the classical intractability of estimating the min-entropy of a probabilistic source [19]; we show that the problem becomes easier for quantum algorithms when allowed to err in a small number of cases. • Valiant and Valiant showed that $\tilde{O}(\frac{m}{\epsilon^2})$ samples of an $m$-valued array are sufficient to classically estimate common statistical properties of the distribution of values in the array [18]. Recently it was shown that fewer samples, of the order of $\tilde{O}(\frac{1}{g^2})$, can be used if we want to identify the item with the largest probability (denoted $p_{max}$) [13]; here $g$ denotes the gap between $p_{max}$ and the second largest probability and is always less than $p_{max}$. Our $P_{max}$ quantum algorithm makes only $\tilde{O}(\frac{1}{g\sqrt{p_{max}}})$ queries, finds the item and estimates its frequency with additive error. • We recently showed that HighDist and $P_{max}$ can estimate the non-linearity of any Boolean function with additive accuracy $\lambda$ using $\tilde{O}(1)$ qubits and $\tilde{O}(\frac{1}{\lambda^2\hat{f}_{max}})$ queries [5]; here $\hat{f}_{max}$ denotes the largest absolute value of any Walsh coefficient of the function. Now, we can use HighAmp instead of HighDist to do the same but using only $\tilde{O}(\frac{1}{\lambda\hat{f}_{max}})$ queries. It should be noted in this context that the best known lower bound for non-linearity estimation is $\Omega(\frac{1}{\lambda})$ [5].
Table 1: Results for the $k$-Distinctness problem
$k$-Distinctness | Prior upper bound [2] | Our upper bound
$k\in\{2,3,4\}$ | setting $r=k$: $O\big((\frac{n}{k})^{k/2}\big)$ queries, $O(\log(m)+\log(n))$ space | $\tilde{O}(n^{3/2}/\sqrt{k})$ queries, $O\big((\log(m)+\log(n))\log\frac{n}{\delta k}\big)$ space
$k=\omega(1)$ and $k\ge 4$ | O( n 2 k ) queries, $O(\log(m)+\log(n))$ space for $r\ge k$ | $\tilde{O}$
Table 3: Algorithms for non-linearity estimation ($\lambda$ denotes additive error)
Approach | Query complexity | Space complexity
Using HighDist [5] | O( 1 δλ | space †
† Although the query complexity of the algorithm is presented as $\tilde{O}(\frac{1}{\lambda^3})$ queries in [5], since we are merely estimating the largest probability in the output of the Deutsch-Jozsa algorithm, using $P_{max}$ gives us this tighter bound.
A summary of our results is presented in Tables 1, 2, and 3. Our algorithms work in the bounded-error setting and we shall often hide the $\log(\cdot)$ factors in the complexities under $\tilde{O}(\cdot)$. The time complexities, except for HighAmp, are the same as the query complexities up to logarithmic overheads, since the techniques rely on quantum amplitude estimation, amplitude amplification and simple classical steps. When space is not a constraint, the query complexity of a problem for an $n$-sized array is $O(n)$, which is achievable by querying and caching the entire input at the beginning. However, this is not feasible when space is limited. This is also the scenario in the streaming setting; however, the focus there is to reduce the number of passes over the input under restricted space. In contrast, our algorithms are allowed only constantly many logarithmic-sized registers, and they try to optimize the number of queries. To restrict the number of qubits to $\tilde{O}(1)$ we end up using super-linear queries for most of the problems. A rigorous space-time analysis can settle the tightness of those query complexities; we leave this direction open. Algorithms for HighDist and HighAmp Promise versions of the HighDist problem play a central role in this work. Problem 1 (HighDist). We are given a $(\log(m)+a)$-qubit quantum oracle $O_D$ that generates a distribution $D: \{p_x=|\alpha_x|^2\}_{x=1}^{m}$ upon measurement of the first $\log(m)$ qubits of its output state in the standard basis. We are also given a threshold $\tau\in(0,1)$ and the task is to identify any $x$ such that $p_x=|\alpha_x|^2\ge\tau$, or report its absence. In the promise version with additive accuracy, we are given an additional $\epsilon\in(0,\tau)$, and the goal is to decide whether there exists any $x$ such that $p_x\ge\tau$ or if $p_x<\tau-\epsilon$ for all $x$, under the promise that only one of the cases is true. In the promise version with relative accuracy, the goal is to similarly decide between $p_x<(1-\epsilon_r)\tau$ and $p_x\ge\tau$, given some $\epsilon_r\in(0,1)$. The algorithm for HighDist follows these high-level steps. • Estimate $p_x=|\alpha_x|^2$ for all $x$ in another register, allowing relative or additive error as required, by employing vanilla quantum amplitude estimation without the final measurement step (denoted QAE). This requires two copies of $|\psi\rangle$, one on which to operate the QFT-based circuit, and another to furnish the "good" states (whose probabilities should be estimated). • Compare each estimate $|\widetilde{p_x}\rangle$ with the threshold, hardcoded as $|\tau\rangle$. The comparison actually happens with a scaled version of $\tau$ since QAE does not generate $p_x$ directly. The states $|x\rangle$ for which $p_x\ge\tau$ are marked (in another register). • The probability of finding a marked state, given there is one, is amplified using amplitude amplification. Care has to be taken to ensure that any $x$ for which $p_x<\tau-\epsilon$ but whose estimate is above $\tau$ is not sufficiently amplified.
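As a sanity check on what the promise problem asks, and on the role of the additive slack $\epsilon$, the following classical mock mirrors the three steps above on an explicit distribution. All names, the sampling budget, and the margin $\tau-\epsilon/2$ are illustrative choices, not the paper's; the quantum algorithm replaces the sampling-based estimates with QAE and the final scan with amplitude amplification.

```python
# Classical, illustrative mock of the HighDist decision logic: estimate every p_x
# to within roughly eps/2 by sampling, then mark any x whose estimate clears the
# midpoint tau - eps/2.  Under the promise (either some p_x >= tau, or every
# p_x < tau - eps) this decides correctly with high probability.
import numpy as np

def highdist_mock(p, tau, eps, samples=None, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    m = len(p)
    if samples is None:
        samples = int(16 / eps ** 2)            # crude Chernoff-style budget
    draws = rng.choice(m, size=samples, p=p)
    estimates = np.bincount(draws, minlength=m) / samples
    marked = np.flatnonzero(estimates >= tau - eps / 2)
    return (1, marked) if marked.size else (0, marked)

p = np.array([0.02] * 45 + [0.10])              # one heavy outcome, p_45 = 0.10
print(highdist_mock(p, tau=0.10, eps=0.04))     # expect outcome 45 to be marked
```

The mock also shows why the promise matters: for an $x$ whose true probability sits inside $[\tau-\epsilon,\tau)$, either answer is acceptable, which is precisely the slack the quantum routine exploits.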
The novelty of this workflow is the execution of QAE in parallel and a careful analysis showing that the errors are not overwhelming. This is essentially the strategy followed by the QBoundFMax quantum circuit that was recently proposed by us for estimating non-linearity [5, Algorithm 3]. We observed that QBoundFMax can be repurposed based on the following three observations. First, QBoundFMax identified whether there exists any basis state whose probability, upon observing the output of a Deutsch-Jozsa circuit, is larger than a threshold in a promised setting; however, no specific property of the Deutsch-Jozsa circuit was being used. Secondly, amplitude estimation can be used to estimate $|\alpha_x|^2$ (with bounded error) in $\sum_x\alpha_x|x\rangle|\xi_x\rangle$ for any $x\in[m]$ by designing a sub-circuit on only the first $\log(m)$ qubits to identify "good" states (this subcircuit was referred to as EQ in QBoundFMax). Lastly, amplifying some states in a superposition retains their relative probabilities. These observations not only allow us to modify the QBoundFMax algorithm for HighDist, but also enable us to identify some $x$ such that $|\alpha_x|^2\ge\tau$, along with an estimate of $|\alpha_x|^2$. Lemma 1 (Additive-error algorithm for HighDist). Given an oracle $O_D$ for the HighDist problem, $m$ (the domain length of the distribution it generates), and a threshold $\tau$, along with parameters $0<\epsilon<\tau$ for additive accuracy and $\delta$ for error, HighDist-Algo is a quantum algorithm that uses $O\big((\log(m)+\log\frac{1}{\epsilon}+a)\log\frac{1}{\delta\tau}\big)$ qubits and makes $O\big(\frac{1}{\epsilon\sqrt{\tau}}\log\frac{1}{\delta\tau}\big)$ queries to $O_D$. When its final state is measured in the standard basis, we observe the following. 1. If $p_x<\tau-\epsilon$ for all $x$ then the output register is observed in the state $|0\rangle$ with probability at least $1-\delta$. 2. If $p_x\ge\tau$ for any $x$, then with probability at least $1-\delta$ the output register is observed in the state $|1\rangle$. It is reasonable to require that $\epsilon\le\tau/2$, and in that case the query complexity can be bounded by $\tilde{O}(\frac{1}{\epsilon^{3/2}})$. The above algorithm can be converted to work with a relative accuracy by setting $\epsilon=\epsilon_r\tau$. Lemma 2 (Relative-error algorithm for HighDist). There exists an algorithm to solve the promise version of HighDist with relative inaccuracy $\epsilon_r$ in the same manner as stated in Lemma 1 that makes $O\big(\frac{1}{\epsilon_r\tau^{3/2}}\log\frac{1}{\delta\tau}\big)$ queries to $O_D$ and uses $O\big((\log(m)+\log\frac{1}{\epsilon_r\tau}+a)\log\frac{1}{\delta\tau}\big)$ qubits. Algorithm for HighAmp In the HighAmp problem, the setup is the same as that of HighDist, but we are now interested in identifying any $x$ such that $|\alpha_x|\ge\tau$. Though this is identical to HighDist with threshold $\tau^2$, we have to set the threshold to $\tau^2$ and the additive accuracy to $\epsilon^2$ if we want to use Lemma 1 directly; this leads to a query complexity of $\tilde{O}(\frac{1}{\epsilon^2\tau})$. We design a new algorithm to improve upon this based on the observation that, despite the name, QAE actually estimates the probability of a "good" state; thus, why not estimate the amplitudes directly? • For all $x$ (in superposition), generate a state in another register which is $|0\rangle$ with probability $|\alpha_x|$. For this we designed an algorithm to essentially estimate the inner product of two states using a generalization of the Hadamard test, instead of the swap test. • Employ amplitude estimation to estimate the probability of the state being $|0\rangle$, allowing relative or additive error as required. To do this in superposition, i.e., for all $x$, with a low query complexity required us to design an algorithm for simultaneous amplitude estimation. The estimate is stored in another register as $|\widetilde{|\alpha_x|}\rangle$.
• Compare each estimate | α x | with the threshold, hardcoded as |τ , and followup with similar steps as before. Hadamard test to estimate inner product of two states Say, we have two algorithms A ψ and A φ that generate the states A ψ |0 n = |ψ and A φ |0 n = |φ , respectively, and we want to produce a state |0 |ξ 0 +|1 |ξ 1 such that the probability of observing the first register to be in the state |0 is linearly related to | ψ|φ |. Though swap-test is commonly used towards this purpose, there the probability is proportional to | ψ|φ | 2 ; this subtle difference becomes a bottleneck if we are trying to use amplitude estimation to estimate that probability with additive accuracy, say . We show that the Hadamard test can do the estimation using O(1/ ) queries to the algorithms whereas it would be O(1/ 2 ) if we use the swap test. The Hadamard test circuit requires one additional qubit, initialized as |0 on which the H-gate is first applied. Then, we apply a conditional gate controlled by the above qubit that applies A ψ to the second register, initialized to |0 n , if the first register is in the state |0 , and applies A φ if the first register is in the state |1 . Finally, the H-gate is again applied on the first register. Simultaneous Amplitude Estimation Let [N]={y :0≤y <2 n −1=N −1} be an index set for some n∈N. Suppose that we are given a family of quantum algorithms {A y : y ∈[N]} each making k queries to an oracle O, for some known constant k. Then for each y, A y can be expressed as A y =U (k,y) OU (k−1,y) ···U (1,y) OU (0,y) with suitable U (i,y) unitaries. Let the action of A y on |0 be defined as A y |0 =β 0y |0 +β 1y |1 denoted |ξ y . (This can also be easily generalized if A y s are n qubit algorithms.) Given an algorithm A initial to prepare the state |Ψ = y α y |y , the objective is to simultaneously estimate the "probability" of |0 in each A y |0 , i.e., obtain a state of the form |Φ = y α y |y |ξ y β 0y , where, for each y, sin 2 β 0y π 2 m =β 0y is an estimate of β 0y such that |β 0y −β 0y |≤ for some given 0< ≤1. A naive approach to solve this problem would be to perform amplitude estimation of |0 in the state |ξ y , conditioned on the first register being in |y , serially for each individual y. Then, the total number of queries to the oracle O would be O( Nk ) where O(k/ ) is the query complexity due to a single amplitude estimation. However, we present an algorithm that performs the same task but with just O( k ) queries to the oracle O. For this we require a controlled-version of the {A y } circuits. Let A be an algorithm defined as A= y |y y|⊗A y that operates A y on the second register if the first register is in |y . We denote the amplitude estimation operator due to Brassard et al. [6] as AmpEst. The operator to obtain an estimate with m bits of precision can be expressed as where F m is the Fourier transform on m qubits, Λ m (G) is the conditional operator defined as x |x x|⊗G x , G=−AS 0 AS χ is the Grover operator and G x implies that the G operator is applied x times in succession. Also let AmpEst y be defined as Then notice that |Φ can be obtained from |Ψ , as |Φ = y |y y|⊗AmpEst y A⊗I m ·|Ψ |0 |0 m . By U we denote the operator y |y y| ⊗ AmpEst y . We show that U can be implemented using O(k·2 m )=O(k/ ) queries to the oracle O at the expense of additional non-query gates which can even be exponential in n. Theorem 1 (Simultaneous Amplitude Estimation). 
Given an oracle O, a description of an algorithm A= y |y y|⊗A y as defined earlier, an initial algorithm A initial , an accuracy parameter and an error parameter δ, SimulAE-Algo uses O( k ) queries to the oracle O and with probability at least 1−δ outputs |Φ = y α y |y |ξ y β 0y , where sin 2 β 0y π 2 m =β 0y is an -estimate of β 0y for each y. Details of the HighDist and HighAmp algorithms and their analysis can be found in Appendix B, and those for the Hadamard test and simultaneous amplitude estimation can be found in Appendix E. P max and Min-Entropy problem The P max problem is a natural extension of HighDist. Algorithm 1 Simultaneous Amplitude Estimation Algorithm SimulAE-Algo Require: Oracle O, the set of indexed algorithms {A y }, the algorithm A initial , accuracy and error δ. 1: Set m= 1 +3. 2: Initialize three registers R 1 R 2 R 3 as |0 n |0 |0 m . 3: Apply A initial on R 1 . 4: Apply A= y |y y|⊗A y on R 1 R 2 . 5: Apply the quantum Fourier transform (QFT) F m on R 3 . 6: for i in 1 to m, conditioned on i th qubit of R 3 being in |1 , for 2 i many times do 7: Apply S χ 8: for j in 1 to k−1 do 9: for y in 0 to N −1 do 10: Apply U (j,y) on R 2 conditioned on R 1 being |y . Apply O on R 2 . 13: end for 14: for y in 0 to N −1 do 15: Apply U (k,y) on R 2 conditioned on R 1 being |y . 16: end for 18: Apply the transpose of the operations from line 8 to line 16 in reverse. 19: end for 20: Apply the inverse QFT F −1 m on R 3 . 21: return R 1 R 2 R 3 . Problem 2 (P max ). Compute p max =max i∈[n] p i given a distribution oracle as required for the HighDist problem. The min-entropy of a distribution D =(p i ) m i=1 is defined as max i∈[m] log(1/p i ) and the Min-Entropy problem is to estimate this value; clearly, estimating it with an additive accuracy is equivalent to estimating max m i=1 p i with relative accuracy. The currently known approach for this problem, in an array setting, involves reducing it to k-Distinctness [14] with a very large k, however, we show that we can perform better if we binary search for the largest threshold successfully found by the HighDist problem. Lemma 3 (Approximating p max with additive error). Given an oracle as required for the High-Dist problem, additive accuracy ∈ (0, 1) and error δ, there is a quantum algorithm that makes O( 1 √ pmax log( 1 )log 1 δ·pmax ) queries to the oracle and outputs an estimate p max such that |p max − p max |≤ with probability 1−δ. The algorithm uses O (log(m)+log( 1 )+a)log There is a similar algorithm that estimates p max as The algorithm for additive accuracy is essentially the IntervalSearch algorithm that we recently proposed [5]. We further modified the binary search boundaries to adapt it for relative accuracy. We are not aware of significant attempts to estimate p max (or min-entropy) using a blackbox generating some distribution, except a result by Valiant and Valiant in which they showed how to approximate the distribution by a histogram [18] that requiresÕ( m 2 logm ) samples, and another by Dutta et al [13] for finding the mode of an array. In the latter work the authors show that the modal element ofÕ( 1 g 2 ) samples from D is the modal element of D with high probability, in which g is the difference of the mode to the second highest frequency. Suppose we are given g or some upper bound. Setting = g 2 in Lemma 3 allows us to obtain the modal element usingÕ( 1 g 3/2 ) queries. 
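As a quick classical illustration of the sampling bound of [13] quoted above: when the largest and second-largest probabilities differ by g, on the order of 1/g² independent draws already make the sample mode coincide with the true mode with high probability. The toy distribution, the constant in the sample size, and the function name below are ours.

```python
import numpy as np

def sample_mode_success_rate(p, n_samples, trials=1000, seed=1):
    """Fraction of trials in which the mode of `n_samples` i.i.d. draws from the
    distribution `p` equals the true mode (empirical check, illustrative constants)."""
    rng = np.random.default_rng(seed)
    true_mode = int(np.argmax(p))
    hits = 0
    for _ in range(trials):
        counts = np.bincount(rng.choice(len(p), size=n_samples, p=p), minlength=len(p))
        hits += int(np.argmax(counts) == true_mode)
    return hits / trials

# Mode probability 0.30, runner-up 0.22, so the gap is g = 0.08.
p = np.array([0.30, 0.22, 0.16, 0.12, 0.10, 0.10])
g = 0.08
print(sample_mode_success_rate(p, n_samples=int(10 / g**2)))   # close to 1.0
```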
The former technique requires keepingÕ( m 2 logm ) elements, and the latter technique requires storage ofÕ( 1 g 3/2 ) elements (each element requires an additional log(m) bits); our technique, on the other hand, requires O(log m log 1 δpmax )=O(log m g log 1 δg ) qubits. Details of the P max algorithms and their analyses can be found in Appendix D. Problems based on arrays and Boolean functions The algorithms for k-distinctness, ∆-Gapped k-Distinctness , F ∞ , and non-linearity estimation are obtained by reducing them to HighDist or P max (see Appendices G, H, and F for details). A subtlety in those reductions is an implementation of O D given an oracle to an array -this is explained in Appendix C. The k-Distinctness and the Gapped k-Distinctness problems The ElementDistinctness problem [8, 2, 1] is being studied for a long time both in the classical and the quantum domain. It is a special case of the k-Distinctness problem [2,3] with k =2. Problem 3 (k-Distinctness). Given an oracle to an n-sized m-valued array A, decide if A has k distinct indices with identical values. By an m-valued array we mean an array whose entries are from {0,...,m − 1}. Observe that, k-Distinctness can be reduced to HighDist with τ = k n , assuming the ability to uniformly sample from A. The best known classical algorithm for k-Distinctness uses sorting and has a time complexity of O(nlog(n)) with a space complexity O(n). In the quantum domain, apart from k =2, the k =3 setting has also been studied earlier [4,11]. The focus of all these algorithms has been primarily to reduce their query complexities. As a result their space requirement is significant (polynomial in the size of the list), and beyond the scope of the currently available quantum backends with a small number of qubits. Recently Li et al. [14] reduced the Min-Entropy problem to k-Distinctness with a very large k making it all the more difficult to implement. The k-Distinctness problem was further generalized to ∆-Gapped k-Distinctness by Montanaro [16] which comes with a promise that either some value appears at least k times or every value appears at most k−∆ times for a given gap ∆. The F ∞ problem [16,9] wants to determine, or approximate, the number of times the most frequent element appears in an array, also known as the modal frequency. Montanaro related this problem to the Gapped k-Distinctness problem but did not provide any specific algorithm and left open its query complexity [16]. So it appears that an efficient algorithm for ∆-Gapped k-Distinctness can positively affect the query complexities of all the above problems. However, ∆-Gapped k-Distinctness has not been studied elsewhere to the best of our knowledge. Upper bounds for the k-Distinctness problem The k =2 version is the ElementDistinctnessproblem which was first solved by Buhrman et al. [8]; their algorithm makes O(n 3/4 log(n)) queries (with roughly the same time complexity), but requires the entire array to be stored using qubits. A better algorithm was later proposed by Ambainis [2] using a quantum walk on a Johnson graph whose nodes represent r-sized subsets of [n], for some suitable parameter r ≥k. He used the same technique to design an algorithm for k-Distinctness as well that usesÕ(r) qubits and O(r+ (n/r) k/2 √ r) queries (with roughly the same time complexity). 
Later Belovs designed a learning-graph for the k-Distinctness problem, but only for constant k, and obtained a tighter bound of O(n Thus it appears that even though efficient algorithms may exist for small values of k, the situation is not very pleasant for large k, especially k = Ω(n) -the learning graph idea may not work (even if the corresponding algorithm could be implemented in a time-efficient manner) and the quantum walk algorithm uses Ω(k) space. Our algorithm addresses this concern and is specifically designed to useÕ(1) qubits; as an added benefit, it works for any k. This algorithm has two attractive features. First is that it improves upon the algorithm proposed by Ambainis for k ≥4 when we require thatÕ(1) space be used, and secondly its query complexity does not increase with k. There have been separate attempts to design algorithms for specific values of k. For example, for k =3 Belovs designed a slightly different algorithm compared to the above [4] and Childs et al. [11] gave a random walk based algorithm both of which uses O(n 5/7 ) queries and O(n 5/7 ) space. These algorithm improved upon the O(n 3/2 )-query algorithm proposed earlier by Ambainis [2]. Our algorithm provides an alternative that matches the query complexity of the latter and can come in handy when a small number of qubits are available. For k that is large, e.g. Ω(n), the query complexity of Ambainis' algorithm is exponential in n and that of ours is O(n 3/2 ). Montanaro used a reduction from the CountDecision problem [17] to prove a lower bound of Ω(n) queries for k =Ω(n) -of course, assuming unrestricted space [16]. Our algorithm matches this lower bound, but with onlyÕ(1) space. Upper bounds for the Gapped k-Distinctness problem The Gapped k-Distinctness problem was introduced by Montanaro [16, Sec 2.3] as a generalization of the k-Distinctness problem to solve the F ∞ problem; we modified the "gap" therein to additive to suit the results of this paper. Problem 4 (∆-Gapped k-Distinctness). This is the same as the k-Distinctness problem along with a promise that either there exists a set of k distinct indices with identical values or no value appears more than k−∆ times. Montanaro observed that this problem can be reduced to F ∞ estimation and vice-versa with a log(n) overhead for binary search; however, he left open an algorithm or the query complexity of this problem. We are able to design a constant space algorithm by reducing it to our HighDist problem. Our results are summarised in Table 1. Upper bounds for F ∞ The F ∞ problem is a special case of the P max problem on a finite array. Problem 5 (F ∞ ). Given an oracle to query an n-sized array A with values in {1,...,m}, compute the frequency of the most frequent element, also known as the modal frequency. Li et al. [14] studied this problem in the context of min-entropy of an array. They reduced the problem of Min-Entropy estimation (of an m-valued array with additive error ∈ (0,1)) to that of k-Distinctness with k = 16log(m) 2 . However they did not proceed further and made the remark that "Existing quantum algorithms for the k-distinctness problem ...do not behave well for super-constant ks.". Indeed, it is possible to run the quantum-walk based algorithm for k-Distinctness [2] and thereby solve F ∞ estimation; this turns out to be not very effective with O(n) query complexity and O(n) space complexity. (See Appendix I for a rough analysis.) 
Instead, we reduce the F ∞ problem to that of HighDist and obtain aÕ(1)-space algorithm to estimate the modal frequency with additive error. Montanaro proposed two methods to accurately compute the modal frequency, one of which closely matches the complexities of our proposed algorithm but our approach has a lower query complexity when =poly(1/n). The results are summarised in Table 2. There is a quantum algorithm to estimate F ∞ with . . . Heavy hitters: A discrete version of the HighDist problem has been studied as "heavy hitters" in the streaming domain, in which items (of an n-sized array) are given to an algorithm one by one, and the algorithm has to identify all items with frequency above a certain threshold, say τn. Since their objective was to return a list of items, naturally they used more thanÕ(1) space; further, even though they employed randomized techniques like sampling and hashing, they processed all items (query complexity is O(n)) [15,12,10]. The space required for all such algorithms areÕ( 1 ) where indicates the permissible error during estimation of frequencies. Our approach decides if there is any heavy hitter, and if there are any, then samples from them; it makes use of onlyÕ log 1 qubits. A key feature of our algorithms is o(1) queries to D. o(1)-query classical algorithms are possible if only sublinear samples are drawn. Valiant and Valiant showed that O( m 2 logm ) samples are sufficient to construct an approximate histogram of D (with support at most m) with additive "error" [18], and further showed that Ω( m logm ) samples are necessary to compute some simple properties of D, such as Shannon entropy. It was not immediately clear to us if their lower bound extends to heavy hitters (or even the presence of heavy hitters); however, their approximate histogram can surely be used to identify them. Our quantum algorithm has a lower query complexityÕ( 1 3/2 ). Non-linearity estimation of a Boolean function Non-linearity of a function f is defined in terms of the largest absolute-value of its Walsh-Hadamard coefficient [5]: Since the output state of the Deutsch-Jozsa circuit is xf (x)|x , i.e., the probability of observing |x isf(x) 2 , it immediately follows that we can utilize the P max algorithm (that in itself uses HighDist) to estimatef 2 max , and hence, non-linearity, with additive inaccuracy. However, instead of HighDist we can use HighAmp and then use the same binary search strategy as P max to estimatef max instead off 2 max . This reduces the number of queries since the complexity of the binary-search based P max algorithm depends upon p max itself, and further a larger inaccuracy can be tolerated (to estimatef max within ±λ, it now suffices to call HighAmp with inaccuracy λ, instead of calling HighDist with inaccuracy λ 2 ). This leads to a quadratic improvement in the query complexity in form ofÕ( 1 λfmax ). Details can be found in Appendix F. A Amplitude amplification, amplitude estimation and majority In this section, we present details on the quantum amplitude estimation and amplitude amplification subroutines that are used as part of our algorithms. We also explain the MAJ operator. A.1 Amplitude amplification The amplitude amplification algorithm (AA) is a generalization of the novel Grover's algorithm. 
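Before the formal setup, a two-dimensional numerical toy may help fix the rotation picture behind amplitude amplification: if A|0⟩ = cos θ|0⟩ + sin θ|1⟩ with |1⟩ the good state, the iterate Q = −A S₀ A† S_χ rotates the state by 2θ in the plane spanned by the good and bad components, so t iterations raise the success probability to sin²((2t+1)θ). The snippet below checks this numerically; the sign conventions follow Brassard et al. [6], and the concrete numbers are ours.

```python
import numpy as np

p = 0.05                                     # initial success probability sin^2(theta)
theta = np.arcsin(np.sqrt(p))
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # A|0> = cos(theta)|0> + sin(theta)|1>
S_chi = np.diag([1.0, -1.0])                 # phase flip on the good state |1>
S_0   = np.diag([-1.0, 1.0])                 # phase flip on |0>
Q = -A @ S_0 @ A.conj().T @ S_chi            # amplitude amplification iterate

state = A @ np.array([1.0, 0.0])
for t in range(1, 8):
    state = Q @ state
    print(t, abs(state[1])**2, np.sin((2*t + 1) * theta)**2)   # the two columns agree
```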
Given an n-qubit algorithm A that outputs the state |φ = k α k |k on |0 n and a set of basis states G={|a } of interest, the goal of the amplitude amplification algorithm is to amplify the amplitude α a corresponding to the basis state |a for all |a ∈G such that the probability that the final measurement output belongs to G is close to 1. In the most general setting, one is given access to the set G via an oracle O G that marks all the states |a ∈G in any given state |φ ; i.e., O G acts as Now, for any G, any state |φ = k α k |k can be written as Notice that the states |ν and |ν are normalized and are orthogonal to each other. The action of the amplitude amplification algorithm can then be given as where β satisfies |β|<δ and δ is the desired error probability. This implies that on measuring the final state of AA, the measurement outcome |a belongs to G with probability |1−β| which is at least 1−δ. A.2 Quantum amplitude estimation (QAE) Consider a quantum circuit A on n qubits whose final state is |ψ on input |0 n . Let |a be some basis state (in the standard basis -this can be easily generalized to any arbitrary basis). Given an accuracy parameter ∈(0,1), the amplitude estimation problem is to estimate the probability p of observing |a upon measuring |ψ in the standard basis, up to an additive accuracy . Brassard et al., in [6], proposed a quantum amplitude estimation circuit, which we call AmpEst, that acts on two registers of size m and n qubits and makes 2 m −1 calls to controlled-A to output an estimatẽ p∈[0,1] of p that behaves as mentioned below. Theorem 2. The amplitude estimation algorithm returns an estimatep that has a confidence interval with probability at least 8 π 2 if k = 1 and with probability at least 1− 1 if k ≥2. It uses exactly 2 m −1 evaluations of the oracle. If p=0 or 1 thenp=p with certainty. The following corollary is obtained directly from the above theorem. Proof. Set k =1 in Theorem 2. Since p≤1, we get p(1−p)≤ 1 2 . Then we have The last inequality follows from the fact that π 2 m < 1 (which is true when m ≥ 2). Now, set m = q +3 to prove the corollary. Now, let p a be the probability of obtaining the basis state |a on measuring the state |ψ . The amplitude estimation circuit referred to above uses an oracle, denoted O a to mark the "good state" |a , and involves measuring the output of the AmpEst circuit in the standard basis; actually, it suffices to only measure the first register. We can summarise the behaviour of the AmpEst circuit (without the final measurement) in the following lemma. Lemma 8. Given an oracle O x that marks |x in some state |ψ , AmpEst on an input state |ψ |0 m generates the following state. where |β x,s | 2 , the probability of obtaining the good estimate, is at least 8 π 2 , and |p x is an m-qubit normalized state of the form |p x =γ + |p x,+ +γ − |p x,− such that for p∈{p x,+ ,p x,− }=S px (say), sin 2 (π p 2 m ) approximates p x up to m−3 bits of accuracy. Further, |E x is an m-qubit error state (normalized) such that any basis state in |E x corresponds to a bad estimate, i.e., we can express it as |E In an alternate setting where the oracle O x is not provided, AmpEst can still be performed if the basis state |a is provided -one can construct a quantum circuit, say EQ, that takes as input |φ |x and marks the state |x of the superposition state |φ as described in section B. We name this extended-AmpEst circuit as EQAmpEst which implements the following operation. 
EQAmpEst |x |ψ |0 m − →|x β x,s |ψ |p x +β x,s |ψ |E x where the notations are as defined above and the quantum circuit EQ is used wherever the oracle O x was used in the previous setting. In such a scenario, since EQAmpEst is a quantum circuit, we could replace the state |x by a superposition x α x |x . We then obtain the following. Corollary 2. Given an EQ circuit, the EQAmpEst on an input state x α x |x |ψ |0 m outputs a final state of the form Notice that on measuring the first and the third registers of the output, with probability |α x β x,s | 2 ≥ 8 π 2 |α x | 2 we would obtain as measurement outcome a pair |a |x where sin 2 (π a 2 m ) =p is within ± 1 2 m−3 of the probability p x of observing the basis state |x when the state |ψ is measured. Observe in this setting that the subroutine essentially estimates the amplitude of all the basis states |x . However, with a single measurement we can obtain the information of at most one of the estimates. We will be using this in HighDist-Algo. A.3 MAJ operator Let X 1 ...X k be Bernoulli random variables with success probability p > 1/2. Let Maj denote their majority value (that appears more than k/2 times). Using Hoeffding's bound 2 , it can be easily proved that Maj has a success probability at least 1−δ, for any given δ, if we choose k ≥ 2p (p−1/2) 2 ln 1 δ . We require a quantum formulation of the same. Suppose we have k copies of the quantum state |ψ =|ψ 0 |0 +|ψ 1 |1 in which we define "success" as observing |0 (without loss of generality) and k is chosen as above. Let p= |ψ 0 2 denote the probability of success. Suppose we measure the final qubit after applying (I k ⊗MAJ) in which the MAJ operator acts on the second registers of each copy of |ψ . Then it is easy to show, essentially using the same analysis as above, that in which |Γ 0 2 ≥1−δ. The MAJ operator can be implemented without additional queries and with poly(k) gates and log(k) qubits. B Algorithms for HighDist and HighAmp problems B.1 Algorithm for HighDist problem We design an algorithm for a promise version of HighDist with additive error, which we refer to as Promise-HighDist. For HighDist we are given a quantum black } are normalized states. Let p x = |α x | 2 denote the probability of observing the first log(m) qubits in the standard-basis |x . The objective of HighDist is to determine whether there exists any x such that p x ≥ τ for any specified threshold τ ∈ (0,1) and the task of P max is to compute max x p x . For this task we generalize QBoundFMax from our earlier work on estimating non-linearity [5, Algorithm 3]. The repurposing of that algorithm follows from three observations. First, QBoundFMax identified whether there exists any basis state whose probability, upon observing the output of a Deutsch-Jozsa circuit, is larger than a threshold in a promised setting; however, no specific property of Deutsch-Jozsa circuit was being used. Secondly, amplitude estimation can be used to estimate |α x | 2 (with bounded error) in x α x |x |ξ x for any x∈[n] by designing a sub-circuit on only the first log(m) qubits to identify "good" states (this sub-circuit was referred to as EQ in QBoundFMax). Lastly, amplifying some states in a superposition retains their relative probabilities. These observations not only allow us to modify the QBoundFMax algorithm for HighDist, but also enable us to identify some x such that |α x | 2 ≥τ, along with an estimate of |α x | 2 . 
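Because amplitude estimation (Section A.2) is the primitive invoked throughout, a direct numerical simulation of the canonical AmpEst circuit on a one-qubit algorithm may help fix how an m-bit outcome y encodes the estimate sin²(πy/2^m) of Theorem 2 and Lemma 8. The sketch below follows the textbook construction of Brassard et al. [6] (uniform phase register, controlled powers of the Grover iterate, inverse QFT); it is not the space-efficient circuit of this paper, and the function name and parameters are ours.

```python
import numpy as np

def simulate_qae(p_true, m):
    """Canonical amplitude estimation for A|0> = cos(t)|0> + sin(t)|1>, good state |1>,
    with an m-qubit phase register.  Returns the decoded estimates sin^2(pi*y/2^m)
    and their measurement probabilities (small numerical sketch)."""
    theta = np.arcsin(np.sqrt(p_true))
    A = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    Q = -A @ np.diag([-1.0, 1.0]) @ A.conj().T @ np.diag([1.0, -1.0])  # eigenphases +-2*theta
    M = 2 ** m
    state = np.zeros((M, 2), dtype=complex)
    cur = (A @ np.array([1.0, 0.0])).astype(complex)
    for k in range(M):                           # (1/sqrt(M)) sum_k |k> Q^k A|0>
        state[k] = cur / np.sqrt(M)
        cur = Q @ cur
    j, k = np.meshgrid(np.arange(M), np.arange(M), indexing="ij")
    F = np.exp(2j * np.pi * j * k / M) / np.sqrt(M)
    state = F.conj().T @ state                   # inverse QFT on the phase register
    probs = np.sum(np.abs(state) ** 2, axis=1)
    estimates = np.sin(np.pi * np.arange(M) / M) ** 2
    return estimates, probs

est, pr = simulate_qae(p_true=0.3, m=6)
print(est[int(np.argmax(pr))])   # the most likely outcome decodes to roughly 0.3
```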
Our space-efficient algorithm for Promise-HighDist requires a few subroutines which we borrow from our earlier work on estimating non-linearity [5]. HD q : When the target qubit is |0 q , and with a q−bit string y in the control register, HD computes the absolute difference of y int from 2 q−1 and outputs it as a string where y int is the integer corresponding to the string y. It can be represented as HD q |y |b = |b⊕ỹ |y where y,b ∈ {0,1} q andỹ is the bit string corresponding to the integer 2 q−1 −y int . Even though the operator HD requires two registers, the second register will always be in the state |0 q and shall be reused by uncomputing (using HD † ) after the CMP gate. For all practical purposes, this operator can be treated as the mapping |y →|ỹ . CMP: The CMP operator is defined as CMP|y 1 |y 2 |b = |y 1 |y 2 |b⊕(y 2 ≤y 1 ) where y 1 ,y 2 ∈ {0,1} n and b∈{0,1}. It simply checks if the integer corresponding to the basis state in the first register is at most that in the second register. The algorithm for Promise-HighDist with additive accuracy is presented as HighDist-Algo in Algorithm 2. The quantum circuit of the algorithm is illustrated in Figure 2. Its operation can be explained in stages. For convenience, let us call the set G={|z :z ∈[m],p z ≥τ} as the 'good' set and its elements as the 'good' states. In the first stage, we initialize the registers R 1 R 2 R 3 in the state |0 r |0 r |τ 1 . We then apply the oracle O D on R 1 and R2 to obtain the state of R 1 and R2 as x∈[m] α x |x |ξ x . Let |x,ξ x denote the state |x |ξ x . In stage two, we initialize c · ln 1 δ 2 τ 2 copies of the registers R k 4 R k 5 in the state 0 l |0 . For all k =1···c·ln 1 δ 2 τ 2 , we then apply amplitude estimation collectively on the registers R 1 ,R 2 and R k 4 in a way that for every basis state |z in the first log(m) qubits of R 1 , a string a z is output on R k 4 such that sin 2 ( azπ 2 q )=p z (say)∈[p z − 1 2 q ,p z + 1 2 q ] with probability at least 8/π 2 . Stage three is essentially about filtering out the good states. We use the subroutines HD q and CMP to perform the filtering and marking all the good states |z by flipping the state of R 5 to |1 for such states. So, the state in the circuit after stage three is |ψ 3 = x∈[n] α x |x,ξ x |φ |a x |τ 1 |p x ≥τ 1 . Notice that the probability of measuring R 5 as |1 in |ψ 3 is either 0 or is lower bounded by τ due to the promise. In stage four, for each basis state |x , we perform a conditional majority over all R k 5 registers conditioned on the R 1 being |x and store the result in a new register R f . This stage ensures that the error caused due to amplitude estimation does not amplify to more than δ during the amplitude amplification in stage five. Finally at stage five, we use the amplitude amplification to amplify the probability of obtaining the state |1 in R f . Now for any x that is marked, we havep x ≥ τ 1 . If the probability of observing |1 in 7: Stage 3: Use HD l on R 3 and R k 4 individually. 8: Use CMP on R 3 =|τ 1 and R k 4 as input registers and R k 5 as output register. 9: Use HD † l on R 3 and R k 4 individually. times on R f with error at most δ/2 using |1 as the good state and measure R f as out. 13: If out=|1 return TRUE else return FALSE R f is non-zero, then since we have a lower bound on that probability, we have an upper bound on the number of amplifications needed to observe the state |1 in R f with high probability. 
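Since HD_q and CMP act only on computational-basis labels, their effect can be previewed with ordinary bit-string arithmetic. The sketch below renders both maps classically; the q = 4 example also shows that HD sends the two outcomes a and 2^q − a to the same string, consistent with the fact that both encode the same estimate sin²(πa/2^q). Function names and the example values are ours.

```python
def hd(y_bits, q):
    """HD_q on a basis label: map the q-bit string y to the q-bit string that
    encodes |2^(q-1) - y_int|, the absolute difference of y from 2^(q-1)."""
    return format(abs(2 ** (q - 1) - int(y_bits, 2)), f"0{q}b")

def cmp_flag(y1_bits, y2_bits, b=0):
    """CMP on basis labels: flip the flag bit b iff y2 <= y1 (as integers)."""
    return b ^ int(int(y2_bits, 2) <= int(y1_bits, 2))

q = 4
print(hd("0011", q), hd("1101", q))   # '0101' and '0101': a = 3 and a = 13 coincide
print(cmp_flag("0101", "0100"))       # 1, since 4 <= 5
```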
The above exposition is a simplified explanation of the algorithm that does not take into account errors and inaccuracies, especially those arising from amplitude estimating and interfering with amplitude amplification. The detailed proof of correctness and query complexity of the algorithm is discussed in the proof of Lemma 1 below. Algorithm HighDist-Algo contains an easter egg. When the output register is observed in the state |1 , for majority of k ∈ {1,2,...clog 1 δ 2 τ 2 }, R k 5 would be in |1 with high probability; the index register R 1 would contain some superposition of all good x's. The algorithm for Promise-HighDist with additive error can be used to solve Promise-HighDist with relative error r by setting = r τ Lemma 1 (Additive-error algorithm for HighDist). Given an oracle O D for the HighDist problem, m -the domain-length of the distribution it generates, and a threshold τ, along with parameters 0 < < τ for additive accuracy and δ for error, HighDist-Algo is quantum algorithm that uses O (log(n)+log 1 +a)log 1 δτ qubits and makes O( 1 √ τ log 1 δτ ) queries to O D . When its final state is measured in the standard basis, we observe the following. 1. If p x <τ − for all x then the output register is observed in the state |0 with probability at least 1−δ. 2. If p x ≥τ for any x, then with probability at least 1−δ the output register is observed in the state |1 . Proof. Before we provide the correctness of the algorithm we introduce a few propositions that will be useful in proving the correctness of the algorithm. We now analyse the algorithm. Recall that O D 0 log(m) |0 a = m−1 x=0 α x |x |ξ x which we denote |φ , and Stage-1: Consider the registers R 1 R 2 R 3 along with one of the cln 1 δ 2 τ 2 independent copies and neglect the superscript on the registers. The state of the circuit after stage-1, just before amplitude estimation, is Stage-2: After the amplitude estimation step, we obtain a state of the form where |a x is a normalized state of the form |a x = γ + |a x,+ +γ − |a x,− that on measurement outputs a ∈ {a x,+ , a x,+ } which is an l-bit string that behaves as sin 2 aπ 2 l −p x ≤ 1 2 q . We denote the set {a x,+ ,a x,− } by S ax . Stage-3: Notice that stage 3 affects only the registers R 3 ,R 4 and R 5 . For any computational basis state |u and |v , the transformation of a state of the form |u |v |0 due to stage 3 can be given as We will analyse the states |I{a x± ≤τ 1 } by considering two types of index x∈[m]. Stage-4 and Stage-5: It is evident from the above analysis that R 5 is correctly set to |0 or |1 for x such that p x <τ − or p x ≥τ, respectively, however only with certain probability. In fact, amplitude estimation will not succeed with some probability, and will yield some a ∈S ax in R 4 some of which may produce erroneous results in R 5 after comparison with τ 1 in R 3 . We need to pin down the probability of error to analyse this stage. For this, we consider the two scenarios corresponding to the promises of Promise-HighDist. Case (i): Consider the case when for all x ∈ [m], p x < τ − . Then, that state after stage 3 can be written as Recall that β x,s ≤ 1− 8 π 2 < 0.2 for all x. Therefore, on measuring R 5 , the probability of obtaining |1 (false positive) can be given as At this point we perform the conditional majority operator on clog 1 δ 2 τ 2 independent copies of R 5 conditioned on R 1 being in the basis state |x for each x ∈ {0,1} n . 
Then using Hoeffding's inequality, the following relation is straight forward for each x∈{0,1} n : As this relation is true for each x∈{0,1} n , we have that P r R f =|1 ≤δ 2 τ 2 . In stage 5 we perform any of the amplitude amplification algorithms that can operate using a lower bound on the success probability [6]. Since there are many such methods, we avoid choosing any specific one; however, all of them will involve some m iterations where m = O( t δ √ 0.8τ ) for some suitable t. We will now show that even after amplitude amplification with m iterations, the probability of false positive will be at most δ. Notice that O( t δ δ·τ ) iterations are required to amplify a minimum probability of δ 2 τ 2 to δ. But Then the state after stage 3 can be given as Notice that in the above summation, we simply break the state |ψ 3 into two summands of which one contains the summation over all x∈G and the other contains the summation over all x / ∈G. Next, in stage-4, for every x∈{0,1} n , conditioned on the register R 1 being in state |x , we perform a conditional majority over all the R k 5 registers and store the output in R f . Then, using Hoeffding's bound as in case(i), we get that for any x∈G, and for any x / ∈G we have, Therefore, the overall probability of obtaining |1 in R f after stage-5 can be expressed as under the reasonable assumption that the target error probability δ < 1 2 . Now, we present the query complexity of the algorithm. It is obvious that the number of calls made by amplitude estimation with accuracy 1 2 q and error at most 1− 8 π 2 is O(2 q )=O( 1 ). The subroutines HD q and CMP are query independent. In total, we perform log 1 δ =O(log 1 δτ ) many independent estimates and comparisons in the worst case. Again, computing the majority does not require any oracle queries. In the final stage, we perform the amplitude amplification with O( 1 B.2 Algorithm for HighAmp problem We present the algorithm for HighAmp problem as Algorithm 3. The algorithm for HighAmp differs from the HighDist-Algo only at stages-1 and 2. Algorithm 3 Algorithm HighAmp-Algo for HighAmp problem Require: Oracle O D (with parameters m, a), threshold τ, accuracy and error δ. 8: Stage 3: Use HD l on R 3 and R k 4 individually. 9: Use CMP on R 3 =|τ 1 and R k 4 as input registers and R k 5 as output register. 10: Use HD † l on R 3 and R k 4 individually. 11: end loop 12: Stage 4: For each basis state |x in R 1 , for i = 1...c·ln 1 δ 2 τ 2 compute the majority of the basis states of each R i 5 register conditioned on the R 1 to be in |x , and store the result in R f . 13: Stage 5: Apply Amplitude Amplification (AA) O( 1 √ τ ) times on R f with error at most δ/2 using |0 as the good state and measure R f as out. 14: If out=|0 return TRUE else return FALSE Before we prove the correctness of the algorithm, we establish the following proposition whose proof is straightforward. Proposition 3. For any α x , a threshold τ and some , Proof of Algorithm 3. We now analyse the algorithm stage by stage. Stage-1: Consider the registers R 1 R 21 R 22 R 3 . The state of these registers after stage-1 can be given as where |φ = x α x |x,ξ x . Given that the state in R 1 is |x,ξ x , the probability of obtaining |0 in R 21 can then be given as Using this, |ψ 1 can be given as for some normalized states |η x0 and |η x1 where |ν x0 | 2 = 1 2 (1−|α x |). Stage-2: Now consider the registers R 1 R 21 R 22 R 3 along with one of the cln 1 δ 2 τ 2 independent copies. Neglect the superscript on the registers. 
Notice that the simultaneous amplitude estimation is performed on the registers R 1 R 21 R 4 . This operation can be given as x |x x|⊗AmpEst x where AmpEst x uses the Grover iterator G x =−A x U 0 A x U 0 and A x is the algorithm that acts as A x |0 =|x . Then, we obtain a state of the form where |a x is a normalized state of the form |a x = γ + |a x,+ +γ − |a x,− that on measurement outputs a ∈ {a x,+ ,a x,+ } which is an l-bit string that behaves as sin 2 aπ 2 l −|ν x0 | 2 ≤ 1 2 q . We denote the set {a x,+ ,a x,− } by S ax . Stages-3 to 5: Let by p x and p τ we denote ν 2 x0 and 1 2 (1−τ) = • τ. Then from Proposition 3, we can observe that the only possible cases for any x∈{0,1} n are p x ≤ • τ − and p x > • τ. The proof of correctness for these cases follow directly from the proof for HighDist-Algo. Hence, at the end of Stage-4 we have that for any x such that p x > • τ, and for any x such that p x ≤ • τ − we have, So, using Proposition 3, we get that for any x such that |α x |<τ, P r R f =|0 R 1 =|x <δ, and for any x such that |α x |≥τ +2 we have, Therefore, the overall probability of obtaining |1 in R f after stage-5 can be expressed as assuming that δ < 1 2 . For the query complexity of this algorithm, we use O( 1 ) queries in SAE and perform the estimation in a total of O(log 1 δτ ) independent copies. In the last stage, the number of iterations of amplitude amplification done is O( 1 τ ). Hence, we have to query complexity as O( 1 τ log 1 δτ ) queries. C Choice of oracles Bravyi et al. [7] worked on designing quantum algorithms to analyse probability distributions induced by multisets. They considered an oracle, say O S , to query an n-sized multiset, say S, in which an element can take one of m values. Hence, the probabilities in the distribution of elements in those multisets are always multiples of 1/n. They further proved that the query complexity of an algorithm in this oracle model is same as the sample complexity when sampled from the said distribution in a classical scenario. Li and Wu [14] too used the same type of oracles for estimating entropies of a multiset. We consider a general oracle in which the probabilities can be any real number, and are encoded in the amplitudes of the superposition generated by an oracle. We show below how to implement an oracle of our type for any distribution D, denoted O D , using O S . It should be noted that one call to O D invokes O S only once. Here the |ξ j states are normalized, and the probability of observing the second register is |α j | 2 . Hence, ignoring the first register gives us the desired output of O D 0 log(m) in the second register. We use O D for HighDist and P max , and O S for the other array-based problems, namely, F ∞ and variants of element distinctness. D Algorithm for P max problem D.1 P max problem with additive accuracy An algorithm for Promise-HighDist, with additive accuracy set to some e, is used to decide whether to search in the right half or the left half. It suffices to choose k and e such that 1/2 k ≤ /2 and e≤ /2 and repeatedly call the Promise-HighDist algorithm with accuracy e. Suppose t 2 k is the threshold passed to the Promise-HighDist algorithm at some point. Then, if the algorithm returns TRUE then p max ≥ t 2 k −e, and so we continue to search towards the right of the current threshold; on the other hand if the algorithm returns FALSE then p max < t 2 k , so we search towards its left. At the end some t is obtained such that p max ∈ [ t 2 k −e, t+1 2 k ), an interval of length at most . 
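The following classical stub renders this threshold search, with the Promise-HighDist decision abstracted into a black-box callable (here an exact, noise-free stand-in): TRUE is interpreted as p_max ≥ τ − e and FALSE as p_max < τ. The boundary bookkeeping is simplified with respect to the IntervalSearch listing that follows, and all names and the toy distribution are ours.

```python
import numpy as np

def pmax_by_threshold_search(highdist_decide, eps):
    """Binary search for an interval of length <= eps containing p_max, driven by
    a Promise-HighDist decision procedure `highdist_decide(tau, e)` (a classical
    stand-in here; the quantum version is what makes this query-efficient)."""
    k = int(np.ceil(np.log2(1.0 / eps))) + 1        # so that 1/2**k <= eps/2
    e = eps / 4.0
    tau, lower, upper = 0.5, 0.0, 1.0
    for i in range(1, k + 1):
        if highdist_decide(tau, e):                 # p_max >= tau - e: move right
            lower, tau = tau - e, tau + 1.0 / 2 ** (i + 1)
        else:                                       # p_max < tau: move left
            upper, tau = tau, tau - 1.0 / 2 ** (i + 1)
    return lower, upper

p = np.array([0.12, 0.37, 0.08, 0.43])
decide = lambda tau, e: bool(np.max(p) >= tau - e)   # exact stand-in for the oracle call
print(pmax_by_threshold_search(decide, eps=0.01))    # a short interval containing 0.43
```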
This is the idea behind the IntervalSearch algorithm from our earlier work on non-linearity estimation [5, Algorithm 1]. Once such a t is obtained, t/2 k can be output as an estimate of p max which is at most away from the actual value. Lemma 3 follows from Lemma 1 and the observation that k binary searches have to be performed. We now describe a quantum algorithm to estimate max x∈[m] p x = |α x | 2 with an additive accuracy given a quantum black-box O D with the following behaviour. The black-box generates the distribution D =(p x ) m x=1 when its first log(m) qubits are measured in the standard basis. There is a similar algorithm that estimates p max as (1− ) 2 p max ≤p max ≤ p max usingÕ( m 3/2 ) queries on O (log m +a)log 1 δpmax qubits. We design an algorithm namely IntervalSearch to prove the lemma. The algorithm originally appeared in [5]. The idea behind Algorithm IntervalSearch is quite simple. The algorithm essentially combines the HighDist-Algo with the classical binary search. Recall that given any threshold τ, accuracy and error δ, if HighDist-Algo outputs TRUE then p max ≥τ −2 else if the output is FALSE then p max <τ. The IntervalSearch algorithm is as presented in Algorithm 4. Algorithm 4 Algorithm IntervalSearch to find out an -length interval containing max i∈[n] p i Require: Distribution oracle O D , size of the oracle r =log(m)+a, size of the distribution m, accuracy and probability of error δ Set k = log 2 1 +1 k is the smallest integer s.t. 1 2 k ≤ 2 ; thus, 4 < 1 2 k ≤ 2 Set gap g = 4 Set boundaries lower = 1 n , upper =1 and threshold τ = 1 Update upper =τ, τ =τ − 1 2 i+1 ; lower is unchanged end if end for return [lower,upper) Proof. Notice that the interval [lower,upper) at the start of the i th iteration is such that the size of the interval is either 1 2 i or 1 2 i −g. The algorithm essentially attempts to find a τ which is a multiple of 1 2 k in such a way that at k−1 th iteration, τ is (almost) the center of an interval J of size 1 2 k−1 and After iteration 1 After iteration 2 After iteration k Figure 3: Illustration of the IntervalSearch algorithm p max ∈J. It is clear that after the k th iteration the algorithm returns an interval of the form [ t 2 k −g, t+1 2 k ) for t ∈ {1,2,···2 k − 1} and the length of the returned interval is at most 1 2 k + g ≤ 2 + g ≤ as desired. The correctness of the algorithm then follows from the correctness of HighDist-Algo. IntervalSearch makes k =O(log 1 ) invocations of the HighDist-Algo. Since the accuracy parameter of each invocation of HighDist-Algo in IntervalSearch is 4 and the error parameter is δ k , from Lemma 1 we get that the query complexity of each invocation of HighDist-Algo is O( 1 √ τ i log log( ) log 1 δτ i ) where τ i denotes the threshold at iteration i. Hence, we get the total query complexity of IntervalSearch as ). The last equality uses the fact that τ i ≥p max /2 for any i∈[k]. Now, since each time HighDist-Algo is invoked with the error parameter k δ , using union bounds we can say that the IntervalSearch algorithm returns an erred output with probability at most δ. D.2 P max problem with relative accuracy The algorithm for P max with relative accuracy, denoted r , follows a similar idea as that of its additive accuracy version, except that it searches among the thresholds 1,(1− r ),(1− r ) 2 ,...,(1− r ) k−1 in which k is chosen to be the smallest integer for which (1− r ) k−1 ≤ 1 m . Further, it calls the above algorithm for Promise-HighDist with relative error r . 
At the end of the binary search among the k thresholds, we obtain some t such that p max ∈[(1− r )(1− r ) t+1 ,(1− r ) t ). Clearly, if we output (1− r ) t as the estimate p max , then p max ≤ p max and p max ≥(1− r ) 2 p max as required. Now, we present an algorithm to approximate p max with relative error. Lemma (Approximating p max with relative error). Given an oracle as required for the HighDist problem, relative accuracy ∈(0,1) and error δ, there is a quantum algorithm that makesÕ( m 3/2 ) queries to the oracle and outputs an estimatep max such that with probability 1−δ, it holds that (1− )p max ≤p max <p max . The algorithm uses log m +a) qubits. To solve the relative version of the P max problem, we introduce a relative version of IntervalSearch which we call IntervalSearchRel. Similar to the IntervalSearch algorithm, IntervalSearchRel also combines the HighDist-Algo with a classical binary search. But here the binary search is over the powers of (1− ) where =(1− √ 1− ) rather than on intervals of length 1 2 k . The algorithm is as in Algorithm 5. Proof. First observe that for any relative accuracy and a threshold τ, deciding the HighDist problem with relative accuracy is equivalent to deciding the additive HighDist problem with additive accuracy τ. So, HighDist-Algo(r,m,τ,( τ), δ k ) even though is in additive terms, essentially solves the HighDist problem with relative accuracy . Next, in IntervalSearchRel, at the end of i th iteration, any interval E Simultaneous Amplitude Estimation and Hadamard Test E.1 Simultaneous Amplitude Estimation Let [N] = {y : 0 ≤ y < 2 n − 1 = N − 1} be an index set for some n ∈ N. Let A be an algorithm defined as A= y |y y|⊗A y where A y s are algorithms indexed by y and all of which use some oracle O. Also let the number of times O is called in any A y is k. Then for each y, A y can be given as A y =U (k,y) OU (k−1,y) ···U (1,y) OU (0,y) with suitable U (i,y) unitaries. Let the action of A y on |0 be defined as A y |0 = β 0y |0 +β 1y |1 (This can also be easily generalized if A y s are n qubit algorithms.) So, the action of A on a state of the form y α y |y |0 can be given as A y α y |y |0 = y α y |y β 0y |0 +β 1y |1 = y α y |y |ξ y (say)=|Ψ . Now, without loss of generality assume that |0 is the good state and our objective is to obtain the estimates of β 0y s in parallel using some extra ancilla qubits, i.e, we would like to obtain a state of the form |Φ = y α y |y |ξ y β 0y where, for each y, sin 2 β 0y π 2 m =β 0y is an estimate of β 0y such that |β 0y −β 0y |≤ for some given 0< ≤1. We call this problem of simultaneous estimation of β 0y s as SimulAEProb problem. A naive approach to solve this problem would be to perform amplitude estimation of the state |ξ y conditioned on the first register being in |y for each individual y. Then, the total number of queries to the oracle O would be O( Nk ) where O(k/ ) is the query complexity due to a single amplitude estimation. However, this is very costly. We give an algorithm that performs the same task but with just O( k ) queries to the oracle O. We denote the amplitude estimation algorithm due to Brassard et al. [6] as AmpEst. The amplitude estimation algorithm to obtain an estimate with m bits of precision can be given as AmpEst where F m is the Fourier transform on m qubits, Λ m (G) is the conditional operator defined as x |x x|⊗G x , G=−AS 0 AS χ is the Grover operator and G x implies that the G operator is applied x times in succession. 
Also let AmpEst y be defined as for j in 1 to k−1 do 7: for y in 0 to N −1 do 8: Apply U (j,y) on R 2 conditioned on R 1 being |y . Apply O on R 2 . 11: end for 12: for y in 0 to N −1 do 13: Apply U (k,y) on R 2 conditioned R 1 being |y . 14: end for 15: end for 16: Apply the inverse QFT F −1 m on R 3 . 17: return R 1 R 2 R 3 . where sin 2 β 0y π 2 m =β 0y is an -estimate of β 0y for each y. Before we proceed to prove Theorem 3, consider the following lemmas which would be useful in proving Theorem 3. Here C (i,p) (U) denotes the operator I i−1 ⊗|p p|⊗I m−i ⊗U. where δ p,q =1 if p=q and 0 otherwise. Proof. Lemma 11. For any two unitaries A and B, we have Proof. Notice that the middle operator in the above equation can be rephrased as: Now see that for any i, Next, since we have A y =U (k,y) OU (k−1,y) ···U (1,y) OU (0,y) , we can write Each of the y |y y|⊗ C (i,1) U (j,y) +C (i,0) (I) terms can be implemented as |y y|⊗ C (i,1) U (j,y) +C (i,0) (I) + x =y |x x|⊗I m ⊗I which can be identified as a sequence of N controlled gates that do not use any queries to the oracle O. Next notice that for each i, the operator I n ⊗ C (i,1) (O)+C (i,0) (I) is applied independent of the state in the first register. So, this operator can be implemented as a single controlled-oracle operation that uses 1 oracle query. With that we can see that the number of oracle queries required to implement y |y y|⊗ C (i,1) −A y +C (i,0) (I) (operator in Equation 6). is exactly k. Using similar analysis for the operator y |y y|⊗ C (i,1) A † y +C (i,0) (I) , we can see that the required number of oracle queries required to implement this operator is k. Now, using the equivalence between the operators in Equation 4 and Equation 5, the total number of oracle queries required for the operation in Equation 4, can be calculated as 2k·2 i since the controlled-Grover operator is applied 2 i times. This in turn implies that the total number of calls to oracle O that is required to implement the operation in Equation 3 is m i=1 2k·2 i =O(k·2 m ). Since, we have set m=O(1/ ) we get the query complexity of SimulAE-Algo as O(k/ ). E.2 Hadamard test for inner product estimation Suppose that we have two algorithms A ψ and A φ that generate the state A ψ |0 n =|ψ and A φ |0 n =|φ respectively. Our task is to return an estimate to | ψ|φ | with accuracy. Since, we have description of both A ψ and A φ , it is quite straightforward to estimate the probability of obtaining |0 n in the state A † ψ A φ |0 n with 2 accuracy from which one can obtain an estimate of | ψ|φ | with accuracy. The query complexity of such an algorithm would be O(1/ 2 ). We show that obtaining such an estimate is possible with just O(1/ ) queries to A φ and A ψ . Now, consider the following algorithm: Proof of Algorithm 7. The state evolution in Algorithm 7 can be seen as follows: The probability of measuring the R 1 register as |0 in the final state can be calculated as P r |0 R 1 = 1 2 |ψ +|φ 2 = 1 2 1− ψ|φ . Observe that to obtain | ψ|φ | with accuracy, it suffices to estimate 1 2 1−| ψ|φ | with /2 accuracy which can be performed by the quantum amplitude amplification algorithm using O(1/ ) queries to A φ and A ψ . F Non-linearity Estimation The non-linearity estimation problem is essentially the amplitude version of the P max problem with the Deutsch-Jozsa circuit as the oracle O D . Combining the algorithm for HighAmp with the intervalsearch algorithm, we obtain the following lemma. 
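The inner-product primitive behind this estimator (Appendix E.2, Algorithm 7) is easy to check numerically. The sketch below simulates the Hadamard-test circuit for two explicitly given single-qubit states; the states and preparation routines are stand-ins. Depending on which preparation is attached to which ancilla branch, the probability of reading |0⟩ on the ancilla is (1 ± Re⟨ψ|φ⟩)/2; in either convention it is an affine (rather than quadratic) function of the overlap, which is exactly what permits an O(1/ε) estimate via amplitude estimation.

```python
import numpy as np

def hadamard_test_p0(psi, phi):
    """Ancilla-|0> probability of the Hadamard test: H on the ancilla, prepare |psi>
    on the |0> branch and |phi> on the |1> branch, H again, measure the ancilla.
    (Numerical sketch; phase conventions may differ from Algorithm 7.)"""
    psi, phi = np.asarray(psi, complex), np.asarray(phi, complex)
    branch0 = (psi + phi) / 2.0            # ancilla-|0> branch after the final H
    return float(np.vdot(branch0, branch0).real)

psi = np.array([1.0, 0.0])
phi = np.array([np.cos(0.4), np.sin(0.4)])
print(hadamard_test_p0(psi, phi), 0.5 * (1 + np.vdot(psi, phi).real))   # equal
```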
G Application of HighDist for k-Distinctness In [16], Montanaro hinted at a possible algorithm for the promise problem ∆-Gapped k-Distinctness by reducing it to the F ∞ problem 3 . The idea is to estimate the modal frequency of an array A up to an additive accuracy ∆/2 and then use this estimate to decide if there is some element of A with frequency at least k. The query complexity would be same as that of F ∞ . Here we show a reduction from ∆-Gapped k-Distinctness to a promise version of HighDist which allows us to shave off a log n ∆ factor from the above complexity. For ∆-Gapped k-Distinctness we are given an oracle O S to access the elements of A. First use O S to implement an oracle O D for the distribution D =(p i ) m i=1 induced by the frequencies of the values in A. Then call the algorithm for Promise-HighDist with threshold k/n and additive accuracy ∆/n. Now observe that if there exists some i∈ [1,...m] whose frequency is at least k, then p i ≥ k n , and the Promise-HighDist algorithm will return TRUE. On the other hand, if the frequency of every element is less than k−∆, then for all i, p i < k n − ∆ n ; the Promise-HighDist algorithm will return FALSE. The query complexity of this algorithm isÕ 1 ∆/n 1 √ k/n which proves Lemma 5. The space complexity is the same as that of solving Promise-HighDist problem. As for Lemma 4, it is easy to see that k-distinctness is equivalent to ∆-Gapped k-Distinctness with ∆=1 and so the above algorithm can be used. H Application of P max for F ∞ To compute the modal frequency of an array A, given an oracle O S to it, we first use O S to implement O D whose amplitudes contain the distribution D A induced by the values of A: D A = (p i ) m i=1 where p i =|{i∈[n]:A[i]=x}|/n. Then we can use the algorithms for P max for O D . The estimate obtained from that algorithm has to rescaled by multiplying it by n to obtain an estimate of the largest frequency of A. If we call the additive accuracy algorithm for P max with accuracy set to /n, then we get an estimate of F ∞ with additive error . No such scaling of the error is required if we call the relative accuracy algorithm for P max to obtain an estimate of F ∞ with relative error. Thus Lemma 6 is proved. I Complexity analysis of P max estimation by Li et al. [14] It is well known that the current best known algorithm for solving k-distinctness problem for any general k is the quantum walk based algorithm due to Ambainis [2] which has a query complexity of O(n k/k+1 ). Here, we show that using that quantum walk based algorithm, the query complexity of P max estimation algorithm proposed in [14,Algorithm 7], which we call LiWuAlgo, with relative error is in fact O(n). Theorem 7.1 of [14] states that the quantum query complexity of approximating max i∈[n] p i within a multiplicative error 0< ≤1 with success probability at least Ω(1) using LiWuAlgois the query complexity of 16log(n) ≥n/2. The second last equality is due to the fact that n 1 log(n) = e. So for any relative error , the algorithm makes O(n) queries to the oracle. J Reductions between problems In this section, we describe all the reductions between various problems encountered in this draft. HighDist T P max : Given HighDist (O D ,τ, ), solve P max (O D , /3) and return TRUE ifp max ≥ τ − 2 else return FALSE.
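To make these reductions concrete, the classical sketch below strings together the constructions of Appendices C, G and H: build the induced distribution D_A of an array, decide ∆-Gapped k-Distinctness through a Promise-HighDist call at threshold k/n with accuracy ∆/n, and estimate F∞ by rescaling a p_max estimate by n. The quantum subroutines are replaced by exact classical stand-ins, so only the plumbing of the reductions is illustrated, not their query or space complexities; all names are ours.

```python
import numpy as np

def induced_distribution(A, m):
    """D_A = (p_j) with p_j = |{i : A[i] = j}| / n, as in Appendix C."""
    return np.bincount(np.asarray(A), minlength=m) / len(A)

def gapped_k_distinctness(A, m, k, delta, highdist_decide):
    """Delta-Gapped k-Distinctness via a Promise-HighDist call with threshold k/n
    and additive accuracy delta/n (Appendix G); `highdist_decide` is a stand-in."""
    return highdist_decide(induced_distribution(A, m), k / len(A), delta / len(A))

def f_infinity_estimate(A, m, pmax_estimator):
    """Modal frequency as n times a p_max estimate of D_A (Appendix H)."""
    return len(A) * pmax_estimator(induced_distribution(A, m))

decide = lambda p, tau, e: bool(np.any(p >= tau - e))   # exact stand-ins for the
pmax = lambda p: float(np.max(p))                        # quantum subroutines

A = [3, 1, 3, 2, 3, 1, 3, 0]                             # n = 8, value 3 appears 4 times
print(gapped_k_distinctness(A, m=4, k=4, delta=2, highdist_decide=decide))   # True
print(f_infinity_estimate(A, m=4, pmax_estimator=pmax))                      # 4.0
```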
Excitation spectra of a 3He impurity on 4He clusters The diffusion Monte Carlo technique is used to calculate and analyze the excitation spectrum of a single 3He atom bound to a cluster with N 4He atoms, with the aim of establishing the most adequate filling ordering of single-fermion orbits to the mixed clusters with a large number of 3He atoms. The resulting ordering looks like the rotational spectrum of a diatomic molecule, being classified only by the angular momentum of the level, although vibrational-like excitations appear at higher energies for sufficiently large N. I. INTRODUCTION The study of isotopic 4 He-3 He mixed clusters is attracting a growing interest in recent years. From the experimental viewpoint, the diffraction of clusters from a transmission grating [1] has opened new perspectives in the detection and identification of small clusters. There is no evidence for the existence of the dimer 4 He 3 He, but clusters 4 He N 3 He with N = 2, 3, 4 have been definitely detected [2]. It seems possible at present to resolve clearly the clusters of mass up to about 25 amu [3]. As the weakly van der Waals He-He interaction is isotope independent, the properties of such mixed clusters are determined solely by quantal effects, namely the different zero-point motion and the different statistics of the two isotopes. It turns out that helium clusters are weakly bound systems, and the lighter ones are challenging for microscopic theoretical methods. The stability of small mixed clusters has been the object of several recent microscopic studies. Guardiola and Navarro have investigated clusters containing up to eight 4 He atoms and up to 20 3 He atoms, based on both a Variational Monte Carlo (VMC) wave function [4,5] and the diffusion Monte Carlo (DMC) method [6] in the fixed-node approximation. Bressanini and collaborators [7,8,9] have considered clusters with up to 17 4 He and up to three 3 He atoms, by means of the DMC method, also in the fixed node approximation. The DMC description is based in an importance sampling wave function which plays a triple role: it controls the variance of the ground-state energy, it carries on the quantum numbers and other properties of the considered cluster, and it specifies the nodal (or set of nodal) surfaces. In particular, in Refs. [4,5,6] the antisymmetry required for fermions has been taken into account by means of two Slater determinants, one for each spin orientation, which have been built up in terms of harmonic polynomials in the Cartesian coordinates of each fermion. Moreover, a harmonic-oscillator (HO) ordering of the fermionic shells has been assumed. Although this seems a reasonable hypothesis, supported by the findings of density-functional calculations in medium sized mixed droplets [10], from a microscopic point of view there are no conclusive a priori arguments in favor of such an ordering. For instance, to describe mixed systems with three 3 He atoms, a configuration with total angular momentum L = 0 has been assumed in Refs. [8,9], which corresponds to the filling of the single-particle levels 1s 2 2s. In contrast, the 1s 2 1p HO ordering, which has been assumed in Refs. [4,5,6], results in an angular momentum L = 1. The comparison of respective binding energies indicates that the L = 1 state has a lower energy than the L = 0 one. In conclusion, a more general criterion to select the shell ordering needs to be specified. The aim of this paper is to determine the excitation spectrum of a single 3 He atom bound to a 4 He N cluster. 
The ordering of these single-particle levels will be relevant to describe mixed clusters with a higher number of 3 He atoms. It is also worth noticing that the knowledge of the one-fermion spectra in terms of the number of bosons is also relevant to determine the constants entering the rate equations which establish the formation chemical process [11], and thus the abundances in the production experiments. Our calculations are based in the DMC method, using an importance sampling wave function which carries out the orbital angular momentum of the 3 He relative to the 4 He N cluster. The DMC procedure is thus adapted so as to determine the lowest energy state of the subspace of orbital angular momentum L. In order to obtain the excited states within each subspace of angular momentum L we use an optimized form of the upper bounds provided by the sum rules method. The paper is organized as follows. In Section II we briefly review previous investigations of 4 He N 3 He clusters. In Section III we present a detailed description of the method used to study the ground and excited states of these clusters. Our results are presented and discussed in Section IV. Some final comments are given in Section V. II. A SURVEY OF PREVIOUS RESULTS ON 4 HEN 3 HE CLUSTERS A pure 4 He N +1 cluster is described by the Hamiltonian where m 4 is the mass of 4 He, and V (r) is the interaction potential. Recent forms [13,14,15] of this interaction are quite similar and we will use along the paper the one known as HFD-B [13] potential. Given that the He-He interaction is a consequence of the interaction between the electrons in the atoms, it is independent of the mass or the spin of the nucleus. To convert the (N + 1)-th atom into an 3 He atom, thus dealing with the cluster 4 He N 3 He, corresponds to a simple change in the Hamiltonian where m 3 is the mass of 3 He. The corresponding manybody problem is then not much different from that of a pure 4 He cluster. In the rest of the paper we will use the subindex F instead of N + 1, to alleviate the notation. Mixed 4 He N 3 He clusters containing a single fermion have been investigated using several theoretical methods. The first systematic study of the excitation spectrum of the 3 He atom was made by Dalfovo [16], based on a zerorange density functional. The use of a non-local finiterange density functional [10] results in small quantitative differences, related to the fact that finite-range functionals are more repulsive than zero-range ones. As the size on the drop increases, the 3 He atom is pushed to the surface region, due to its large zero-point motion, and for large enough clusters, the centrifugal term L(L + 1)/r 2 entering the Schrödinger equation for the 3 He atom can be treated as a perturbation. Actually [12], the general trend of the spectrum is a series of tight rotational bands on top of radial excitations, related to the number of nodes in the radial wave function. In the limit of very large N , the spectrum of the 3 He atom becomes independent of the quantum angular momentum, forming the analogous of the two-dimensional Andreev states in bulk helium. The so-called Lekner approximation was used in Ref. [17], where a VMC calculation was performed, as well as in Ref. [18], based on the Hypernetted-Chain method in its optimized version. 
Such an approximation, used by Lekner [19] to analyze the Andreev states in the bulk liquid, assumes that the pair correlations between 3He and 4He atoms are the same as those between pairs of 4He atoms, so that the cluster 4He_N 3He can be considered as a perturbation of the cluster 4He_{N+1}, the perturbation being given by the change in the kinetic-energy operator of the (N+1)-th atom. Further elaboration of the perturbation scheme results in a single-particle Schrödinger equation describing the 3He atom, with an effective potential expressed in terms of ρ₄(r) and τ₄(r), respectively the 4He particle and kinetic-energy densities, both defined in the unperturbed system with N+1 bosons. Notice that the term with the Laplacian operator acting on the square root of ρ₄ produces a strongly attractive force peaked at the surface of the cluster. In order to classify the resulting spectra, Krotscheck and Zillich [18] defined an effective wave number k = √(L(L+1))/R for each excitation characterized by the orbital angular momentum L, where R is the equivalent hard-sphere radius R = √(5/3) r_rms, defined in terms of the root mean square radius r_rms of the droplet. By plotting all excitation energies as a function of k for a large number of clusters, Krotscheck and Zillich found that all results fall reasonably well on a universal quadratic line, in nice agreement with the density-functional results. Both sets of results may be approximately pictured as a molecular rigid rotor, in which the 4He_N 3He cluster is viewed as a two-body system, the cluster formed by the N bosons plus the single fermion, tied by a spring with a rather large rigidity constant. Various angular momentum L states are associated with each vibrational-like state of the spring, obeying the law

δE_L ≈ ℏ² L(L+1)/(2I) ≡ K L(L+1).   (5)

This equation defines the rotational constant K (units of energy) in terms of the moment of inertia I, the latter being proportional to R², where R is the average distance of the fermion from the center-of-mass of the bosonic cluster. In the case of light clusters (N < 40) the calculations of Ref. [18] find appreciable deviations from the universal behavior of Eq. (5). Small 4He_N 3He clusters have been studied in Ref. [7] based on the DMC method, and in Refs. [4,5,6], without the use of the Lekner approximation. In this respect it is worth stressing that the Lekner approximation is basically a weak-coupling description of the interaction of a 3He atom with a 4He cluster, because it does not include the perturbation that the outer 3He atom should generate on the binding cluster. One may expect this picture to be satisfactory for large bosonic clusters, but inappropriate for small N. As a consequence, the interesting region to explore corresponds to the case of a small 4He cluster, which may be appreciably modified by the 3He atom.

III. GROUND AND EXCITED STATES DESCRIPTION

Carrying out the diffusion Monte Carlo calculation requires an importance sampling or guiding wave function, which incorporates as much as possible the characteristics of the system to be described. In particular, due to the strong short-range repulsion of the atom-atom interaction, it is advisable to introduce at least two-body Jastrow correlations. Moreover, the guiding wave function must confine the system and, finally, it has to include the bosonic symmetry related to the 4He atoms and the desired quantum numbers for the 3He atom. In this Section we shall describe a variational wave function which will be used to calculate the lowest-energy states for a given value of the angular momentum L of the system.
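Before specifying the wave function, a quick numerical illustration of the rotor-like classification recalled in Section II may be useful. The sketch below (Python, written for this discussion rather than taken from the paper) evaluates the rotational-band law of Eq. (5) and the Krotscheck-Zillich effective wave number; the rotational constant and rms radius used here are placeholder values, not results of the calculations reported later.

```python
import numpy as np

def rotational_energies(K, L_max):
    """Rotational-band law dE_L = K * L(L+1), Eq. (5), for L = 0..L_max (K in kelvin)."""
    L = np.arange(L_max + 1)
    return L, K * L * (L + 1)

def effective_wavenumber(L, r_rms):
    """Krotscheck-Zillich effective wave number k = sqrt(L(L+1))/R, with the
    equivalent hard-sphere radius R = sqrt(5/3) * r_rms."""
    R = np.sqrt(5.0 / 3.0) * r_rms
    return np.sqrt(L * (L + 1.0)) / R

# Placeholder inputs chosen only for illustration.
K_example = 0.05       # rotational constant in K
r_rms_example = 7.0    # droplet rms radius in angstrom

L, dE = rotational_energies(K_example, 4)
k = effective_wavenumber(L, r_rms_example)
for l, e, kk in zip(L, dE, k):
    print(f"L={l}:  dE = {e:.3f} K   k = {kk:.3f} 1/angstrom")
```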
Excited states corresponding to radial excitations will be estimated by means of sum-rule techniques.

A. Ground state and angular momentum excitations

In order to describe the system 4He_N 3He in a state where the 3He atom has an orbital angular momentum L with respect to the 4He_N system, a simple but nevertheless complete wave function is given by

Ψ = Φ_B Φ_M Φ_L(r_F - R_B),   (6)

where the subindices B and F stand for bosons and fermions, respectively, whereas M refers to the mixed boson-fermion part of the wave function. The bosonic coordinates run from 1 to N and the coordinate of the fermion is labelled by F. Finally, R_B represents the center-of-mass coordinate of the bosonic subsystem. This model wave function includes an internal bosonic part (Φ_B) and the coupling of the fermion to the individual bosons (Φ_M) as well as to the bosonic cluster (Φ_L). We have taken Φ_B and Φ_M to be of the Jastrow form,

Φ_B = Π_{i<j≤N} f_B(r_ij),   Φ_M = Π_{i≤N} f_M(r_iF),   (7, 8)

with two-body correlation functions f_B and f_M. The two-body correlation terms include a short-range part, associated with the parameters b_B and b_M, and a long-range confining part associated with the parameters p_B and p_M. The short-range part is mainly related to the small-distance behavior of the relative two-body wave function. Consequently, the parameters b_B, b_M and ν have been taken to be the same for all systems studied, and in our calculations they have been kept fixed to the values b_B = 2.95 Å, b_M = 2.90 Å and ν = 5.2, as obtained in our previous calculations for pure 4He and 3He clusters [20,21,22] by direct minimization of the expectation value of the energy. On the other hand, the long-range confining parameters p_B and p_M have been determined by means of the VMC method. In the absence of the last term Φ_L of the importance sampling wave function (6) we would describe a state of null angular momentum, explicitly translationally invariant and including the bosonic symmetry of the indistinguishable 4He atoms. The role of the last term Φ_L of Eq. (6), describing the motion of the fermion, is to determine the value of the orbital angular momentum L. It has been taken as a long-range wave function depending on the relative coordinate of the fermion with respect to the center-of-mass of the bosons, r = r_F - R_B. The explicit form used is the harmonic polynomial

Φ_L(r) = r^L Y_L0(θ, φ),   (9)

i.e., proportional to r^L P_L(cos θ). This function is particularly simple and corresponds to a state with orbital angular momentum L and null third component. One could have taken a more sophisticated form, with a radial dependence different from the simple r^L, but it is reasonable to expect the DMC algorithm to be able to improve this simple and computationally convenient form. By making this part of the trial wave function depend on the relative coordinate of the fermion with respect to the center-of-mass of the bosons, the translational invariance of the importance sampling wave function is not spoiled. Notice that if we had considered the function to depend on the distance of the fermion to the center-of-mass of the full system, the only difference would have been a trivial scale factor. With this structure of the importance sampling wave function one may describe the lowest-energy states for each angular momentum L. It should be mentioned that, apart from the case L = 0, for which Φ_L = 1, all other cases correspond to functions with a nodal surface, with nodes depending only on the angular variables. This fact has to be taken into account when using the DMC algorithm.
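To make the structure of Eq. (6) concrete, the sketch below evaluates a trial wave function of this Jastrow-times-harmonic-polynomial type for a single configuration. The two-body profile f(r) = exp[-(b/r)^ν - p r] is only an assumed McMillan-core-plus-exponential-confinement form consistent with the parameters (b, ν, p) quoted above; the paper's exact correlation functions and its values of p_B and p_M are not reproduced here, so the numbers below are placeholders.

```python
import numpy as np
from scipy.special import eval_legendre

def jastrow(r, b, nu, p):
    """Assumed two-body factor: short-range McMillan core plus exponential confinement."""
    return np.exp(-(b / r) ** nu - p * r)

def trial_wf(r_bosons, r_fermion, b_B=2.95, b_M=2.90, nu=5.2, p_B=0.1, p_M=0.05, L=1):
    """Psi = Phi_B * Phi_M * Phi_L, with Phi_L ~ r^L P_L(cos theta) and
    r = r_F - R_B the fermion coordinate relative to the bosonic center of mass."""
    N = len(r_bosons)
    phi_B = 1.0
    for i in range(N):                       # boson-boson pairs
        for j in range(i + 1, N):
            phi_B *= jastrow(np.linalg.norm(r_bosons[i] - r_bosons[j]), b_B, nu, p_B)
    phi_M = 1.0
    for i in range(N):                       # boson-fermion pairs
        phi_M *= jastrow(np.linalg.norm(r_bosons[i] - r_fermion), b_M, nu, p_M)
    r = r_fermion - r_bosons.mean(axis=0)    # relative to the bosonic center of mass
    rmod = np.linalg.norm(r)
    phi_L = rmod ** L * eval_legendre(L, r[2] / rmod)   # harmonic polynomial, M = 0
    return phi_B * phi_M * phi_L

rng = np.random.default_rng(0)
bosons = rng.normal(scale=4.0, size=(10, 3))   # toy configuration, coordinates in angstrom
fermion = np.array([0.0, 0.0, 8.0])
print(trial_wf(bosons, fermion, L=1))
```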
B. Radial excitations

When considering a subspace of angular momentum L, the DMC algorithm gives only the energy of the ground state of that subspace, and there is no information about the excited states of the same angular momentum. A way to obtain an estimate, actually an upper bound, of the first excited state is to use the sum rules method [23]. Consider the exact ground state for a given angular momentum L, represented here by Ψ_0L, and the full set of eigenstates of this subspace ordered by increasing energy and represented by {Ψ_nL, E_nL}, n = 0, 1, .... Let Q(R) be an arbitrary Hermitian operator which may depend on all atomic coordinates and which is assumed to be scalar under rotations, i.e., to commute with L and S. Let us consider the sum rule of order p,

M(p) = Σ_{n>0} (E_nL - E_0L)^p |⟨Ψ_nL|Q|Ψ_0L⟩|²,   (11)

where the sum extends to all eigenstates of the Hamiltonian but the lowest energy state of angular momentum L. This is important because, in order to obtain easily computable properties, it will be convenient to use the completeness relation. Because of the assumed properties of Q, only states with angular momentum ℓ = L will contribute to the sum. The p = 1 rule fulfills the property

M(1) ≥ (E_1L - E_0L) M(0),   (12)

from which one obtains an upper bound to the energy of the first excited state of the subspace L,

E_1L - E_0L ≤ M(1)/M(0).   (13)

The evaluation of the sum rules is simpler than it seems, because of the relations

M(0) = ⟨Ψ_0L|Q²|Ψ_0L⟩ - ⟨Ψ_0L|Q|Ψ_0L⟩²,   (14)
M(1) = (1/2) ⟨Ψ_0L|[Q,[H,Q]]|Ψ_0L⟩.   (15)

The double commutator may be simplified for a general Hamiltonian of the form given in Eq. (2), obtaining

M(1) = Σ_i (ℏ²/2m_i) ⟨Ψ_0L| (∇_i Q)² |Ψ_0L⟩.   (16)

Note that to compute these expressions one only requires knowledge of the ground state wave function of the angular momentum L subspace. This method was used in Ref. [23] to obtain upper bounds to the first L = 0 excitation, as well as to the low-lying even-L states. Given that we are obtaining the L ≠ 0 excitations directly from the DMC procedure, the sum rules method will be used here to obtain the energies of the first excited states in each L-subspace. In the Appendix we use the sum rule method to also estimate L ≠ 0 excitations based on the knowledge of Ψ_00, by relaxing the scalar character of Q, as an alternative to the direct DMC calculations. The upper bound given by Eq. (13) is a functional of the operator Q, so it may be variationally optimized by equating to zero its functional derivative with respect to Q. Unrestricted minimization gives rise to the impractical relation Q|Ψ_0L⟩ = |Ψ_1L⟩, its solution being equivalent to the solution of the many-body Schrödinger equation for the excited state. An alternative is to optimize the operator inside a restricted subspace, which is the approach followed by Chin and Krotscheck [24,25] and is closely related to the procedure of Krishna and Whaley [26]. Here we have followed a simpler procedure, based on the linear expansion of the operator in a basis of easily computable operators. To determine the basis we have assumed a single-particle-like form for the operator, by considering that it depends only on r_F - R_B, i.e., on the coordinate of the fermion referred to the bosonic center-of-mass. This simple form preserves the translational invariance and does not spoil the boson symmetry of the 4He subsystem. For this type of operator, Eq. (16) can be further simplified, since the fermion and boson gradients of Q are proportional to the same gradient with respect to the relative coordinate, and the resulting sum rule M_1 then involves a single reduced-mass-like factor. In the calculations described below, the monopole operator has been optimized by using a simple functional form depending on a few parameters, a linear combination Q(r) = Σ_m C_m Q_m(r) of easily computable radial functions (Eq. (20)).
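As detailed in the next paragraph, minimizing the bound of Eq. (13) over the expansion coefficients C_m reduces to a small generalized eigenvalue problem. A minimal sketch of that final numerical step is shown below; the Hamiltonian-like and norm matrices are filled with placeholder numbers, whereas in the actual calculation they would be Monte Carlo estimates of the moment matrix elements.

```python
import numpy as np
from scipy.linalg import eigh

# Placeholder 3x3 moment matrices; in the real calculation these would be DMC
# estimates of the Hamiltonian-like matrix H_mn and the norm matrix N_mn
# associated with the expansion Q = sum_m C_m Q_m.
H = np.array([[0.30, 0.10, 0.05],
              [0.10, 0.50, 0.12],
              [0.05, 0.12, 0.80]])
Nmat = np.array([[1.00, 0.40, 0.20],
                 [0.40, 0.60, 0.25],
                 [0.20, 0.25, 0.45]])

# Generalized eigenvalue problem H C = E N C; the lowest eigenvalue is the
# optimized upper bound to the excitation energy within the chosen operator basis.
energies, coeffs = eigh(H, Nmat)
print("optimized upper bound (K):", energies[0])
print("optimal coefficients C_m:", coeffs[:, 0])
```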
The minimization of the upper bound of Eq. (13) with respect to the coefficients C_m, for angular momentum L, gives rise to a generalized eigenvalue problem (Eq. (21)), with a Hamiltonian-like matrix (Eq. (22)) and a normalization matrix (Eq. (23)). The lowest eigenvalue of Eq. (21) provides an optimized upper bound. By inserting Eq. (20) into Eqs. (22) and (23) the matrices are further simplified (Eqs. (24) and (25)). It is worth keeping in mind that in our DMC calculations the matrix elements are evaluated as mixed estimators, so that the resulting bounds are not strictly variational. Some insight into the structure of these excitations may be drawn by considering the leading m = n = 1 terms of both the Hamiltonian and norm matrices. For this one-dimensional subspace the upper bound to the radial excitation energy is given by Eq. (26), whose denominator is the difference between the mean square radius and the squared mean radius of the fermion with respect to the boson center-of-mass. This difference will be small if the distribution of the fermion with respect to the boson center-of-mass is sharply peaked near a given value, say R_0, corresponding to a rather rigid spring. Then the denominator will be small and the radial excitation will have a large energy. The excited states considered in this section result from excitations related to the distance between the fermion and the center-of-mass of the cluster of bosons. The underlying optimal wave function (not determined, however, from the DMC procedure) should have at least one node along this radial coordinate, in order to be orthogonal to the angular momentum L ground state, and may properly be termed a radial excitation.

C. Computational details

The DMC algorithm [27,28] is nowadays a well-known and widely used technology. It is based on integrating the imaginary-time Schrödinger equation for an auxiliary function f(R, t) = Φ_var(R) Ψ(R, t), which is the product of a trial wave function Φ_var and the true ground-state wave function Ψ(R, t). The solution is given in terms of an approximate small-time Green function G(R, R', τ) by means of a series of small time steps τ. We have used the O(τ³) approximate Green function [29,30], which provides an O(τ²) approximation for the energy. In our calculations we have used a set of 1000 walkers on average, a value of τ = 0.0002 K⁻¹, 8000 iterations to settle down the system and 80000 iterations to compute the averages. For N ≥ 40 we have used a smaller time step, τ = 0.00015 K⁻¹, and the number of iterations has been doubled. The wave function is not positive definite when L ≠ 0, and this leads to the well-known and irritating sign problem of the DMC algorithm. We have used the so-called fixed-node approximation. The auxiliary function f(R, t) will remain positive if both functions Φ_var and Ψ have (at any time) the same nodal surfaces. The fixed-node approximation consists of killing any walker which attempts to cross a nodal surface. It has been shown [27,28,31] that this procedure leads to an upper bound to the ground state energy.

IV. ENERGETICS OF 4He_N 3He CLUSTERS

We have first performed a VMC calculation to determine the free parameters p_B and p_M of the importance sampling wave function. As mentioned in the previous Section, the parameters b_B, b_M and ν are fixed by the atom-atom interaction at very short distances, and do not depend on the size of the cluster. On the contrary, the long-range parameters p_B and p_M are very sensitive to the number of bosons. Their values, determined by minimizing the ground state expectation value of the Hamiltonian, are reported in Table I.
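As a brief aside before discussing the entries of Table I: the toy sketch below illustrates the importance-sampled DMC loop described in the computational details above, namely drift plus Gaussian diffusion over a small time step τ, branching according to the local-energy weight, and the fixed-node rule of discarding walkers that attempt to cross a node of the trial function. It uses a generic one-dimensional trial function purely to show the structure of the algorithm; it is not the production code used for the helium clusters.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_energy(x):
    """Local energy for the toy trial function psi_T = x*exp(-x^2/2) in a harmonic
    well (hbar = m = omega = 1); it is exactly 1.5, the first odd-parity level."""
    return 1.5 + 0.0 * x

def drift(x):
    """Drift velocity F = d(ln psi_T)/dx for psi_T = x*exp(-x^2/2)."""
    return 1.0 / x - x

def dmc_step(walkers, e_ref, tau):
    old = walkers
    new = old + tau * drift(old) + np.sqrt(tau) * rng.standard_normal(old.size)
    keep = np.sign(new) == np.sign(old)               # fixed-node rule: kill node crossers
    new = new[keep]
    w = np.exp(-tau * (local_energy(new) - e_ref))    # branching weight
    copies = (w + rng.random(new.size)).astype(int)   # stochastic replication/killing
    return np.repeat(new, copies)

walkers = rng.normal(loc=1.0, scale=0.3, size=1000)
e_ref, tau = 1.5, 0.001
for _ in range(2000):
    walkers = dmc_step(walkers, e_ref, tau)
    # Simple population control through the reference energy.
    e_ref = local_energy(walkers).mean() - 0.01 * np.log(walkers.size / 1000.0)
print("estimated fixed-node energy:", local_energy(walkers).mean())
```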
One can see that the values of these parameters decrease when the number of bosons increases, as should correspond to a drop whose size grows with the number of constituents. Notice that, for any given number of bosons, the parameter p_M, controlling the boson-fermion distance, is significantly smaller than the parameter p_B controlling the boson-boson distance. This reflects the fact that the light particle stays near the surface of the bosonic drop. In Table I the ground state energies obtained within the VMC calculation as well as with the improved DMC method are also displayed. The latter are significantly lower than the former, thus revealing that the variational trial functions are not of high quality. They could be improved by adding either medium-range terms to the two-body Jastrow correlation or three-body Jastrow correlations. Notice that improving the importance sampling wave function will not affect the DMC results of Table I except for the statistical error. Nevertheless, it could lead to better upper bounds for the L ≠ 0 energies. The statistical errors of the DMC energies grow steadily with the number of bosons. As a consequence, the determination of the excitation energies becomes less and less accurate for large N. The excitation energies E_L are obtained as the difference of two independent calculations, one for L = 0 (E_00) and the other for the desired value of the angular momentum (E_0L). These two energies have very close values, thus magnifying the statistical error of their difference. As a consequence, a direct plot of the excitation energies shows fluctuations. In order to control them we have fitted all cases between N = 30 and N = 50 with a liquid-drop-like formula, i.e., a third-order polynomial fit in terms of the variable N^(1/3). The 3He chemical potential, or 3He dissociation energy, is defined as μ_F = E(4He_N) - E_00(4He_N 3He); it corresponds to the energy required to eject the 3He atom from the mixed system in its ground state (L = 0). According to this definition, μ_F is a positive quantity, and its value is relevant because the states whose excitation energy lies above it are not bound. To control the statistical fluctuations in μ_F we have again fitted the raw differences with a liquid-drop formula. In Fig. 1 the raw DMC results for the chemical potentials and the excitation energies are plotted, together with their respective fits as described above. For L = 4 there were too few points to carry out that fit, and we have only plotted the raw DMC results. The corresponding values for bound levels are displayed in Table II for excited states with L = 1 to L = 4 and for systems with different numbers of bosons. The total energy of the ground state (E_00) has already been quoted in the last column of Table I. The excitation energies displayed in Table II are the raw DMC differences for systems with N < 20 and the results of the least squares fit otherwise. The excitation energies of radially excited levels obtained with the sum rules method described above are shown in Fig. 2 for L = 0, 1, 2. These levels are close to the dissociation limit, above it for N < 20 and below it afterwards, for L = 0, 1. The radial excitation for L = 2 is always above the dissociation limit, with the exception of N near 50, which signals the threshold for the binding of this level.
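The smoothing used above for the excitation energies and the chemical potential, a least-squares fit of the raw DMC differences with a third-order polynomial in N^(1/3), can be sketched as follows; the input values below are made-up numbers standing in for the raw excitation energies, not the data of Table II.

```python
import numpy as np

# Made-up noisy "excitation energies" standing in for the raw DMC differences E_0L - E_00.
N = np.arange(20, 55, 5)
rng = np.random.default_rng(2)
raw = 0.9 - 2.2 / N ** (1.0 / 3.0) + 1.6 / N ** (2.0 / 3.0) + 0.01 * rng.standard_normal(N.size)

# Liquid-drop-like smoothing: cubic polynomial in x = N^(1/3),
# E(N) ~ a0 + a1*x + a2*x^2 + a3*x^3, fitted by least squares.
x = N ** (1.0 / 3.0)
coeffs = np.polyfit(x, raw, deg=3)
smooth = np.polyval(coeffs, x)
for n, r, s in zip(N, raw, smooth):
    print(f"N={n:2d}  raw = {r:.4f} K   fit = {s:.4f} K")
```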
One should keep in mind that the upper bound character of the excitation energies, as expressed by Eq. (13), is not strictly satisfied in the present calculations.

TABLE II: Excitation energies E_L, in K, of 4He_N 3He clusters for L = 1-4 and N = 5-50 in steps of 5 atoms. The values quoted are the result of the least squares fit described above for N ≥ 20 and raw DMC results for N < 20, with the exception of the L = 4 column, which contains the raw DMC results. The last column displays the 3He dissociation limit.

The bound level spectra resulting from our calculations are collected in Fig. 3. The bound levels are grouped by the value L of the angular momentum, indicated on the right side of the figure, with the symbol 0* signaling the radial excitations of the ground state. As mentioned in Section II, one could expect these spectra to look like a series of rotational bands, with the excitation energies roughly proportional to the value L(L+1), where L is the angular momentum of the excited level. That is to say, the quantities which correspond to the rotational constants (cf. Eq. (5)) should be expected to depend only on N, and not on the angular momentum L. This is indeed the case, as shown in Table III. The values of the rotational constants smoothly decrease as N increases, as expected because of the dominant dependence on 1/|r_F - R_B|².

V. STRUCTURE OF 4He_N 3He CLUSTERS

Complementary information about the nature of the excitations is provided by the density distributions of the fermion with respect to the center-of-mass of the system. Given that we are dealing with non-zero angular momentum states, to simplify the presentation we show them in Figure 4 for the bound states of systems with N = 10, 20, 30, and 40. No plot of radially excited states is given. Table IV lists the values of the root mean square radii of bosons and fermions with respect to the center of mass. Only the ground state boson radius has been displayed in this table, since it is almost independent of the angular momentum L. The boson radius grows monotonically with the number of bosons, following a rough N^(1/3) law, as could be expected. The fermion radii are given for the states with angular momentum from L = 0 to 4. At fixed L, the fermion radius increases smoothly with N, except for the lowest value, which signals the threshold of stability. In that case the fermion radius may be abnormally large, indicating that the system is only slightly bound. A striking case is N = 30, L = 4, due to its very large fermion radius. This level is almost surely unbound, and the DMC algorithm ejects the fermion far away from the center-of-mass of the drop; eventually the 3He atom would move to infinity if the random walk were long enough. The picture which emerges from the density plots in Fig. 4 and from the values of the radii in Table IV is the expected one. The fermion is always located at the surface of the boson cluster, and increasing the value of the angular momentum causes the fermion to move away from the boson cluster. To ascertain the goodness of the Lekner approximation we have plotted in Fig. 5 the boson distributions corresponding to two pure 4He systems with N and N+1 atoms, together with the boson distribution for N bosons plus one fermion. The distributions corresponding to N = 40 are almost superimposed, thus revealing the rigidity of the bosonic core with respect to the addition of a fermion. On the other hand, there are sizable differences between the three distributions for N = 10, revealing the effect of the dopant on the bosonic cluster.
In other words, there is a weak coupling regime (the Lekner approximation) for large N but a strong coupling regime for light drops.

VI. CONCLUSIONS

We have performed DMC simulations, in conjunction with the sum rule method of Ref. [23], to compute the excitation spectrum of a 3He impurity in 4He clusters. The rotational levels, namely the lowest energy levels within each angular momentum L subspace, have been computed by including in the guiding function a term Φ_L(r_F - R_B), generating an eigenfunction with good angular momentum quantum numbers. The radial excitations, on the other hand, have been estimated by computing an optimized upper bound obtained with the sum rules of order 0 and 1. Important results have been obtained for the shell ordering of the 3He orbitals. First of all, the excitation spectrum contains a limited set of bound excited levels, whose number increases with the number N of bosons. Indicating the n-th radial excitation of the state with angular momentum L by the notation (n+1)L, and using the usual spectroscopic letters for the values of L, we find the excitation energies to follow a rotational spectrum, thus suggesting a shell ordering of levels 1s 1p 1d 1f 1g .... Starting at N ≈ 20, the 2s radially excited state appears as an intruder within the rotational band. Presumably, at a larger value of N, the 2p radial excitation will appear, and so on. The obtained level ordering is different from both the 1s 1p 1d 2s 1f 2p ... ordering of the three-dimensional harmonic oscillator and the 1s 2s 1p 3s 2p ... ordering typical of atoms. This ordering of levels should be taken into account especially when dealing with mixed drops with a number of bosons much larger than the number of fermions. Note, however, that whereas the DMC algorithm is able to improve the quality of the model or importance sampling wave function as far as the bosonic correlations are concerned, with respect to the fermionic part it will maintain the structure of the nodal surfaces. There remains, however, an important question, namely the relevance of these results for pure fermionic systems or for mixtures with a comparable number of bosons and fermions. The fermions are expected to play a double role: on the one side, they create some kind of self-consistent central field, analogously to the bosons, and on the other side they are subject to the effects of the Pauli principle. So, in a first approximation, one may assume that the level ordering of such systems is close to that of the system considered here.

APPENDIX A: THE SUM RULES METHOD APPLIED TO EXCITATIONS OF ANGULAR MOMENTUM L

We have described above the use of the moment method to compute the radial excitation energies. The method may also be used to determine upper bounds to the excitation energies of states with angular momentum L, as we shall show in this Appendix. The resulting information complements the one obtained directly by the DMC method, and is particularly relevant for the cases L = 1, 2, where the direct calculation of the excitation energies is affected by a rather large relative error, being the difference of two large quantities, especially for large values of N. To obtain upper bounds to the excitation energy of a state of angular momentum L ≠ 0 it is convenient to use for the operator Q a form which behaves as an angular momentum tensor of rank L. A simple way is to multiply the rank-L harmonic polynomial of the relative coordinate r_F - R_B by a radial function f(|r_F - R_B|). For the function f we have considered a power expansion, with parameters C_n to be determined after optimization of the upper bound.
To fulfill the requirements which lead to Eq. (13) we should consider the Hermitian part of this operator. In fact, as we are interested in excitations of angular momentum L, irrespective of the value of the projection of the angular momentum along some fixed axis, a linear combination like Q̃^(L) = (Q^(L) + Q^(L)†)/√2 will be adequate. This combination has the advantage that the sum rules M_0 and M_1 are expressed by Eqs. (14) and (16), replacing the operator Q in these expressions by either Q^(L) or its Hermitian conjugate Q^(L)†. As in the case of radial excitations, we end up with a generalized eigenvalue problem for the required matrix elements of the moment operators; the leading term of the resulting bound recalls the naive rotational model. It is worth mentioning the difference between this bound for angular excitations, Eq. (A8), and the bound obtained for radial excitations, Eq. (26), which shows up specifically in the denominator. In Table A the values obtained for these upper bounds, after solving the generalized eigenvalue problem, are given; they are quite close to the DMC excitation energies displayed in Table II. As in the case of radial excitations, the sum rules have been calculated by means of mixed matrix elements, so that they are not strictly variational.
2019-04-14T02:04:29.447Z
2004-01-27T00:00:00.000
{ "year": 2004, "sha1": "0f20034e04abe28e29488fb8663d918b491defd9", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0401542", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "64af001c3f7f6ac7afaa31c52464deda85504df7", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
55507367
pes2o/s2orc
v3-fos-license
Shock Induced Symmetric Compression in a Spherical Target

Shock induced symmetric compression has been studied in a spherical target. The shock-driven interfacial radius shrinks and reaches a minimum during the implosion stage. After implosion, however, the plasma tries to expand in the blow-off/explosion stage and, as a result, the interfacial radius increases. Effects of plasma parameters like density and temperature have been studied numerically. It is seen that the density increases many times, due to mass conservation, during the implosion of a compressible shell as in ICF. The temperature changes rapidly due to the change of the inner density, and so does the pressure of the compressible fluid, following the adiabatic law. Our analytical results agree qualitatively with simulation results in spherical geometry and also with experimental observations conducted in a cylindrical container.

Introduction

The dynamics of the interfacial radius of an imploded spherical shell, as in ICF, is an interesting problem for the fusion community. The motion of the interface of two fluids and the effects of plasma parameters like density and temperature on that motion have been studied both analytically and numerically. Such a study can explain underwater explosions [1] or determine the time taken to fill a cavity suddenly formed inside an incompressible fluid [2]. Stability conditions of symmetric compression for incompressible and immiscible viscous fluids were discussed earlier by Plesset [1], considering a source and sink at the origin of the spherical shells. A linear perturbation method shows that the thermal electron conduction may be symmetric in laser-driven fusion [3] [4]. In both cases, analytical results predict that a spherical shell will shrink depending on the pressure difference at the fluid interface [5]. Implosion dynamics has also been studied in spherical targets considering the solid compressibility and the mass loss by ablation [6]; later, the two-layer spherical shell target implosion was described considering a single pressure [7]. Mikaelian [8] considered only radial incompressible flow to explain RT and RM growth in a cylindrical target. Later, he pointed out that a smooth density gradient at the interface could stabilize the fluid instabilities, after considering a constant steep density gradient at the multiple-shell interface [9] in ICF-related problems. The problem of implosion and explosion has also been studied in spherical geometry [10] [11]. In ICF, deuterium-tritium fuel is kept in a microballoon surrounded by a foam-like target. The laser shock immediately converts the target into a plasma, which tries to expand, whereas the shock impinges on the lighter DT fluid inside. At this stage hydrodynamic instabilities arise, which oppose efficient compression. There are several reasons to consider the potential flow model to describe the motion of the interface. Hwang et al.
considered a potential flow model for an incompressible fluid in a hydromagnetic situation to study the linear behavior of the dynamic effects of plasma in cylindrical geometry [12]. The problem of implosion in cylindrical geometry under a variety of physical conditions has recently been investigated both experimentally and numerically [13]-[15]. The results of these studies show that during a laser-induced implosion (deceleration phase), the interface of the cylindrical/spherical target is compressed and the radius of the cylinder/sphere decreases; consequently, the density of the internal plasma increases [16] [17] and reaches a maximum value for a critical value of the two-fluid interface radius. This is followed by a blow-off stage in which the interfacial radius expands with an associated lowering of density. These features characterize the compression in the ICF process. The interfacial radius shrinks due to shock impingement at the interface and reaches a minimum during the implosion stage. After implosion, however, the plasma expands in the blow-off/explosion stage and consequently the interfacial radius increases.

In this paper, we show that the interfacial radius shrinks due to the shock impingement producing the interfacial velocity, and that the density increases many times due to the mass concentrated during implosion of a compressible shell as in ICF. The temperature of the inner shell also changes, obeying adiabatic compression for a very fast process. Our analytical results are found to agree qualitatively with recent simulation and experimental results obtained in cylindrical geometry. Also, in spherical geometry, during compression the density of the DT fuel in the compressed shell within the core of the spherical target increases many times relative to the density of the DT gas [18] [19].

The paper is organized as follows: in Section 2 we describe the geometry and physical situation of the problem with analytical results; numerical results and their discussion are given in Section 3.

Formulation of the Problem: Spherical Target

In this paper, we assume a model which focuses on the post-shock phenomena of interface motion during the implosion-explosion situation and the associated changes of density and temperature. Consider two concentric spherical regions containing two different fluids of different densities: a fluid of density ρ₁ inside the interfacial radius R(t), and a fluid of density ρ₂ outside it. The inner shell with radius R(t) is set in motion by the impingement of a shock. We are interested in the dynamics of the interface leading to the consequent compression of the contained fluid, and in the effects of plasma parameters like density and temperature on the dynamics (Figure 1). Assuming a mass conservation relation for the inner fluid, we can write for the fluid density ρ₁: ρ₁(t) R(t)³ = ρ₁(0) R(0)³. We also have the equation of continuity, ∂ρ/∂t + ∇·(ρv) = 0, where v is the interface velocity after the passage of the shock.
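Mass conservation for the inner fluid ties the density amplification directly to the radius ratio; a two-line check of this scaling (assuming a uniform inner density, as in the relation written above) is:

```python
def density_amplification(R0, R):
    """rho_1(t)/rho_1(0) = (R0/R)^3 for a uniform inner fluid of conserved mass."""
    return (R0 / R) ** 3

# Example: compressing the interface to one third of its initial radius.
print(density_amplification(1.0, 1.0 / 3.0))   # -> 27.0
```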
Assume that the impingement of the shock gives rise to uniform compression from all directions, so that the above equation may be considered independent of the angular variables. In spherical coordinates, Equation (2) then reduces to a purely radial equation (Equation (3)); solving it, one obtains the expression of the velocity field (Equation (4)). Using the boundary condition at the two-fluid interface, r = R(t), where Ṙ is the shock-generated interfacial velocity, Equation (4) gives the velocity of the interface. Now, assuming potential flow, we can write v = ∇φ₁ and obtain the corresponding potential (Equation (5)); since there is no source or sink of velocity at the origin, the integration constant can be neglected. Next, we employ Bernoulli's equation for the compressible fluid in the inner shell, where R₁ is the minimum value that R(t) can reach and P_l is the internal pressure of the lighter fluid. To calculate the right-hand-side term of this equation, we assume adiabatic compression, i.e., P ∝ ρ^γ for a very fast process as in the ICF situation. After some straightforward algebraic calculation, and keeping terms up to third order of the relevant ratio, we arrive at Equation (10), where P₁ and p₁ are the pressures at r = R₁ and r = R, respectively. The outer shell fluid, on the other hand, satisfies Laplace's equation, ∇²φ₂ = 0 (Equation (11)). Using the boundary condition at the interface, the solution of Laplace's Equation (11) is obtained (Equation (13)). Bernoulli's equation for the fluid with density ρ₂, using φ₂ as given by Equation (13), then gives Equation (15), where P₂ and p₂ are the pressures at a very large distance from the interface and at r = R, respectively; higher-order terms have been neglected. Combining Equations (10) and (15), we arrive at the equation of motion for R(t) (Equation (16)), which describes the evolution of the interface and the other parameters. Equation (16) can be integrated once to obtain a relation between Ṙ and R (Equation (17)); the numerical calculation completely agrees with this analytical result.

Numerical Results and Discussions

To solve Equation (16), we first make the equation dimensionless and write it as a set of first-order equations (Equations (18) and (19)) in terms of normalized variables. We then solve the set of Equations (18) and (19) using the Runge-Kutta-Fehlberg technique and plot the interface radius and its velocity. The interfacial radius reaches a minimum point near the origin of the target during the deceleration phase; this is the implosion situation. Mass conservation suggests that, since the volume is reduced, the density increases and reaches a maximum corresponding to the minimum of the interfacial radius. The plasma then tries to expand in the acceleration phase, so the interfacial radius increases with time; this is the explosion situation. At the instant the interfacial radius reaches its minimum, the density attains its maximum value, and the temperature follows from the adiabatic relation (γ being the adiabatic index). It is to be noted that the time dependence of the temperature thus exhibited also agrees qualitatively with experimental and simulation results in cylindrical geometry [15].

Figure 1. Geometry of the problem.
Figure 3. (a) and (b) represent the nondimensional interfacial radius and its velocity, respectively.

The variation of the interfacial radius is shown in Figure 3(a). The velocity changes like a shock during the transition from the implosion state to the explosion state, as shown in Figure 3(b). It is to be noted that our uniform compression model shows good qualitative agreement with the corresponding experimental and simulation results for both cylindrical [13]-[15] and spherical geometries [18] [19]. However, there may be more complicated physics behind the compression during the stagnation time. Finally, if we assume that the compression of the inner spherical shell fluid occurs adiabatically and that there is little or no conduction of heat across the interfacial radius R(τ), the shell fluid temperature follows the adiabatic relation, with γ the adiabatic constant, and its time evolution can be obtained from that of the density.

Figure 4. (a) and (b) represent density and temperature variations, respectively. Initial conditions are the same as those in Figure 1.
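The numerical procedure described above (casting the interface equation as a set of first-order ODEs and integrating with an adaptive embedded Runge-Kutta scheme of the Fehlberg type) can be illustrated with the sketch below. Since Eq. (16) is not reproduced here, the sketch integrates a standard Rayleigh-Plesset-type interface equation with an adiabatic internal pressure as a stand-in, using scipy's RK45 integrator (a Dormand-Prince rather than a Fehlberg pair) and arbitrary nondimensional parameters; it is only meant to show the implosion-then-rebound behaviour of the interfacial radius.

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma = 5.0 / 3.0   # adiabatic index of the inner fluid
p_inf = 50.0        # nondimensional outer (driving) pressure
p0 = 1.0            # initial nondimensional inner pressure

def rhs(t, y):
    """First-order system for (R, Rdot) of a Rayleigh-Plesset-type equation:
    R*Rddot + 1.5*Rdot^2 = p_in(R) - p_inf, with p_in = p0 * R^(-3*gamma)."""
    R, Rdot = y
    p_in = p0 * R ** (-3.0 * gamma)
    Rddot = (p_in - p_inf - 1.5 * Rdot ** 2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rhs, (0.0, 0.5), [1.0, 0.0], method="RK45", max_step=1e-4)
R = sol.y[0]
i_min = R.argmin()
print(f"minimum radius {R[i_min]:.3f} at t = {sol.t[i_min]:.4f}; "
      f"density amplification ~ {(1.0 / R[i_min]) ** 3:.1f}")
```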
2018-12-07T01:27:11.165Z
2015-10-15T00:00:00.000
{ "year": 2015, "sha1": "7502e035eee99d10ac20688ce7ad15afb9704304", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=60290", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "7502e035eee99d10ac20688ce7ad15afb9704304", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
236611612
pes2o/s2orc
v3-fos-license
Effect of demonetization on microfinance industry in India High-value denomination currency was scrapped off from the Indian economy on 8th November 2016, Rs. 500 and Rs. 1,000 currency notes were declared invalid at midnight hour. Taking such a bold step by Hon. Prime Minister of India Mr. Narendra Modi, means declaring 86% of cash in circulation an illegal tender overnight. This study is conducted on the immediate effect of demonetization on the microfinance industry in India and its after-effects in the later years. We have analyzed the data collected from the rural sector, on the collections and lending, impact on the farmers, how it has affected the small microfinance companies. We have also analyzed RBI’s initiative towards financial inclusion, Pradhan Mantri Jan Dhan Yojana (PMJDY), the emergence of new Small Finance Banks, Digital Banks, Payment banks, etc. We have critically analyzed the impact of cost versus the benefit from the demonetization exercise for the rural and microfinance industry. The aim of this study is to analyze the overall financial inclusion of the rural and small sectors towards the dream of a Cashless Economy. © 2021 by the authors. Licensee SSBFNET, Istanbul, Turkey. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). Introduction On 8 th November, 2016, the Prime Minister of India made an important announcement, which changed not only the course of the financial industry, but the overall economy of the country. The Government declared ₹500 and ₹ 1,000 denominated currency as invalid overnight. Demonetization was not new to Indian economy; similar measures were taken twice in the recent past. The government had not just one reason to declare its highest currency of ₹ 1,000 and ₹500 to be invalid. One of the main reason for demonetization is to curb black money, and control terrorism, inflicting losses to the criminals having black money in high denominated currencies. Another main reason for demonetization was to bring India close to a digital and cashless economy. Ewallets and digital payments apps like Paytm, have made tremendous growth post announcement of demonetization, whereas other sectors like Microfinance Institutions (MFI) and agricultural sectors faced a major blow during this period. Microfinance industry (MFI) is deeply rooted in the semi-urban and rural parts of the country, and most of the people using microfinance industry belong to the uneducated mass, who know nothing about the use of computers and digital banking. MFIs do collection on daily, weekly or monthly basis, and that too majority of their collections are in small cash amounts. demonetization had an immediate adverse effect on the economy. There was liquidity crunch among the people, especially rural segment of the society, who are nascent to banking system and deal with cash only. The worst affected were Non-Banking Micro-Finance Industry, however they were rewarded later with banking license from RBI. Business Standard reported based on data published by Microfinance Institutions Network (MFIN), that loan Portfolio at Risk (PAR) for delinquencies of more than 30 days (PAR>30) was almost 6% at the end of December 2017, up 0.3 % against December 2016, soared to as high as 14.1% by the end of march 2017. In this study we have done extensive study on cause, effect and benefits of demonetization on the future of Indian economy. 
The objective of this study is to understand the challenges faced by microfinance industry in India, post demonetization. We have also studied the various IJRBS VOL 10 NO 3 ISSN: 2147-4478 353 challenges faced in financial inclusion and challenges faced by MFIs after demonetization. Also there is an effort to understand the impact on lending by MFIs during the demonetization period. Theoretical Background Related studies show familiarity with the body of knowledge and gives an idea of studies already conducted in the area. Our study is to analyze the impact of demonetization on micro-finance sector, however we have studied the existing literature by Rastogi (2018) who has justified the timing of demonetization, because the economy was performing well, it was capable of handling short term shocks. He also discussed the reasons and impact of demonetization on black money, fake currency, terror financing, note bank politics, economic growth of India. Bansal, N (2019) in his study on impact of demonetization on banking sector has given a very positive findings, he suggested that demonetization has reduced black money circulation in the economy, and in long run it will improve the fiscal deficit, control inflation, reduce corruption, eliminate counterfeit currency, improve tax compliance and overall lead to a sustainable and stable economy. Kamala Devi, L.K., & Rajavalli Devi, L.K. (2018) has shown the positive aspect of demonetization by including through various financial inclusive schemes for the poors and the deprived. In their research, they have shown that number of digital transactions post demonetization has significantly increased in the rural areas, the number of digital payment systems like NEFT, IMPS, PPIs, Mobile Banking has shown significant increase in post demonetization. More people have started using more of financial services after demonetization. There has been huge shift from non-user to first time users increasing the financial inclusion in the country. Radhika et all (2017) have done an extensive study on the immediate impact of demonetization on the micro-finance sector, and its scope of financial inclusion in long run. They have concluded that although demonetization has had a very hard-hitting impact on the growth of Indian economy from 9.7% to 5.7% within one year, however, it's a short-term impact, paving way for a long run benefit of financial inclusion through bringing the economically backward class of people to the mainstream of digitalization and use of digital banking, mobile wallets. Micro-Finance Institution (MFI) were very unorganized and carry the risk of holding cash, but post demonetization the MFIs are more organized, transparent and using modern technologies for operating the company. Taruna & Kumar, N (2017) have shown the immediate impact of demonetization on the rural sector of Lucknow. They reasoned for an adverse effect of demonetization on the poors and came up with an explanation that it was all because of improper implementation of such a beautiful policy. The long-term benefits are yet to be seen, but the poors of rural areas are very badly impacted, because they were only dependent on cash currency for their day-to-day work. When Taruna et all projected that only poors of rural parts were very badly impacted, Sinha, A & Rai, D (2016) said that all strata of life came to a standstill, by withdrawing 86% of circulating currency with a blink of time. 
They too blamed for inadequate and improper planning on government's part for implementing demonetization. Shahare V. B. (2017) emphasized in his research for the need of demonetization to fight black money, however he blamed for the lack of proper planning and inadequate measures for implementation of this policy. He said that agrarian economy, which is majorly dependent on cash and is way far from understanding the concept of digital currency, had to face the roughest challenge post demonetization. He proposed to strengthen and empower the Microfinance sector with digital payments mechanism. Use of government schools like Anganwadi, colleges, panchayats to create awareness among the rural section and deprived section of the society, about use of digital banking and payment mechanism. Krishnan, D & Siegel, S (2017) did a survey in 28 slum or lower income neighborhoods in Mumbai. According to their findings, post demonetization lead to drop in average income level of 10% in the month of December, 2016. This drops in income lead to further drop in consumption as well as change in family's savings. Also, an increase in banking transactions increased with corresponding decline in use of cash for day-to-day transaction. Abhijeet Banerjee et al (2019) did an experimental research on how should information be disseminated to large populations, especially during the chaotic 2016 demonetization. They varied how information about the policy was delivered to the villages along two dimensions, and how people were initially informed (broadcasting vs seeding) and whether the identities of the initially informed were publicly disclosed (common knowledge). They concluded through the field experimental study, that broadcasting message could be useful for simple messages, but for complex message like digital banking and cashless economy, seeding is the best option for spreading information. This research is based on secondary data collected from various sources like indexed journals, books, published articles on news website, RBI website and other government websites. This research has also analysed some observations and interactions with people in the aftermath of demonetization. The period of the study is in between November 2016 to October, 2019. Demonetization and micro-finance industry Microfinance is aimed at providing banking and financial services to the ones, who are not eligible access conventional banking and financial services in an economy. It is promoted by the central bank with the aim of financial inclusion of the weaker and deprived section of the society. By integration of micro-finance industry with conventional banking in the economy, Reserve Bank of India (RBI) aims to build a strong, robust and efficient financial system in the country. In today's scenario, Microfinance Industry (MFI) holds significant share of economy, and comes under the regulations of RBI. As per the data from Care Ratings (2017), there are 71 MFIs registered with RBI. Just a month before Demonetization, the gross total loan portfolio of Non-Banking Financial Corporation (NBFC-MFI) stood at Rs. 55,254 crores with more than 99% of collection rate. With the government announcing demonetization in November, 2016, the microfinance industry witnessed a major blow to its collections, and loan repayments. NBFC-MFIs comes second largest source of micro-credit in the country. A month after announcement of demonetization, the number total loan disbursed by NBFC-MFI dropped by 26% in the quarter ending 30 th December, 2016. 
The decline trend continued to the next financial year 2016-2017, the decline was witnessed in new customer enrollment, loan disbursal and repayment rates. Challenges faced by MFIS post demonetization Demonetization may have a lot of long-term benefits and it can be a key drive for creating a digital and cashless economy in long run. However, one cannot deny from the fact that it had a devastating effect on the economy in the short term, especially the microfinance and rural economy. Since most of the MFIs are in rural and sub-urban areas, people deal more with cash than online or digital transactions. Typically, the MFIs have a above average repayment rate of 99%, however, post demonetization it fell almost 12% below the average repayment rate. Also, there was a sharp increase of 7% to 10% in Non-Performing Assets (NPAs) in MFIs. According to ICRAs estimates, the MFIs raised to be an estimate of Rs. 5,500 crores in first six months of FY17, however it was limited to only Rs. 1,650 crores in second half of the year. The negative impacts of the demonization are as follows: Impact on collections As already discussed, MFIs are located in rural and sub-urban areas of the country, where people have less of digital literacy, and they prefer cash over digital banking for their financial transactions. MFIs primarily dealt with cash, lending and recovery of loans were done in cash. Because of 86% of cash being sucked out of economy during demonetization, MFIs struggled to recover their loans, hence their MFIs witnessed drastic decline in recovery rates immediately after demonetization. Source: CARE Ratings; Note: Data compiled for 11 NBFC-MFIs We can see from the table that the collection ratio declined to 80% in the first couple of weeks post demonetization, but when the RBI announced the release of new currency in the market, in the fourth week, the collection ratio increased. However, we can see that in the subsequent weeks, there is a decline in collection ratios. We can see in the following table, the drop-in collection ratio from November, 2016 to December, 2016. Karnataka, Bihar, Assam, Chhattisgarh, Tripura, Meghalaya, Chandigarh, Tamil Nadu, Puducherry, Odisha, Goa and Jammu & Kashmir. From the above table from CARE Ratings, dated from November, 2016 to December, 2016, we can clearly see that the collection efficiency has drastically drifted downwards, especially for the North Indian States like Uttarakhand, Uttar Pradesh, even Delhi has shown a collection efficiency of 73.50% which is very low as compared to previous average of 99%. Overall collection efficiency has come down to 84.43 % in the month post announcement of demonetization in November, 2016. Impact on lending There were tremendous efforts for cashless lending in MFI industry, however, since the digital literacy was lacking among the rural parts and among majority of the poors of the society. Hence it was very difficult for the NBFC-MFIs to disburse the loan amount to its customers. One of the reasons for further decline in the loan disbursement was the withdrawal limit imposed during the first few weeks of currency ban. Because of withdrawal limits on current accounts of MFI, it was difficult for them to disburse the loan amount to its customers. Another reason for increased impact on lending, is that NBFC-MFIs have focused more on collections than on lending during the study period, hence it was impacted. 
Impact on farmers Indian population has a strong presence in rural areas, as per the census of 2011, almost 70% of India's population lives in rural areas. Agriculture being one of the key contributors to the GDP of the country, with a 17% contribution of GDP, it can be called as the backbone of the country. As per the Planning Commission of India, 2014, Agriculture and allied sector employs 49% of the workforce of the country. This industry is mostly dealing in cash-based transactions, with higher denomination currency being sucked out of circulation, this sector was highly impacted. Poor people who had no or limited knowledge and access to banking and post office faced the most brutal impact of this policy. Farmer's Suicide due to the incapability to repay MFI loans has been a major concern in the country, especially in Southern part of the country. MFI has been strict on its collection policy, sometimes farmers have to borrow money at higher interest rate to repay off MFI loans. Positive impacts of demonetization on MFIs Despite the number of challenges faced by Micro Finance Industry as a whole due to demonetization, however in long run, it is the MFIs only who will benefit the most due to digital receipts and payments. The borrowers will now be forced to use the conventional banking system, then relying on cash-based transactions. Hence, they would be inclusive with the rest of the country in the cashless and digitalization objective of India. More bank accounts were opened under Pradhan Mantri Jan Dhan Yojana (PMJDY), hence supporting the financial inclusion objective. This acted as bridging the gap between the illiterate poors and the advanced banking system of the country. MFIs has been trying to increasingly use the Jan-Dhan accounts for easy and cash-less disbursements and collections. One of the boost given to the MFI industry, is granting license of Small Finance Bank License to eight of the top MFIs of the country. Which will help this industry to have their own cash out points reaching out to its customer for enhanced usage of their accounts. This step is seen as a major shift to the Micro Finance industry, from cash based to the new-age digital disbursements and collections. There have been numerous efforts by MFIs to educate and aware its customers to make use of banking system and be a part of the conventional banking system. Conclusions After demonetization, the NBFC-MFIs are increasingly looking for cashless disbursement and collection through Jan-Dhan Accounts (opened under PMJDY) and by leveraging technology. With new small bank license awarded to eight of the top NBFC-MFIs, it is going to revolutionalise the entire MFI industry. Cashless disbursement and collection will further fulfil the goal of financial inclusion, by bringing every section of the society into the mainstream banking and financial system. MFI is also actively conducting awareness program through its center and group meetings about the new age banking, and also about the credit profile and its implication on their credit capacity. Those MFIs which have higher financial leverage and have low collection efficiencies are expected to face deterioration in their credit profile in future. From our study, it is clearly evident that Demonetization had an immediate devastating impact on the rural economy, be it rural agriculture economy or micro finance industry. Almost paralyzing the financial transactions in the months just after demonization was announced in November, 2016. 
The collections were impacted, witnessing a sharp fall in the collection rate to 80% from the previous 99%. The disbursement of loans was also impacted, because of the lack of preparedness of the NBFC-MFIs to handle a situation in which high-denomination currency was suddenly withdrawn from the market. The farmers, who are mostly financially illiterate and have little knowledge of banking transactions, faced the most brutal blow; they simply did not know how to fund their agricultural spending with 86% of the currency declared invalid overnight. Despite its immediate problems, demonetization has been claimed to have worked as a stepping stone toward digitalization and a cashless economy. Millions of financially excluded people now have bank accounts under PMJDY, thereby strengthening the goal of financial inclusion. One of the biggest beneficiaries after demonetization is the microfinance industry: eight large MFIs were given licenses by the RBI to start new Small Finance Banks. MFIs have also invested in digitalization, introducing more cash points, online disbursement of loans and acceptance of direct repayments from customers online, which has increased transparency and customer trust. Demonetization may have caused problems at the initial stage, due to lack of proper planning and implementation, but it has laid the foundation for a progressive and strong economy. The use of financial services has dramatically increased, with greater participation in banking services by people from all walks of life. Research and studies suggest that it will help the financial services industry to increase liquidity, reduce its NPAs and expand the use of digital technology in banking and financial services, thus enhancing efficiency and profitability, marking a fundamental shift in the formal financial sector after demonetization. This step is seen as being as important as the nationalization of the banking sector decades ago. It is expected that in the next ten years, small finance banks and microfinance banks will dominate the banking sector in rural India and will be among the most important players in the overall development of the economy.
2021-08-02T00:06:33.709Z
2021-05-01T00:00:00.000
{ "year": 2021, "sha1": "5d4e12c8ba24dbd8e1f663a4deaf51e2160dfecb", "oa_license": "CCBYNC", "oa_url": "https://www.ssbfnet.com/ojs/index.php/ijrbs/article/download/1105/857", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d2458b459e47eb43bbc29d38d1c5ae0d56d66a38", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Business" ] }
213804296
pes2o/s2orc
v3-fos-license
Fulfillment of the Right to Social Guidance for Children in Elementary School

The purpose of the research is to examine the actions of teachers in guiding learners to behave and relate socially well in the Al Firdaus primary school, Surakarta, Indonesia. This research uses qualitative methods, and the strategy used in this study is a holistic single case study. Data collection techniques include interviews, documentation and observation. The validity of the data in this study was established using source triangulation and technique triangulation. The data analysis used in this study is a braided or flow analysis model, and purposive sampling was used. The results of this study indicate that guidance and counseling teachers perform several actions related to the proper teaching of good social guidance for learners. These actions take the form of learning to communicate, behave and interact, and learning the understanding and practice of discipline for students. As an additional measure, for students who have problems related to social behavior, teachers provide counseling to the students either individually or in groups regarding the problems faced.

INTRODUCTION

Humans are social beings; every individual is interdependent and interacting. In its development, human interaction does not always run harmoniously. Forms of social problems can happen to anyone, not only to adults; in some cases social problems also appear in children, not just in the play environment but also in the school environment. I have been increasingly aware of the lack of social science taught in schools (Haverback, 2017). One example of a social problem that occurs in children's environments is bullying behavior. The research results of Kumpulainen et al. (1998) show that 5,813 learners in primary schools were observed and divided by the researchers into four groups: bullies, bully-victims, victims and controls. The data collected show that the number of students who were bullies was 470 (8.1%), learners who were bully-victims numbered 441 (7.6%), students who were victims numbered 665 (11.3%), and the number of learners in the control group was 4,247 (73.1%). The conclusions of this study indicate that bullying is a common phenomenon among children who are psychologically disturbed. Bullying also raised the likelihood of being referred for psychiatric consultation. Bullying behavior is one example of the social problems closest to the social environment of children. Schools must focus not only on improving the quality of teachers and teaching materials, but also on the social environment in schools (Nyhus, 2016). Pupils have a wide range of counseling needs, including social, emotional and career needs. The above problems, such as bullying and other negative social behavior, if continuous, will have a negative impact on the person in question as well as on other children in the school environment, and the negative effects of social problems that occur more broadly could affect harmony in the social environment. Regarding the social problems that are often found in children in elementary school, there is a field which specifically contributes to preventing and addressing these problems, namely the field of guidance and counseling. Guidance and counseling services are a significant and inseparable part of the educational process (Demirel & Yazgunoglu, 2013).
The role of guidance and counseling in guiding learners with problems therefore deserves attention: how it contributes, and whether the actions carried out prove effective in dealing with social problems or not. Interaction with teachers and with fellow students appears to be important (Nyhus, 2016). The person serving in the school counseling service is called the counselor (Sevinc, Tasci and Demir, 2012). The importance of this study lies in examining how the social problems faced by learners, which need to be handled, are actually addressed in practice. In the social sciences, the problem of access to the research field has not been widely or systematically studied and remains under-theorized (Richard & Belanger, 2018). Social problems deserve particular attention when a problem is protracted and keeps recurring, because it can then become a psychological problem for the student and turn the learner into a problematic individual. To overcome the problems that arise in learners, appropriate treatment is required for each problem, and to maximize the quality of education, existing problems ought to receive proper treatment as soon as possible. METHOD Qualitative research was carried out, with a case study as the chosen strategy. Qualitative research is research aimed at understanding in depth the phenomena experienced by the research subjects by describing the data and using a variety of scientific methods (Moleong, 2013: 6), while a case study is a form of qualitative research that provides an in-depth description of the data and is bounded by a particular case (Marriam, 2009: 40). The design used in this study is a holistic single case study, chosen because the issues examined relate to the social field of guidance and counseling. The purpose of the research is to gain an understanding of how guidance and counseling programs in the social field are implemented at the Al Firdaus primary school, Surakarta. The research was conducted at the Al Firdaus primary school, Surakarta, specifically in class 3B. The school was selected by purposive sampling: the researchers defined specific characteristics suited to the purpose of the research so that the results could be expected to answer the research problems. The researchers chose this elementary school for two reasons: (1) the heterogeneous background of the students (regular students and students with special needs); and (2) the existence of a guidance and counseling program. The research process is inseparable from the object under study; in this case the researchers observed the guidance and counseling teachers, assistant teachers, learners, and the parties directly involved in the research. In qualitative research the main instrument is the researcher. To support the results of the study, several data collection techniques were used. In-depth interviews were conducted with resource persons and related parties in order to obtain the required information, namely the guidance and counseling teachers, assistant teachers, the principal, and other related parties. In the observations the researchers played a passive role: they were present at the location but acted only as passive observers of the context. The targets of observation were the activities of teachers and learners during the implementation of guidance and counseling programs in the social field in the elementary school.
The results of the observations were described and then selected in order to obtain the required data. Documentation was used to support the research; the documents here include recorded interviews, documents and school records relating to the implementation of guidance and counseling, and video and photographs taken during the study. To obtain maximum stability, correctness and accuracy, the validity of the data was examined using triangulation. The form chosen was source triangulation, which uses a variety of different available sources from which the same or similar data are drawn (Sutopo, 2006: 93); in this study the sources were the guidance and counseling teacher, the accompanying teachers, and the principal. Technique triangulation, which involves collecting similar data using different data collection techniques (Sutopo, 2006: 95), was also applied; in this study the techniques used were interviews, documents, and observation of the guidance and counseling teacher. Based on the research process, the data analysis technique used in this study is the flow analysis model, which has three components: (1) reducing the data on social guidance for learners, (2) presenting the data as written text based on the reduced data, and (3) drawing conclusions and verification (Miles & Huberman, 1984: 23). RESULTS AND DISCUSSION The main aim of this research is to observe the social behavior of the students and how the guidance and counseling (GC) teachers teach good and proper social relationships. Student satisfaction with the guidance of teachers, with the material and with the social environment plays an important role in stimulating effort both in the classroom and in homework (Nyhus, 2016). Based on the findings at the Al Firdaus primary school, the guidance and counseling program is implemented by the class teachers; in this case study, conducted in class 3B, there are two class teachers, and four special supervising teachers also accompany the students with special needs. An investigation of guidance and counseling should begin by stressing that anyone who teaches social skills is also an educator and adviser (Hamidi & Bagherzadeh, 2010). Class 3B consists of 30 students, categorized as 26 regular students and 4 students with special needs. Based on the observations, the social behavior of the learners is not free from negative behavior. The issue of social responsibility is not given much attention in Jamaican schools, which has led to a continuous decline in the moral and spiritual dimension of schools in that country (Roofe, 2018). Factors that affect student performance, such as education, teacher guidance, social factors, transport facilities, independent study, reading books and homework, all correlate positively or negatively with student achievement (Saeed, Gondal & Bushra, 2005). In connection with students' negative social behavior, teachers provide different guidance and counseling in each case; the guidance is often embedded in lessons, with teachers providing it by linking it to the subjects they teach.
Teachers also set aside particular times for guidance: in groups, where guidance and advice are given to all learners in the classroom, or privately, where learners with problems are specifically called to the counseling room to be given instruction and advice. This was stated by Mr. ESS. In this way teachers provide the best possible service for students with problems both at school and at home. Consultation is a basic staple of guidance (Hamidi & Bagherzadeh, 2010), and the collection of data among participants of a group, community or organization is an important step in social research (Richard & Belanger, 2018). Based on the observations and the interviews with teachers, there are several preventive actions and forms of teaching through which teachers teach learners to behave socially well. First, teachers teach learners to communicate well, both orally and in writing. By teaching communication, teachers are expected to help learners mix and socialize. Among the teachers' efforts are teaching learners how to get acquainted, ask for help, say thank you, write letters, conduct interviews and give presentations, and constantly reminding students to respect the person they are talking to. As Mrs. EAH stated: "... this comes out more in learning activities such as interviews, which are a communication skill. During the interviews the students must also write, and then they are asked to give a presentation ..." (Interview, Mrs. EAH). In addition to such learning activities, guidance in communicating is also given through storytelling, as stated by Mr. ESS: "... we give these children a space to appear; as for coming forward, we take turns so that everyone has a chance. We also give them a chance to express their feelings and experiences, for example writing about their holiday experiences ..." (Interview, Mr. ESS). Second, teachers teach learners to behave and relate socially in the proper way; the behavior of learners in the social environment is never separate from the teacher's supervision. This form of teaching concerns advising and teaching learners to help one another and to respect fellow human beings, such as parents, peers, younger children, and others with special needs. In connection with the teachers' actions in teaching behavior, Mrs. EAH stated: "... the social side is things like mutual cooperation. The class duty roster is a social skill, helping and cooperating with friends, and then the students are also taught to respect older people such as teachers and parents ..." (Interview, Mrs. EAH). Students show respect to teachers by shaking and kissing the teacher's hand, and they work together to clean the classroom. Third, teachers teach learners to maintain harmonious relationships with peers. Teachers help students understand that a friend is family: if there are weaknesses and shortcomings, they should accept one another. Besides explaining the meaning of friendship, the teacher also teaches how to maintain harmonious relationships with peers, such as acts of forgiving and asking forgiveness in the event of a dispute, as expressed by Mr. ESS: "we emphasize to the children that a friend is family; if there are weaknesses and shortcomings we need to accept one another and play together; whatever happens is borne together, and there must be no hostility or bullying among friends, because in the end they themselves will lose out ..." (Interview, Mr. ESS).
Besides giving insight and advice to learners about maintaining harmonious relationships with peers, teachers also provide direct teaching, as expressed by Mrs. EAH: "... sometimes when joking, perceptions differ; if someone should apologize, they are taught to apologize, like that ..." (Interview, Mrs. EAH). Figure 4. Students play together without making distinctions between friends. Fourth, teachers teach the students discipline by explaining the rules that have been made, giving warnings, and applying punishment when needed. By teaching discipline, teachers expect students who obey the rules to be able to place themselves in the social environment in general and in the school environment in particular; the rules that have been made and agreed upon are to be upheld and must not be violated. As Mr. ESS disclosed: "... about the rules of the classroom and the school, we give the children concrete examples so that they know the rules and, let us say, understand how to place themselves in the environment ..." (Interview, Mr. ESS). In connection with the teaching that has been given, the guidance and counseling teacher also takes action, as stated by Mrs. EAH: "... offences are bound to occur, so the students must be advised repeatedly and continuously; if there is a mistake they should be reprimanded, and if the offence is indeed serious they are punished so that the child becomes aware ... for example, the consequence for being late is to stand in the front row ..." (Interview, Mrs. EAH). Figure 5. Rule boards in the classroom and penalties for students who are not disciplined (wearing orange vests that read "AKU BISA DISIPLIN"). Advice and teaching related to communicating, behaving, maintaining harmonious relations among people and behaving with discipline are often given to learners by teachers not only during school hours; teachers also provide advice and instruction whenever there is spare time that allows guidance to be given. One of the important roles of teachers inside and outside the classroom is to provide guidance and counseling to students (Lai-Yeung, 2014). The results of the teachers' actions in guiding learners to behave socially well have already shown positive outcomes: when learners are given advice or a warning about a problem they have created, they can immediately place themselves in the situation. For example, when students are rowdy during a lesson, which involves negative social behavior such as not respecting other people while they are talking and a lack of discipline, since classroom and school rules prohibit being rowdy while learning, the learners are able to compose themselves and pay attention to the teacher after the teacher gives advice and a warning. CONCLUSION AND SUGGESTION The implementation of guidance and counseling in primary schools is very important, considering that learners at this age still need guidance to develop into better persons. Negative actions in the form of social problems need to receive proper treatment immediately, and the guidance and counseling program plays a part in directing learners.
The findings from the field show that the role of guidance and counseling in addressing the social problems of students has a positive impact, both in prevention, through teaching and advice, and in overcoming social problems that have already occurred, through calling in learners with problems and giving them advice. In carrying out their role of providing social guidance, the GC teachers already perform actions such as teaching students to communicate, to behave, to maintain harmonious relationships among people, and to behave with discipline. Although lapses in social behavior still occur, the role of the GC teacher has been shown to result in positive changes in the behavior of learners. As a suggestion for further research, the results of this study can be used as a reference and as a picture of the situation in the field regarding the handling of guidance and counseling in the social domain in elementary school.
2020-02-13T09:12:32.427Z
2020-02-06T00:00:00.000
{ "year": 2020, "sha1": "cf822d16cebbe4c73bfc68b5b6ba1898a0e26b6f", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.2991/assehr.k.200129.046", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "65a20a07b42290061c6d795c901a8f56aaff5727", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Psychology" ] }
118627634
pes2o/s2orc
v3-fos-license
Optical lattices with exceptional points in the continuum The spectral, dynamical and topological properties of physical systems described by non-Hermitian (including $\mathcal{PT}$-symmetric) Hamiltonians are deeply modified by the appearance of exceptional points and spectral singularities. Here we show that exceptional points in the continuum can arise in non-Hermitian (yet admitting and entirely real-valued energy spectrum) optical lattices with engineered defects. At an exceptional point, the lattice sustains a bound state with an energy embedded in the spectrum of scattered states, similar to the von-Neumann Wigner bound states in the continuum of Hermitian lattices. However, the dynamical and scattering properties of the bound state at an exceptional point are deeply different from those of ordinary von-Neumann Wigner bound states in an Hermitian system. In particular, the bound state in the continuum at an exceptional point is an unstable state that can secularly grow by an infinitesimal perturbation. Such properties are discussed in details for transport of discretized light in a $\mathcal{PT}$-symmetric array of coupled optical waveguides, which could provide an experimentally accessible system to observe exceptional points in the continuum. I. INTRODUCTION Non-Hermitian Hamiltonians (NHHs) are widely used to describe open quantum systems in many areas of science [1,2]. Interestingly, in certain cases a NHH H can show an entire real-valued energy spectrum, in spite of non-self-adjointness. Such a remarkable property has been especially investigated for PT symmetric Hamiltonians [3], although several examples of non-PT invariant Hamiltonians yet admitting an entire real-valued energy spectrum have been provided. However, the reality of the spectrum does not correspond to orthogonality of eigenstates and, most importantly, does not ensure diagonalizability, which may be prevented by the presence of exceptional points (EPs) in the point spectrum of H [4][5][6], or of spectral singularities in the continuous part of the spectrum [7]. The physical implications of both EPs and spectral singularities have attracted a great attention in recent years and have been investigated in several physical systems [5][6][7][8][9][10][11][12][13][14][15][16][17]. Exceptional points correspond to degeneracies of a NHH where both eigenvalues and eigenvectors of a finite-dimensional Hamiltonian H coalesce as a system parameter is varied [4,5]. EPs cause PT symmetry breaking in PT symmetric systems of finite dimension. EPs of a NHH exhibit highly non-trivial characteristics compared with those of most common Hermitian degeneracies, especially concerning adiabatic features and geometric phases, which have been demonstrated in a series of experiments using microwave [8] and optical [9] cavities. In the quantum realm, the existence of EPs has been predicted theoretically in a wide range of systems, such as in atomic or molecular spectra [10], in atom waves [11], and in non-Hermitian Bose-Hubbard models [12]. Photonic structures in the presence of gain or loss are a * stefano.longhi@polimi.it natural arena in which EPs can play a role, since they are described by a non-Hermitian operator arising from a complex dielectric function [18]. Optics has provided in the past few years a formidable testbed where the main features of NHH systems, including those with PT symmetry, have been experimentally observed and exploited to mold the flow of light in new ways [19]. 
Examples of optical NHH admitting EPs include lasers [13], coupled waveguides [14,15] and optical resonators [16]. Most of previous theoretical and experimental works on EPs have been limited to consider finite dimensional NHH systems, or EPs of resonance states. In infinitedimensional systems, the appearance of EPs in the continuum of the energy spectrum (not to be confused with spectral singularities [7,17]) has been theoretically studied in few recent works [20][21][22], mainly on a mathematical perspective. EPs in the continuum are energies E 0 embedded in the continuous spectrum of scattered states of H that sustain bound (normalizable) states with a number of associated functions [21]. Hence at an EP in the continuum the NHH H supports bound states similar to so-called bound states in the continuum (BIC) of von-Neumann Wigner type [23] found in the Hermitian case. BIC states have been predicted to occur in a wide range of quantum and classical systems, including atomic and molecular systems [24], semiconductor and mesoscopic structures [25], quantum Hall insulators [26], and Hubbard models [27]. Experimental observations of BIC states in the Hermitian case have been reported in a few recent works using waveguide arrays [28,29] and photonic crystals [30]. However, EPs in the continuum are not just BIC states, since they are defective states [21]. The main physical implication is that, as opposed to a BIC state in an Hermitian system, the BIC state of an EP in the continuum is an unstable state, even though the spectrum of H is entirely real-valued. So far, EPs in the continuum have been predicted to occur for the arXiv:1402.3764v1 [quant-ph] 16 Feb 2014 Schrödinger equation with certain specially-tailored complex potentials [20,21], synthesized by application of a double supersymmetric (Darboux) transformation to the free-particle Hamiltonian [31]. Such special potentials, besides to show an EP, are also transparent potentials. Unfortunately, they are very difficult to be implemented in any physical system. On the other hand, light transport in discretized optical structures [32] provides a feasible laboratory tool where the physical features of NHH can be observed [15,19,33]. In this work we introduce EPs in the continuum in discrete (tight-binding) lattices, which can describe light propagation in arrays of evanescently-coupled optical waveguides, and discuss their dynamical and scattering properties. In particular, the different behavior of EPs in the continuum as compared to ordinary BIC modes of Hermitian lattices is highlighted. The class of discrete optical lattices, with an entirely real-valued energy spectrum and admitting an EP in the continuum, is synthesized by application in a nontrivial way of a double discrete Darboux transformation [34,35] to a homogenous Hermitian optical lattice. The paper is organized as follows. In Sec.II we outline the technique of double Darboux transformation for the discrete Schrödinger equation, whereas in Sec.III the method is applied to synthesize NHH tight-binding lattices with one EP in the continuum. The basic difference between EPs in the continuum and ordinary BIC states of von Neumann Wigner type is also elucidated. In Sec.IV an example of a simple lattice model with modulated hopping rates, which shows an EP at the lattice band center, is presented, and a possible physical realization using an array of evanescently-coupled optical waveguides is suggested. Finally, in Sec. V the main conclusions are outlined. II. 
DARBOUX TRANSFORMATION FOR THE DISCRETE SCHRÖDINGER EQUATION: GENERAL ASPECTS A powerful technique to generate EPs in the continuum for the continuous Schrödinger equation is the application of a multiple Darboux (supersymmetric) transformation to the free-particle equation. Multiple supersymmetric transformations have been used to synthesize either Hermitian [22,31] or non-Hermitian [20,21] potentials supporting BIC states. In particular, in Ref. [21] it was shown that the Schrödinger equation i∂ t ψ = Hψ = −∂ 2 x ψ + V (x)ψ with the specially-tailored complex potential V (x) = 16α 2 [α(x − λ) sin(2αx) + 2 cos 2 (αx)]/[sin(2αx) + 2α(x − λ)] 2 [with Im(λ) = 0, α real] sustains the BIC state ψ 0 (x) = cos(2αx)/[sin(2αx+ 2α(x − λ)] at the energy E = α 2 , which is an EP in the continuum. Moreover, such a defective potential is invisible, i.e. plane waves with any wave number k = ±α are fully transmitted across the defect with unitary transmittance and no phase delay or advancement, as if the defect were absent. Unfortunately, such specially-tailored complex optical potentials are difficult to be implemented in physical systems like e.g. matter waves or optical systems. On the other hand, discrete optical potentials that describe light transport in waveguide array structures or optical mesh lattices could provide an experimentally accessible platform to observe such a kind of invisible defects with EPs in the continuum. In this section we aim to synthesize a tight-binding lattice with EPs in the continuum using a discrete analogue of the double Darboux (supersymmetric) transformation. Extensions of the Darboux transformation to the discrete Schrödinger equation have been seldom discussed in the literature, mainly in the mathematical contexts (see, for instance, [34]). In Ref. [35], Darboux transformations were exploited to synthesize invisible defects in NHH lattices, however such lattices did not show EPs in the continuum. Indeed, to generate EPs in the continuum a non-trivial application of a double Darboux transformation is needed, which is described in details in this section. A. Simple Darboux transformation Let us first provide, for completeness and clearness of the analysis, a brief review of the Darboux transformation technique for the discrete Schrödinger equation, which describes transport of discretized light in optical lattices. For a more comprehensive and extended study of the discrete Darbour technique, we refer the reader to previous references [34,35]. Let us consider a one-dimensional tight-binding lattice described by the Hamiltonian H = n κ n (|n − 1 n| + |n n − 1|) + n V n |n n| (1) where |n is a Wannier state localized at site n of the lattice, κ n is the hopping rate between sites |n − 1 and |n , and V n is the energy of Wannier state |n . The energy spectrum E of H is obtained from the eigenvalue problem of the discrete Schrödinger equation with eigenstate |ψ = n ψ n |n . The point spectrum of H corresponds to energies E with normalizable eigenstates ( n |ψ n | 2 < ∞), whereas the continuous spectrum of H corresponds to to energies E at which the eigenstate is not normalizable but |ψ n | is limited as n → ±∞ (improper eigenfunctions). For instance, for the homogeneous Hermitian lattice, corresponding to κ n = κ and V n = 0, the energy spectrum is purely continuous and given by the usual tight-binding band −2κ ≤ E ≤ 2κ with (improper) eigenstates ψ n = exp(iqn), where −π ≤ q < π is the Bloch wave number and E = 2κ cos q. 
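The homogeneous band quoted above is easy to verify numerically. The following is a minimal sketch (Python/NumPy; the function names are my own and not from the paper) that builds the tridiagonal Hamiltonian of Eq. (1) for a finite open chain with kappa_n = kappa and V_n = 0 and compares its eigenvalues with E = 2 kappa cos q, where for an open chain of L sites the allowed Bloch numbers are q_m = m pi / (L + 1):

import numpy as np

def tight_binding_H(kappa, V):
    # Tridiagonal tight-binding Hamiltonian of Eq. (1): on-site energies V_n on the
    # diagonal, hopping amplitudes on the two off-diagonals.
    H = np.diag(np.asarray(V, dtype=complex))
    off = np.asarray(kappa, dtype=complex)      # off[j] couples sites j and j+1
    return H + np.diag(off, k=1) + np.diag(off, k=-1)

L, k = 200, 1.0
H = tight_binding_H(k * np.ones(L - 1), np.zeros(L))

E_num = np.sort(np.linalg.eigvalsh(H.real))     # Hermitian case: real spectrum
q = np.arange(1, L + 1) * np.pi / (L + 1)       # open-chain quantized Bloch numbers
E_band = np.sort(2 * k * np.cos(q))
print(np.max(np.abs(E_num - E_band)))           # agrees to machine precision; band fills [-2k, 2k]

The same helper can be reused for the inhomogeneous, non-Hermitian lattices obtained below from the Darboux construction, simply by passing complex-valued hopping and site-energy arrays.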
Note that H turns out to be Hermitian provided that the hopping amplitudes κ n and site energies V n are real-valued parameters. Let us indicate by H 1 the tight-binding Hamiltonian defined by Eq.(1) with hopping amplitudes and site energies given by κ (1) n and V (1) n , respectively, and let us assume that κ (1) n → κ > 0 and V (1) n → 0 as n → ±∞, i.e. that the lattice is asymptotically homogeneous. Let us then indicate by |φ (1) = n φ (1) n |n one of the two linearlyindependent solutions to the discrete Schrödinger equation where µ 1 can or can not belong to the spectrum of H 1 . It can be then shown that the following factorization for H 1 holds [35] where and Let us then introduce the new Hamiltonian H 2 obtained from H 1 by interchanging the operators R 1 and Q 1 , i.e. let us set H 2 will be referred to as the partner Hamiltonian of H 1 . H 2 describes the Hamiltonian of a tight-binding lattice [i.e., it is of the form (1)] with hopping amplitudes and site energies {κ n } given by Note that, to avoid the occurrence of divergences in κ (2) n and V (2) n , the sequence φ (1) n is generally requested not to vanish at any n. The following properties then hold: (i) If |ψ satisfies the discrete Schrödinger equation H 1 |ψ = µ|ψ with µ = µ 1 , then the state |ξ = R 1 |ψ , i.e. ξ n = r (1) n ψ n +r (1) n ψ n−1 (14) satisfies the equation H 2 |ξ = µ|ξ . (ii) The equation H 2 |ψ = µ 1 |ψ is satisfied for |ψ = |θ with Wannier amplitudes We remark that in the previous relations µ and µ 1 can or cannot belong to the spectrum of H 1 . An important consequence of the two above properties is that the two Hamiltonians H 1 and H 2 are isospectral, i.e. they have the same energy spectrum, apart form E = µ 1 , which might or might not belong to the spectrum of either one of H 1 or H 2 . B. Double Darboux transformation According to the analysis of the previous subsection, the function |θ = n θ n |n defined by Eq. (15) is a solution to the discrete Schrödinger equation H 2 |θ = µ 1 |θ . The most general solution |φ (2) to the same equation can be readily calculated and, apart from an unessential multiplication constant, reads where λ is an arbitrary complex-valued number. Besides the decomposition (11), we can also formally write where we have set and In the previous equations, κ (2) n and φ (2) n are defined by Eqs. (12) and (16), respectively. We then introduce the new Hamiltonian H 3 obtained from H 2 by interchanging the operators R 2 and Q 2 , i.e. The Hamiltonian H 3 describes a tight-binding lattice with hopping amplitudes and site energies {κ n } given by The Hamiltonian H 3 is isospectral to H 1 , apart from the energy value E = µ 1 which needs to be separately investigated. The state |ω defined by satisfies the equation H 3 |ω = µ 1 |ω . For µ = µ 1 , the solution |ξ to the equation H 3 |ξ = µ|ξ can be readily obtained from the solution |ψ of the discrete Schrödinger equation III. TIGHT-BINDING LATTICES WITH EXCEPTIONAL POINTS IN THE CONTINUUM In this section we apply the double discrete Darboux transformation, discussed in the previous section, to synthesize and then illustrate the properties of NHH lattices with Hamiltonian H 3 and with the energy E = µ 1 being an exceptional point in the continuum. A. Lattice synthesis As a starting Hamiltonian H 1 , we assume the homogeneous Hermitian lattice κ n = κ, V n = 0, and take as a solution φ (1) n to Eq.(3) the following one with µ 1 = 2κ cos q 0 . In Eq. 
(29), q 0 and σ are realvalued parameters, which are chosen such that φ (1) n is non-vanishing. Typically, we will assume q 0 a rational number, so that there exists a finite number > 0 such that |φ (1) n | > for any integer n. Such a condition ensures that the hopping rates and potential of the intermediate Hamitonian H 2 are non-singular and bounded. Note that µ 1 belongs to the continuous spectrum of H 1 , i.e. it is embedded into the lattice band −2κ ≤ E ≤ 2κ. For the given choice of H 1 , µ 1 and φ (1) n , a double Darboux transformation can be applied following the procedure outlined in the previous section. After some lengthy calculations, the following expression for the hopping rates and potential of the partner Hamiltonian H 3 can be derived where we have set and where λ is an arbitrary complex parameter. The state |ω , which satisfies the equation H 3 |ω = µ 1 |ω , can be calculated from Eq. (27) and reads explicitly whereas the intermediate function |φ (2) , defined by Eq. (16), reads B. Lattice properties Equations (30)(31)(32)(33)(34) are the main result of our analysis, on the basis of which the following important properties can be outlined. (i) Since ρ n → ∞ as n → ±∞, one has κ (3) n → κ and V (3) n → 0 as n → ±∞, i.e. the lattice described by the Hamiltonian H 3 is asymptotically an homogenous lattice. Physically, this means that we are dealing with an homogenous lattice with some defects. Moreover, since κ n and V (3) n are generally complex-valued, the lattice is non-Hermitian (though rather generally H 3 is not PT symmetric). An example of distributions of lattice hopping rates and site energy potentials, synthesized by the double discrete Darboux transformation, is shown in Figs.1(a) and (b). (ii) ω n is vanishing like ∼ 1/n as n → ∞, i.e. n |ω n | 2 < ∞ and thus the energy E = µ 1 = 2κ cos q 0 belongs to the point spectrum of H 3 . Hence |ω , defined by Eq. (33), is a BIC state for the lattice Hamiltonian H 3 . An example of the BIC mode distribution is shown in Fig.1(c). (iii) The continuous spectrum of H 3 is the energy interval −2κ ≤ E ≤ 2κ, with E = µ 1 . For any energy E = µ in such an interval, with µ = µ 1 , there are two nonnormalizable (but limited) linearly-independent eigenfunctions ξ ± n of H 3 , which can be readily obtained from the plane-wave (Bloch) eigenstates ψ n (q) ∼ exp(±iqn) of H 1 via the linear transformation (28). They read explicitly where µ = 2κ cos q, and ρ n are given by Eq. (32). The asymptotic behavior of the solutions ξ ± n (q) as n → ±∞ reads from which it follows that the defect of the asymptotically-homogenous lattice defined by the Hamiltonian H 3 is invisible, i.e. the transmission coefficient t(q) for any incident Bloch wave, with wave number q = ±q 0 , is equal to one [t(q) = 1 for q = ±q 0 ]. (iv) The energy E = µ 1 = 2κ cos q 0 is an EP in the continuum of H 3 with N = 2 algebraic multiplicity. This means [21] that there exists an associated function f n to the BIC eigenstate ω n , which is a not-normalizable but limited function as n → ∞ [36], such that The explicit expression of the associated function |f is derived in the Appendix A and reads where we have set . (43) f n turns out to have the following asymptotic behavior as n → ±∞ (see the Appendix A) An example of the associate function f n to the BIC mode is shown in Fig.1(d). C. 
Exceptional Points and Bound States in the Continuum of von-Neumann Wigner type As shown in the previous subsection, the energy E = µ 1 embedded into the continuous lattice spectrum belongs to the point spectrum of H 3 and the normalizable state |ω is effectively a BIC mode of the von-Neumann Wigner type [23]. However, there is a deep physical difference between the properties of an ordinary BIC mode of von-Neumann Wigner type in Hermitian systems and an EP in the continuum. To clarify this point, let us notice that an ordinary BIC state |ω in an Hermitian system is a marginally stable state, i.e. a perturbation added to |ω does not secularly grow. Conversely, the BIC mode |ω in a NHH which is an exceptional point in the continuum turns out to be an unstable state, even though the spectrum of the NHH is entirely real-valued. This means that a perturbation can induce a secular growth of the amplitude of the BIC mode |ω , a feature which is a clear signature of the 'defective' nature of the EP. To show the unstable behavior of the BIC at an EP, let us note that the Schrödinger equation is satisfied by the function for any arbitrary value of the constant , where |f is the associated function to the BIC mode |ω . This means that any arbitrarily small perturbation shaped like the associated function f n will lead to a secular growth of the amplitude of the BIC mode. Correspondingly, the norm | ψ(t)|ψ(t) | of the wave function will grow linearly with time t, in spite the Hamiltonian has an entirely real energy spectrum. An example will be discussed in more details in the next section. It should be finally noticed that, like for the continuous Schrödinger equation [21], the presence of an exceptional point in the continuum makes the mathematical problem of the resolution of the identity for the Hamiltonian H 3 rather sophisticated. This nontrivial problem, however, will not be considered in the present work. IV. A SIMPLE PT -SYMMETRIC OPTICAL LATTICE WITH AN EXCEPTIONAL POINT IN THE CONTINUUM In this section we present a rather simple example of a NHH lattice that can describe light transport in arrays of evanescently-coupled optical waveguides, and discuss the physical properties of the EP in the continuum. The lattice is obtained using the double discrete Darboux transformation, outlined in the previous section, in the limiting case λ = 1, σ = π/2 and q 0 → π/2. For such parameter values, from Eqs. (30)(31)(32) one obtains the isospectral lattice described by the Hamiltonian H 3 with κ (3) n = κ (n + 1)/(n − 1) n even (n − 2)/n n odd (47) Note that the lattice has a inhomogeneous distribution of the hopping rates κ (1) n = 0). The distribution of the inhomogenous hopping rates is shown in Fig.2(a). The non-Hemitian nature of the lattice comes from the fact that the hopping rates κ 0 and κ 1 are imaginary, namely κ 0 = κ 1 = √ −1κ = ±iκ; for n = 0, 1, the hopping rates κ (3) n are instead real-valued. Note that there is some arbitrariness at this stage in the choice of the sign of κ 0 and κ 1 , i.e. one can take κ 0 = −κ 1 = iκ or κ 0 = κ 1 = iκ. The two choices, however, are essentially equivalent, since one can switch from one to the other by application of a π phase slip to the amplitudes c n above (or below) the n = 0 site. We will consider here the case κ 0 = −κ 1 = iκ, for which the lattice turns out to be PT symmetric. 
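The instability mechanism invoked in Sec. III.C above (secular growth of the norm despite an entirely real spectrum) is a generic feature of defective degeneracies and can be illustrated with a deliberately minimal toy model before looking at the full lattice. The sketch below is not the lattice Hamiltonian H_3: it is a 2x2 Jordan block with the single real eigenvalue 0, in which the first basis vector plays the role of the bound state |omega> and the second plays the role of the associated function |f> (so that H|omega> = 0 and H|f> = |omega>). Evolving |omega> + eps |f> shows the linear-in-time growth of the norm described in the text:

import numpy as np
from scipy.linalg import expm

H = np.array([[0.0, 1.0],
              [0.0, 0.0]], dtype=complex)       # defective matrix: eigenvalue 0, algebraic multiplicity 2
omega = np.array([1.0, 0.0], dtype=complex)     # "BIC mode":        H @ omega = 0
f     = np.array([0.0, 1.0], dtype=complex)     # associated vector: H @ f = omega

eps = 1e-3
for t in (0.0, 1e2, 1e3, 1e4):
    psi = expm(-1j * H * t) @ (omega + eps * f)
    print(t, np.linalg.norm(psi))               # norm grows ~ eps*t although the spectrum is real

Since H^2 = 0, the propagator is exactly exp(-iHt) = 1 - iHt, which makes the secular growth of the amplitude along |omega> explicit.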
In coupled optical waveguides, an effective imaginary coupling constant can be realized by suitable longitudinal modulation of complex refractive index in the central waveguide n = 0 of the lattice, as discussed in details at the end of this section (see also [35]). The inhomogeneous coupling constants κ n for n = 0, 1 can be readily obtained by judicious waveguide spacing, as demonstrated for instance in the experiment of Ref. [29]. A schematic of the optical waveguide array is shown in Fig.2(b). The EP in the continuum occurs at the energy µ 1 = 2κ cos q 0 = 0. The BIC mode ω n and corresponding associated function f n , as obtained from Eqs. (33) and (42), read explicitly Note that the BIC mode ω n has an algebraic decay and is similar to the von-Neumann Wigner BIC mode recently predicted and observed in Ref. [29] for an Hermitian lattice with inhomogeneous hopping rates. However, as briefly mentioned in the previous section and discussed now in more details, the scattering and dynamical properties of a BIC at an EP deeply deviate from those of an ordinary BIC state in an Hermitian lattice. To clarify such a point, let us consider an associated Hermitian optical lattice defined by the Hamiltonian H 4 with (n + 1)/(n − 1) n even, n = 0 (n − 2)/n n odd, n = 1 1 Basically, the Hermitian optical lattice H 4 is obtained from the PT -symmetric lattice H 3 by replacing the imaginary hopping rates κ Note that such a BIC mode simply deviates from the BIC mode ω n of H 3 because of the different amplitude at lattice site n = 0 [compare Eqs. (49) and (53)]. Obviously, E = 0 is not an EP for H 4 , because H 4 is Hermitian. In addition to the BIC mode, the Hermtian lattice H 4 sustans other two bound states with energy E ±2.31κ outside the lattice band, i.e. ordinary bound states in the gap (like those discussed in Ref. [29]). To highlight the different physical properties between the BIC mode at the EP point for the non-Hermitian lattice H 3 and the ordinary BIC mode for the Hermitian lattice H 4 , we compare the propagation and scattering features of the two lattices. Propagation: single-site excitation. In Fig.3 we show the numerically-computed evolution of the optical light intensity |c n (t)| 2 for excitation of the n = 0 lattice waveguide, i.e. for the initial condition c n (0) = δ n,0 , where in the optical context t is an effective propagation distance. In the figure, the evolution of the square root of the normalized total optical power P (t)/P (0), with P (t) = n |c n (t)| 2 , is also shown. Note that, owing to the existence of the BIC mode, localization is observed in both the Hermitian and PT -symmetric optical lattices. In the Hermitian case, a clear mode beating is observed, which arises from the excitation of the BIC mode and the other bound states in the gap; this scenario is similar to the one observed in the experiment of Ref. [29]. Obviously the total optical power is conserved in this case. Conversely, for the PT -symmetric lattice [ Fig.3(a)] the optical power shows a secular growth with time, in spite of the entire real-valued energy spectrum. As discussed in Sec.III.C, such an algebraic growth (P (t) ∼ t 2 ) is the clear signature of the instability of the BIC state because E = 0 is an EP in the continuum. Scattering of plane waves. To compare the scattering properties of the two optical lattices, we assume that the lattices are effectively homogeneous far from the defective region, i.e. we take k n = κ for n ≤ −N and n ≥ N + 1, where N is a large enough integer [37]. 
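Before turning to the scattering comparison, note that the propagation behavior of Fig. 3 can be reproduced with a short script once the hopping profile is specified. The sketch below is only a framework: the placeholder profile keeps the two imaginary central couplings kappa_0 = -kappa_1 = i*kappa from the text but leaves the remaining inhomogeneous couplings of Eq. (47) to be filled in by the reader (their printed form above may have lost a square root or similar detail during text extraction, so they are not reproduced here). According to the text, with the full PT-symmetric profile the total power should grow secularly, P(t) ~ t^2, while the Hermitian counterpart H_4 conserves it.

import numpy as np
from scipy.linalg import expm

def lattice_H(kappa):
    # Symmetric (not conjugate-symmetric) tridiagonal matrix, so that i dc/dt = H c
    # reproduces the coupled-mode equations; kappa[j] is the coupling between sites j
    # and j+1, and complex entries make H non-Hermitian.
    L = len(kappa) + 1
    H = np.zeros((L, L), dtype=complex)
    j = np.arange(L - 1)
    H[j, j + 1] = kappa
    H[j + 1, j] = kappa
    return H

def total_power(kappa, times, n0):
    H = lattice_H(kappa)
    c0 = np.zeros(H.shape[0], dtype=complex)
    c0[n0] = 1.0                                # single-site excitation, c_n(0) = delta_{n,n0}
    return [float(np.sum(np.abs(expm(-1j * H * t) @ c0) ** 2)) for t in times]

# Placeholder profile: homogeneous couplings except for the two imaginary ones around
# the central site (kappa_0 = -kappa_1 = i*kappa); substitute the full Eq. (47) profile here.
L, k = 121, 1.0
kappa = k * np.ones(L - 1, dtype=complex)
c = L // 2
kappa[c - 1], kappa[c] = 1j * k, -1j * k

print(total_power(kappa, np.linspace(0.0, 20.0, 11), n0=c))   # P(t) ~ t^2 expected at an EP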
In this case, the scattering states of the lattice in the homogenous regions n ≤ −N and n ≥ N are Bloch states with wave number q, corresponding to the energy E = 2κ cos q. For a wave incident from the left side, the scattered state has the form where t(q) and r(q) are the spectral transmission and reflection (for left-side incidence) coefficients. An accurate computation of t(q) and r(q) can be accomplished by a standard transfer matrix method, as discussed in the Appendix B. In Fig.4 we compare the spectral transmission and reflection coefficients for the two lattices H 3 and H 4 , assuming N = 200. Note that, according to the theoretical analysis of Sec.III.B, the defect in the non-Hermitian lattice H 3 is effectively invisible for q = π/2, i.e. r(q) = 0 and t(q) = 1 like in an effective homogeneous lattice. Near q = π/2, a narrow structured resonance deep (peak) in the spectral transmittance (reflectance) is observed, whose width shrinks as the number N is increased. Conversely, the defect in the Hermitian lattice H 4 is not manifestly an invisible defect. Finally, let us discuss a possible method to realize effective complex hopping rates κ valued (Hermitian) coupling to be determined. In practice, inhomogeneous values of the couplings are realized by controlling the waveguide spacing [29]. At the central waveguide n = 0, we superimpose a longitudinal modulation of both effective propagation constant and optical gain/loss, described by a complex periodic function γ(t) of the propagation distance t with spatial period Λ; see Fig.2(b) for a schematic of the optical structure. Light propagation in such an array of waveguides is described by the set of coupled-mode equation i da n dt = κ n a n−1 + κ n+1 a n+1 + γ(t)δ n,0 a n for the modal amplitudes a n (t) of light trapped in the various guides. After setting a n (t) = c n (t) for n = 0 and a 0 (t) = c 0 (t) exp[−i t 0 dξγ(ξ)], for a rapidly oscillating function γ(t), i.e. for Λ smaller than ∼ 2π/κ, the rotating-wave approximation can be applied, leading to the following set of effective coupled-mode equations for the amplitudes c n (z) [35] i dc n dt = κ n c n−1 + κ n+1 c n−1 n = 0, ±1 where we have set The lattice with effective hopping rates (47) is thus obtained provided that the longitudinal modulation γ(t) of the complex refractive index in waveguide n = 0 is chosen to satisfy the constraint This condition can be realized for a wide range of modulation profiles. Let us discuss two possible cases. (i) Sinusoidal modulation. Let us assume a modulation of the effective refractive index of the form where α and β are the modulation depths of the real (propagation constant detuning) and imaginary (gain/loss term) parts of the modulation. In this case one has R ± = J 0 (Γ), where Γ = Λ(α + iβ)/(2π) and J 0 is the Bessel function of first kind and zero order. If, for instance, we assume α = 2 × 2π/Λ and β = −2.096 × 2π/Λ, one has R ± = J 0 (Γ) 1.9414i and the condition (61) is thus satisfied for ∆ κ/1.9414. V. CONCLUSIONS The spectral and dynamical properties of non-Hermitian (including PT -symmetric) systems are strongly influenced by the appearance of exceptional points and spectral singularities. In Hamiltonians with an entire real energy spectrum, such singular energies usually appear at the onset of PT symmetry breaking and are responsible for a secular (unstable) growth of the wave function in spite of the reality of the energy spectrum. 
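The numerical claim made for the sinusoidal modulation can be verified in one line. With Gamma = Lambda*(alpha + i*beta)/(2*pi), the quoted values alpha = 2 x 2*pi/Lambda and beta = -2.096 x 2*pi/Lambda give Gamma = 2 - 2.096i, and the zero-order Bessel function of this complex argument is indeed almost purely imaginary, J_0(Gamma) ~ 1.9414i, so the condition quoted in the text is met for Delta ~ kappa/1.9414. A quick Python/SciPy check:

from scipy.special import jv

Gamma = 2.0 - 2.096j          # Gamma = Lambda*(alpha + 1j*beta)/(2*pi) for the quoted alpha, beta
R = jv(0, Gamma)              # effective renormalization R_+/- = J_0(Gamma)
print(R)                      # approximately 1.9414j, i.e. an almost purely imaginary coupling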
Exceptional points are generally found in finite dimensional Hamiltonians with a discrete energy spectrum, whereas spectral singularities are defective states belonging to the continuous energy spectrum and thus can not be found in finite-dimensional systems. In this work we have shown that a novel class of exceptional points, namely exceptional points in the continuum, can arise in non-Hermitian optical lattices with engineered defects. At an exceptional point, the lattice sustains a bound state with an energy embedded in the spectrum of scattered states, similar to the von-Neumann Wigner bound states in the continuum of Hermitian lattices. Such states can be sustained in defective lattices synthesized by application of a double discrete Darboux (supersymmetric) transformation to the homogeneous Hermitian lattice. The dynamical and scattering properties of bound states in the continuum at an exceptional point are deeply modified by the defective nature of an exceptional point. In particular, contrary to the usual von-Neumann Wigner bound states of Hermitian systems, the bound states in the continuum at an exceptional point are unstable states that can secularly grow by an infinitesimal perturbation. Such properties have been discussed in details for transport of discretized light in a PT -symmetric array of coupled optical waveguides, which could provide an experimentally accessible system to observe exceptional points in the continuum. Appendix A: Associated function to the BIC mode In this Appendix we show that the energy E = µ 1 is an exceptional point of the Hamiltonian H 3 with algebraic multiplicity N = 2, i.e. that there exists a nonnormalizable but limited associated function |f satisfying Eq.(41) given in the text. To this aim, let us consider the following linear combination of improper eigenfuctions of H 3 F n (q) = i µ − µ 1 4κ sin q 0 ξ + n (q) exp(iσ) − ξ − n exp(−iσ) (A1) where µ = µ(q) = 2κ cos q, µ 1 = µ(q 0 ), and ξ ± n (q) are defined by Eqs. (35)(36)(37)(38) given in the text. Contrary to ξ ± n (q) which are singular at q = q 0 , F n (q) is a limited and non-singular function for any value of q, including q = q 0 . In fact, after some tedious but straightforward algebra one can show that the BIC mode ω n is obtained from F n (q) in the limit q → q 0 , i.e. ω n = lim q→q0 F n (q). (A2) Let us introduce the shifted Hamiltonian H = H 3 − µ 1 and the function ψ n (q) = F n (q)/(q − q 0 ). Hence H ψ n (q) = (µ − µ 1 )ψ n (q) for any q in the neighborhood of q 1 , with q = q 1 , and µ = µ(q). Let us then apply the operator H to the function (q − q 0 )(∂ψ n /∂q). One has i.e. Since ψ n + (q − q 0 )(∂ψ n /∂q) = (∂F n /∂q), from Eq.(A4) one obtains If we take the limit of both sides in Eq.(A5) for q → q 0 , since µ − µ 1 → 0, F n (q) → ω n and ∂F n /∂q is a nonsingular function, one has H lim q→q0 ∂F n ∂q = ∂µ ∂q q0 ω n (A6) Taking into account that (∂µ/∂q) q0 = −2κ sin q 0 , H = H 3 − µ 1 and using Eqs. (35) and (A1), one finally obtains where we have set and G n (q) is defined by Eq.(43) given in the text. It should be noted that (∂G n /∂q) contains secularly growing terms with the power law ∼ n as n → ∞, however it can be readily shown from Eqs. (36,37,38) and (43) that such terms vanish when taking the limit q → q 0 , i.e. f n is a limited function with respect to index n. More precisely, the following asymptotic behavior for the associated function f n as n → ±∞ can be readily obtained after taking the limit q → q 0 in Eq.(A8) f n − 1 2κ sin q 0 sin[q 0 (n − 1) + σ].
2014-05-30T05:26:24.000Z
2014-02-16T00:00:00.000
{ "year": 2014, "sha1": "33bedcc6e86650032bd9a1bd105dc2cfd7744dfc", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1402.3764", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "33bedcc6e86650032bd9a1bd105dc2cfd7744dfc", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
249613889
pes2o/s2orc
v3-fos-license
Drug-Related Hospital Admissions via the Department of Emergency Medicine: A Cross-Sectional Study From the Czech Republic Background: Drug-related hospital admissions (DRAs) represent a significant problem affecting all countries worldwide. This study aimed to determine the prevalence and preventability of DRAs, identify the most common medications involved in DRAs, the most common clinical manifestations of DRAs and describe the preventability aspects of DRAs. Methods: This cross-sectional study examined unplanned hospital admissions to the University Hospital Hradec Králové via the department of emergency medicine in August–November 2018. Data were obtained from electronic medical records. The methodology of DRA identification was adapted from the OPERAM DRA adjudication guide. Results: Out of 1252 hospital admissions, 195 DRAs have been identified (145 related to treatment safety, 50 related to treatment effectiveness). The prevalence of DRAs was 15.6% (95% CI 13.6–17.6). The most common medication classes involved in DRAs related to treatment safety were Antithrombotic agents, Antineoplastic agents, Diuretics, Corticosteroids for systemic use, and Beta blocking agents. The most common medication classes involved in DRAs related to treatment effectiveness included Diuretics, Antithrombotic agents, Drugs used in diabetes, Agents acting on the renin-angiotensin system, and Lipid modifying agents. Gastrointestinal disorders were the leading causes of DRAs related to treatment safety, while Cardiac disorders were the leading causes of DRAs related to treatment effectiveness. The potential preventability of DRAs was 51%. The highest share of potential preventability in medication classes repeatedly involved in DRAs related to treatment safety was observed for Anti-inflammatory and antirheumatic products, Psycholeptics, and Drugs used in diabetes. Potentially preventable DRAs related to treatment safety were most commonly associated with inappropriate drug selection, inappropriate monitoring, inappropriate dose selection, and inappropriate lifestyle measures. On the contrary, DRAs related to treatment effectiveness were more commonly associated with medication nonadherence. Conclusion: It should be emphasized that in most DRAs, medications were only a contributory reason of hospital admissions and that benefits and risks have to be carefully balanced. It is highlighted by the finding that the same medication classes (Antithrombotic agents and Diuretics) were among the most common medication classes involved in DRAs related to treatment safety and simultaneously in DRAs related to treatment effectiveness. The study highlighted that apart from problems related to prescribing, problems related to monitoring and patient-related problems represent significant preventability aspects. INTRODUCTION Drug-related hospital admissions (DRAs) represent a significant problem affecting all countries over the world. Although many studies have focused on adverse drug reactions (ADRs) leading to hospital admissions, fewer studies have addressed broader concepts, such as adverse drug events (ADEs) and drugrelated problems (DRPs). Multiple terms and definitions are used to describe medication harm in research and clinical practice (Falconer et al., 2019). ADEs could be defined as injuries caused by drug use that encompass ADRs and harm resulting from medication errors-they are the targets of broader efforts to improve patient safety (Nebeker et al., 2004). 
A DRP is an event or circumstance involving drug therapy that actually or potentially interferes with desired health outcomes (Pharmaceutical Care Network Europe Association, 2020). DRPs are divided into two main domains-DRPs related to treatment effectiveness (problem with the effect of the pharmacotherapy) and DRPs related to treatment safety (patient suffers, or could suffer, from an ADE). The third domain ("Other") includes unnecessary drug treatment (Pharmaceutical Care Network Europe Association, 2020). While on the one hand, the use of medications might lead to ADEs, their use reduces hospital admissions as well. For example, the following medication classes were found to reduce emergency hospitalizations: angiotensin-converting enzyme inhibitors, angiotensin II receptor blockers, aldosterone receptor antagonists, statins, long-acting muscarinic antagonists, and long-acting beta-2 adrenoceptor agonists (Bobrovitz et al., 2018). Therefore, DRPs related to treatment effectiveness should also be the focus when studing DRAs. So far, only a few studies have examined the extent to which DRPs contribute to hospital admissions. Recently, new tools (Thevelin et al., 2018;Kempen et al., 2019) have also incorporated DRPs related to treatment effectiveness. These include omission of an evidence-based drug, inappropriate selection of a drug or a dosage form, inappropriate administration, subtherapeutic dose, too short duration of treatment, medication nonadherence, inappropriate monitoring, inappropriate discontinuation, drug-drug interaction and drug-food interactions. The concern should not only be minimizing the risks of pharmacotherapy, but also maximizing the effectiveness of pharmacotherapy (ensuring that the goals of treatment are reached). DRPs can be prevented primarily by appropriate pharmacotherapy (selection of medications and their formulation, dosing scheme, and duration of treatment-both prescribed and over-the-counter medications), appropriate use and administration of medications, appropriate medication adherence, appropriate monitoring (whether treatment goals are reached, risk factors of complications of the disease, occurrence of ADR and risk factors of ADRs), and appropriate lifestyle measures (e.g., fluid and food intake, smoking, alcohol consumption, sunscreen use). As indicated by the definition of DRP, a DRP can be either potential (possibly leading to real problems for the patient) or actual/manifest (the problem already impacts the patient and his therapy) (Westerlund, 2019). Admission to the hospital can be a measurable outcome of manifest DRP. Numerous studies have been conducted on DRAs from highincome countries. However, there are fewer studies from low-and middle-income countries and central and eastern Europe. This is the first study from the Czech Republic that examines DRAs without any department or age limit. In previous studies from the Czech Republic, the population studied was either from the pediatric ward (Langerová et al., 2014) or the geriatric ward (Maříková et al., 2021). Reducing avoidable medication-related harm remains a difficult global patient safety challenge. Studies measuring the scope and nature of preventable ADEs can provide essential knowledge for the development of risk minimization measures. 
The study aimed to provide information on: a) the prevalence of DRAs to the University Hospital Hradec Králové via the department of emergency medicine, b) the most common medications involved in DRAs, c) the most common clinical manifestations of DRAs, d) the potential preventability of DRAs, e) medications most frequently associated with potentially preventable DRAs, f) the most common clinical manifestations of potentially preventable DRAs, and g) preventability aspects most frequently associated with potentially preventable DRAs. Study Design and Setting This observational cross-sectional study examined hospital admissions to the University Hospital Hradec Králové via the Frontiers in Pharmacology | www.frontiersin.org June 2022 | Volume 13 | Article 899151 department of emergency medicine in order to identify those which are drug-related. Hospital admissions were identified using a register of all hospital admissions to the University Hospital Hradec Králové via the department of emergency medicine. Most of the patients were admitted to the departments of internal medicine (49%), surgery (26%), neurology (10%), pneumology (4%), anesthesiology, resuscitation and intensive medicine (3%), oncology and radiotherapy (3%), orthopedics (2%), infectious diseases (1%), and psychiatry (1%). The number of hospital admissions via the department of emergency medicine of the University Hospital Hradec Králové is approximately 450 per month. The study followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement for the reporting of the study (von Elm et al., 2008). Inclusion and Exclusion Criteria The study included all patients who were admitted via the department of emergency medicine to any hospital ward of University Hospital Hradec Králové. Hospital admissions that took place between 12th August and 6th November 2018 were included. Visits to the department of emergency medicine without inpatient hospitalization were not included. Hospitalizations for diagnostic or elective surgical procedures for pre-existing conditions, hospitalizations with missing medical records, and hospitalizations taking less than 24 h were excluded. There were no exclusion criteria related to age or department. Patients hospitalized more than once were counted as separate cases. Data Collection The data collection process was retrospective. Data were obtained from electronic medical records and entered into a Microsoft Access database. The collected data included demographic characteristics, medication history, medical history, presenting complaint, admission diagnosis, laboratory values and results of clinical investigations, documented ADRs and information on medication adherence. Medications stated in medication history were counted as active substances. Ethics Committee Approval The study was approved by Ethics Committee of the University Hospital Hradec Králové and Ethics Committee of the Faculty of Pharmacy in Hradec Králové. Patient informed consent was not required due to the observational design of the study and the retrospective data collection process. No personal data that could identify the patients were collected. Methods of Assessment The methodology of DRA adjudication was adapted from the Drug-related admissions adjudication guide developed within the OPERAM project (Thevelin et al., 2018). 
The DRA identification process had the following steps: data abstraction, screening for potential ADEs causing or contributing to hospital admission, causality assessment, assessment of contribution to hospital admission, and the assessment of preventability. Potential ADEs that caused or contributed to hospital admission were identified and the causality of each ADE was assessed using WHO-UMC criteria. The modified WHO-UMC causality criteria (Klopotowska et al., 2013) described in the Drug-related admissions adjudication guide (Thevelin et al., 2018) were used to assess causality due to underuse. In addition, dosage adjustments were taken into account. ADEs with certain causal relationships had to fulfill the following criteria: 1) plausible time relationship to drug intake/dose increase, 2) plausible response to withdrawal/dose decrease, 3) cannot be explained by any disease, 4) definitive pharmacologically or phenomenologically, and 5) satisfactory rechallenge. ADEs with a probable causal relationship had to fulfill the following criteria: 1) reasonable time relationship to drug intake/dose increase, 2) clinically reasonable response to withdrawal/dose decrease, and 3) unlikely to be attributed to any disease. ADEs with possible causal relationships included events with a reasonable time relationship to drug intake/dose increase that could also be explained by disease, or for which information on dechallenge was lacking or unclear. ADEs with certain, probable, or possible causal relationships were considered confirmed ADEs. In the case of a confirmed ADE, the ADE contribution to hospital admission was assessed. According to the definition of DRA, hospitalizations due to ADEs that were the main reason for admission, as well as ADEs that were a contributory reason for admission, were considered a DRA. The main reason for admission was the primary cause of admission and was usually documented in the admission or discharge letter. A contributory reason for admission was a clinically significant contributory factor to admission, i.e., an event that worsened the main reason for admission or played a substantial role in the admission, but other factors also contributed significantly to the admission. Drug therapeutic failure without an evident cause, drug-related laboratory deviation without clinical manifestation, intentional intoxication, and an ADE that was present at hospital admission but not related to the reason for admission were not considered a DRA. The last step was the assessment of preventability. DRAs judged to be due to medication errors were deemed to be potentially preventable. Preventability was further assessed using Hallas criteria as definitely avoidable, possibly avoidable, not avoidable, and unevaluable (Hallas et al., 1990). Preliminary screening for potential ADEs was performed by a PhD candidate in clinical pharmacy (ZO), and the consensus assessment was performed by three board-certified clinical pharmacists (MM, JV, PS).

Classification
The identified DRAs were classified into two groups: DRAs related to treatment safety and DRAs related to treatment effectiveness. The Anatomical Therapeutic Chemical (ATC) classification was used to code medications and medication groups (WHOCC, 2022). Medications were coded up to the fifth level. The Medical Dictionary for Regulatory Activities (MedDRA) was used to classify clinical manifestations (BioPortal, 2021).
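Returning to the causality assessment: because the certain/probable/possible categories above reduce to a small set of yes/no criteria, the adjudication logic can be sketched as a simple decision rule. The sketch below is purely illustrative and assumes hypothetical field names; it is not part of the OPERAM guide or of this study's workflow, in which causality was judged by consensus of clinical pharmacists rather than by an automated rule.

```python
from dataclasses import dataclass


@dataclass
class AdeAssessment:
    """Simplified yes/no answers to the modified WHO-UMC criteria (illustrative only)."""
    plausible_time_relationship: bool   # to drug intake / dose increase
    response_to_dechallenge: bool       # improvement on withdrawal / dose decrease
    explained_by_disease: bool          # an equally likely non-drug explanation exists
    dechallenge_info_missing: bool      # information after stopping is lacking or unclear
    pharmacologically_definitive: bool  # recognized pharmacological phenomenon
    positive_rechallenge: bool          # event recurred on re-exposure


def causality_category(a: AdeAssessment) -> str:
    """Return 'certain', 'probable', 'possible' or 'unlikely', checking the
    most demanding category first, following the criteria described above."""
    if (a.plausible_time_relationship and a.response_to_dechallenge
            and not a.explained_by_disease
            and a.pharmacologically_definitive and a.positive_rechallenge):
        return "certain"
    if (a.plausible_time_relationship and a.response_to_dechallenge
            and not a.explained_by_disease):
        return "probable"
    if a.plausible_time_relationship and (a.explained_by_disease or a.dechallenge_info_missing):
        return "possible"
    return "unlikely"


def is_confirmed(a: AdeAssessment) -> bool:
    """ADEs judged certain, probable or possible were counted as confirmed ADEs."""
    return causality_category(a) in {"certain", "probable", "possible"}
```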
MedDRA®, the Medical Dictionary for Regulatory Activities terminology, is the international medical terminology developed under the auspices of the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH). Potentially preventable DRAs were classified according to the OPERAM DRA adjudication guide (Thevelin et al., 2018) into DRAs related to overuse, underuse, and misuse, as well as according to the Pharmaceutical Care Network Europe Classification V 9.1 (Pharmaceutical Care Network Europe Association, 2020) into DRAs concerning the following DRPs: drug selection, dose selection, treatment duration, patient-related, patient transfer-related and other (no or inappropriate outcome monitoring). An additional category was added: inappropriate lifestyle measures.

Outcome Measures
The main outcome measure was the prevalence of DRAs (defined as the number of unplanned DRAs divided by the total number of unplanned hospital admissions). A DRA was defined as a hospitalization due to an ADE, which is the main or contributory reason for hospital admission of a patient. The term ADE was defined as harm due to an ADR or a medication error related to overuse, underuse, or misuse of prescription and non-prescription medications (Thevelin et al., 2018). The other outcomes included: the prevalence of potentially preventable DRAs (defined as the number of potentially preventable DRAs divided by the total number of DRAs), the most common medication classes implicated in DRAs, the most common clinical manifestations of DRAs, the most common medication classes implicated in potentially preventable DRAs, the most common clinical manifestations of potentially preventable DRAs and preventability aspects of potentially preventable DRAs.

Sample Size Calculation and Data Analysis
The following formula (Daniel and Cross, 2013) was used to calculate the sample size: n = Z²p(1 − p)/d², where p stands for the expected prevalence, Z for the standard normal variable corresponding to the confidence interval (CI), and d for precision. A sample size of 1252 patients was required to estimate the prevalence of DRAs, based on a 95% CI, a precision level of 2%, and a prevalence of 15.4% [obtained from the latest systematic review (Ayalew et al., 2019)]. Categorical variables were expressed as absolute values and percentages. Continuous variables were expressed as medians with interquartile ranges.

Prevalence of Drug-Related Hospital Admissions and Sample Characteristics
The study included 1252 unplanned hospital admissions to the University Hospital Hradec Králové via the department of emergency medicine. The number of patients admitted to the hospital was 1202, as some patients were admitted more than once. A total of 195 hospital admissions were identified as drug-related. Of the 195 DRAs, 145 DRAs (74%) were related to treatment safety, and 50 DRAs (26%) were related to treatment effectiveness. The total prevalence of DRAs was 15.6% (95% CI 13.6-17.6). For the flow diagram, see Figure 1. The demographic and clinical characteristics of the study sample and the comparison of subgroups are shown in Table 1. Table 2 shows the comorbidities of the study sample and the comparison of subgroups. Table 3 shows the number of hospital admissions with corresponding medication classes in the patients' medication history and the comparison of subgroups.

Clinical Manifestation of Drug-Related Hospital Admissions
A total of 152 ADEs were related to treatment safety. More than one ADE was identified in 7 DRAs.
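As a quick cross-check, the sample-size formula and the headline prevalence quoted above can be reproduced with a few lines of arithmetic. This sketch is ours, not the authors' analysis code; in particular, the normal-approximation (Wald) confidence interval is an assumption, since the paper does not state which interval method was used, although it does reproduce the reported 13.6-17.6% bounds.

```python
import math


def sample_size(p: float, d: float, z: float = 1.96) -> int:
    """Minimum sample size to estimate a prevalence p with absolute precision d
    at the confidence level implied by z (n = z^2 * p * (1 - p) / d^2)."""
    return math.ceil(z**2 * p * (1 - p) / d**2)


def prevalence_ci(events: int, n: int, z: float = 1.96):
    """Point prevalence with a normal-approximation (Wald) confidence interval."""
    p = events / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, p - half_width, p + half_width


# Planning stage: expected prevalence 15.4%, precision 2%, 95% confidence.
print(sample_size(p=0.154, d=0.02))            # -> 1252 admissions required

# Observed result: 195 DRAs among 1252 unplanned admissions.
p, lo, hi = prevalence_ci(events=195, n=1252)
print(f"{p:.1%} (95% CI {lo:.1%}-{hi:.1%})")   # -> 15.6% (95% CI 13.6%-17.6%)
```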
Table 4 shows the MedDRA classification of ADEs related to treatment safety. Table 5 shows the classification of DRAs related to treatment effectiveness according to MedDRA. Table 6 shows the ATC classification of medication classes involved in DRAs related to treatment safety. A total of 254 medications were involved in ADEs related to treatment safety. The medication classes most frequently concerned the Cardiovascular system (27%), Blood and blood forming organs (26%), Antineoplastic and immunomodulating agents (16%), and Nervous system (11%). More than one medication was involved in 70 (48%) DRAs related to treatment safety. The most common medications involved in DRAs related to treatment safety included low-dose acetylsalicylic acid (n = 23), warfarin (n = 22), prednisone (n = 8), hydrochlorothiazide (n = 8), clopidogrel (n = 7), furosemide (n = 7), perindopril (n = 6), insulin (n = 6), amiodarone (n = 5), bisoprolol (n = 5), ibuprofen (n = 5), nadroparin (n = 5), and spironolactone (n = 5). Table 7 shows the ATC classification of the medication classes involved in 50 DRAs related to treatment effectiveness (N = 62). There were 9 DRAs related to treatment effectiveness in which more than one medication class was involved.

Causality Assessment
Causality was assessed for every event separately. There were 7 DRAs with more than one ADE contributing to hospital admission. According to the causality assessment, 51% of ADEs were probable, and 49% of ADEs were possible. No ADE was certain, as no event was a recognized pharmacological phenomenon, and rechallenge was almost never performed. ADEs with probable causality were events unlikely to be attributed to disease, and the response to withdrawal (or drug initiation) was clinically reasonable, while ADEs with possible causality included events that could also be explained by disease or for which the information on withdrawal (or drug initiation) was lacking or unclear. Table 8 shows the categories of causal relationships of ADEs involved in DRAs. Within DRAs related to treatment effectiveness, 46% of events had a probable causal relationship. Within DRAs related to treatment safety, 53% of events had a probable causal relationship.

Contribution to Hospital Admissions
In 55% of DRAs, ADEs only contributed to the admission, which means that the ADE was one factor among others that together resulted in hospitalization. The most common other factors were heart failure decompensation and infection. Table 9 shows the categories of contributions to hospital admissions.

Potentially Preventable Drug-Related Hospital Admissions
The overall potential preventability of DRAs was 51.3% (both definitely avoidable and possibly avoidable DRAs). We identified 50 potentially preventable DRAs related to treatment safety and 50 potentially preventable DRAs related to treatment effectiveness. In addition, 83 (43%) DRAs were not avoidable, and 12 (6%) DRAs were unevaluable. Table 10 shows the classification of preventable DRAs related to treatment safety. Regarding treatment safety, the most common preventability aspects included inappropriate drug selection, inappropriate monitoring, inappropriate dose selection, and inappropriate lifestyle measures. Table 11 shows the classification of preventable DRAs related to treatment effectiveness. The most common preventability aspect of DRAs related to treatment effectiveness was medication nonadherence.
Potentially preventable DRAs were also classified according to the Pharmaceutical Care Network Europe classification of DRPs (Supplementary Tables S1, S2).

Medications Involved in Preventable Drug-Related Hospital Admissions
Medications associated with potentially preventable DRAs related to treatment safety are listed in Table 12. The highest share of potential preventability among medication classes repeatedly involved in DRAs related to treatment safety was observed for Anti-inflammatory and antirheumatic products, Psycholeptics, and Drugs used in diabetes. For detailed information, see Table 13.

Clinical Manifestations Associated With Potentially Preventable Drug-Related Hospital Admissions
The most common clinical manifestations associated with potentially preventable DRAs related to treatment safety were Hypoglycemia (6), Gastroduodenal hemorrhage (6), Depressed level of consciousness (5), and Bradycardia (4). The MedDRA classification is shown in Table 14.

DISCUSSION
The aims of the study (prevalence of DRAs, medications involved in DRAs, clinical manifestations of DRAs, preventability of DRAs, and preventability aspects) are discussed separately.

Prevalence of Drug-Related Hospital Admissions
Epidemiological studies demonstrate that the burden of ADRs in both inpatient and outpatient settings is substantial (Bouvy et al.). As the population is aging and multimorbidity and polypharmacy are increasing, one would expect the prevalence of DRAs to rise as well. However, at the same time, safer alternatives are being used in clinical practice, high-risk medications are being withdrawn from the market, and preventative measures are being implemented in clinical practice. The prevalence of DRAs differs due to inconsistencies in the definitions and methods of DRA identification (Leendertse et al., 2010; Linkens et al., 2020; Laatikainen et al., 2021), the selected threshold of causality assessment (Wallerstedt et al., 2021), the patient population (Beijer and de Blaey, 2002; Leendertse et al., 2010; Laatikainen et al., 2021) and whether the denominator includes all admissions, only acute admissions, or specific wards (Leendertse et al., 2010). When comparing the prevalence of DRAs, one has to take all these factors into account. Due to the current heterogeneity, it is practically impossible to compare the prevalences of DRAs among different studies. We found that 15.6% of acute hospital admissions were drug-related. The prevalence of DRAs related to treatment safety was found to be 11.6%. If we excluded the cases with possible causality, the prevalence would be 6%. If we limited the finding only to ADEs with a probable causal relationship that were the main reason for hospital admission related to treatment safety, the prevalence would be 3%. The results of the subgroup analysis can be found in Supplementary Table S3. A noteworthy difference exists between age groups. Among older patients (65 years or older), the prevalence of DRAs was 18.6%, while the prevalence of DRAs among the rest of the patients was 10%. The prevalence of DRAs among patients aged 75 years or older was 20%. This study followed the OPERAM DRA adjudication guide (Thevelin et al., 2018), which was interested in DRPs that cause harm. To differentiate between potential DRPs and manifest DRPs, the term ADE was used for manifest DRPs. However, the term was also applied to DRPs related to treatment effectiveness.
One could argue that manifest DRPs related to treatment effectiveness should not be called ADEs, since an ADE is mostly defined as an injury resulting from the use of a drug, and the term ADE does not include failure to use a drug (Nebeker et al., 2004). Another confusion comes when comparing ADRs and ADEs. Some studies use the definition of an ADR as a noxious and unintended response to a drug which occurs at doses normally used, while others drop the part about normally used doses or use other definitions. Therefore, one must be cautious even when comparing studies with the same outcomes, as they might be using different definitions. There is a pressing need for further discussion and international consensus on this topic (Falconer et al., 2019).

Medications Implicated in Drug-Related Hospital Admissions
Several studies have revealed that DRAs are caused by commonly used medications. In our study, the most common medication classes involved in DRAs related to treatment safety were Antithrombotic agents, Antineoplastic agents, Diuretics, Corticosteroids for systemic use, Beta blocking agents, Anti-inflammatory and antirheumatic products, and Agents acting on the renin-angiotensin system. The OPERAM trial found Diuretics and Antithrombotic agents to be the most frequently involved or omitted medication classes in DRAs (Blum et al., 2021). Summarizing our findings on DRAs related to treatment effectiveness and DRAs related to treatment safety, we found the same medication classes (Antithrombotic agents and Diuretics) to be most frequently involved in DRAs. Regarding preventable DRAs related to treatment safety, the most common medication classes identified in our study were Anti-inflammatory and antirheumatic products, Antithrombotic agents, Drugs used in diabetes, Diuretics, Cardiac therapy, Psycholeptics, Analgesics, and Beta blocking agents. Similar findings were reported in a systematic review (Howard et al., 2007), which identified antiplatelets, diuretics, non-steroidal anti-inflammatory drugs (NSAIDs), anticoagulants, opioid analgesics, drugs affecting the renin-angiotensin system, and beta-blockers as the medication classes most commonly involved in preventable DRAs related to ADRs and overtreatment. Regarding preventable DRAs related to treatment effectiveness, the systematic review by Howard et al. identified diuretics, antiepileptics, drugs used in diabetes, and beta-blockers as most commonly involved in DRAs. A systematic review of prospective observational studies (Mongkhon et al., 2018) identified medications targeting the cardiovascular system, respiratory system, central nervous system, endocrine system, and medications used to treat infections as most commonly associated with hospital admissions due to medication nonadherence. In our study, the most common medication classes were Diuretics, Antithrombotic agents, Drugs used in diabetes, and Agents acting on the renin-angiotensin system.

Comparison With Other Countries
Compared to lower-income countries, we observed a lower prevalence of DRAs related to Antiinfectives for systemic use. Antiinfectives for systemic use were frequently involved in DRAs in Ethiopia (Angamo et al., 2017; Demessie and Berha, 2022), South Africa (Mouton et al., 2016), Nigeria (Adedapo et al., 2021), and India (Geer et al., 2016). Antiinfectives for systemic use were also frequently implicated in DRAs in Brazil (de Paula et al., 2012) during the period when the prescription-only requirement was not being met.
In higher-income countries, Antiinfectives for systemic use are among the top medication classes in the pediatric population. A review comparing ADR-related hospitalizations in developed and developing countries (Angamo et al., 2016) found that antiinfectives were more commonly reported to be associated with ADR-related admissions in developing countries than in developed countries. Compared to certain higher-income countries, Opioids were not among the most common medication classes involved in DRAs related to treatment safety. Opioids appear to be frequently involved in the United States (Budnitz et al., 2011; Poudel et al., 2017), Australia (Zhang et al., 2019), and Canada (Bayoumi et al., 2014). A possible explanation could be that strong opioids are not yet widely prescribed in the Czech Republic compared to these countries. However, hospital admissions due to tramadol were also present in our setting. Otherwise, the same medication classes continue to be involved in DRAs in different countries.

Clinical Manifestations of Drug-Related Hospital Admissions
Clinical manifestations of DRAs related to treatment safety most frequently concerned Gastrointestinal disorders (especially Gastrointestinal hemorrhage), Metabolism and nutrition disorders (especially Hyponatremia, Hypoglycemia), Blood and lymphatic system disorders (Bone marrow toxicity, Microcytic anemia), Nervous system disorders (Depressed level of consciousness), Infections and infestations (Increased infection susceptibility) and Cardiac disorders (Bradycardia). Gastrointestinal disorders and Microcytic anemia were associated with anticoagulants, antiplatelets, and NSAIDs. Hyponatremia was associated with the use of thiazide diuretics. Hypoglycemia was associated with the use of insulin and sulfonylureas. Bone marrow toxicity was associated with the use of antineoplastic agents. A depressed level of consciousness was associated with opioid analgesics. Increased susceptibility to infection was associated with immunosuppressants. Bradycardia was associated with beta-blockers, amiodarone, and digoxin. Clinical manifestations of DRAs related to treatment effectiveness most frequently concerned Cardiac disorders (particularly Heart failure symptoms), followed by Nervous system disorders (Ischemic stroke) and Metabolism and nutrition disorders (Diabetic complications). Heart failure symptoms were associated with the underuse of diuretics.

[Table excerpt (Medication classes / No. / Medications): Anti-inflammatory and antirheumatic products, 12: ibuprofen (4), diclofenac (3), meloxicam (2), nimesulide (2), naproxen (1); Antithrombotic agents, 10: warfarin (6), acetylsalicylic acid (3).]

Similarly, a study in the United Kingdom (Rogers et al., 2009) identified heart failure and stroke as the most frequent manifestations of DRAs due to undertreatment. In a study in Belgium (Somers et al., 2010), the most common symptom associated with drug therapy failures was dyspnea. A study from Australia (Kalisch Ellett et al., 2021) identified that chronic heart failure and osteoporosis were most frequently associated with potentially suboptimal medication-related processes of care related to the underuse of medications. However, few studies focus not only on DRAs related to treatment safety but also on DRAs related to treatment effectiveness.

Preventability of Drug-Related Hospital Admissions
We found that half of DRAs were potentially preventable.
However, in the subgroup of DRAs related to treatment safety, only 34% of DRAs were found to be preventable. A meta-analysis on the preventability of ADRs (Hakkarainen et al., 2012) found that half of ADRs among adult outpatients can be prevented. Like the prevalence of DRAs, the prevalence of preventable DRAs varies according to many factors. The inclusion of indirect drug-related causes of patient morbidity (errors of omission) and the average sample age are associated with a higher prevalence of preventable DRAs (Winterstein et al., 2002). Variations can also be explained by differences in study populations and data collection methods (Patel et al., 2017).

Preventability Aspects
A systematic review (Howard et al., 2007) identified problems with patient adherence to medication (33.3%) and prescribing problems (30.6%) as the most common underlying causes of preventable DRAs, followed by monitoring problems (22.2%). Taking the results for DRAs related to treatment safety and treatment effectiveness together, our study confirms these findings. In our study, 38% of preventable DRAs concerned medication adherence problems, 35% concerned prescribing problems (drug selection, dosage selection, treatment duration), and 17% concerned inappropriate monitoring. Furthermore, 1% were related to medication reconciliation problems and 9% were related to inappropriate lifestyle measures (fluid intake, food intake, alcohol consumption, and smoking). Similar underlying causes were also observed in a recent study on medication-related hospital readmissions (Uitvlugt et al., 2021), which found that 35% of preventable readmissions were due to prescribing errors, and 35% of preventable readmissions were due to nonadherence. Uitvlugt et al. pointed out that if patients present at the emergency department due to nonadherence, this will typically manifest itself as a worsening of their underlying disease, and only if the patient indicates that they are not adherent will this be recognized as an ADE. Additionally, Uitvlugt et al. found that 30% of preventable readmissions were due to transition errors. In this study, only one transition error was identified. However, our study did not assess readmissions. The explanation could be that not all transition errors have been revealed. Pharmacists could play a role in managing patient electronic medication records both in the hospital (medication reconciliation, discharge list) and in the pharmacy (over-the-counter medications) and potentially reduce the discrepancies in the medication history. Howard et al. suggested concentrating interventions on the drug groups that accounted for more than half of those associated with preventable DRAs (antiplatelets, diuretics, NSAIDs, and anticoagulants). In our study, Anti-inflammatory and antirheumatic products, Antithrombotic agents, and Drugs used in diabetes were the medication classes that accounted for more than half of those associated with preventable DRAs related to treatment safety. Diuretics, Antithrombotic agents, Drugs used in diabetes, and Agents acting on the renin-angiotensin system were the medication classes that accounted for more than half of those associated with preventable DRAs related to treatment effectiveness. Similarly, Schmiedl et al. (2018) suggested regular individualized medication reviews of the drugs most commonly implicated in preventable DRAs.
In this prospective multicenter, long-term study conducted in Germany (Schmiedl et al., 2018), the most frequently implicated drugs included digitoxin, low-dose acetylsalicylic acid, phenprocoumon, diclofenac, fast-acting insulin, glyburide (glibenclamide), spironolactone, torasemide, and intermediate-acting combined with fast-acting insulin. The most common preventability aspects included missing prevention strategies, relevant drug-drug interactions, and inappropriate drugs for age, body weight, and comorbidities. In the prospective multicenter study from the Netherlands (Leendertse et al., 2008), the medication classes associated most often with potentially preventable DRAs included antiplatelet drugs, oral anticoagulants, NSAIDs, and their combinations, antidiabetic drugs, and medications that act on the central nervous system. The most common medication errors associated with potentially preventable DRAs in the HARM study (Leendertse et al., 2008) included lack of a clear indication for the medication, nonadherence to the medication regimen, inadequate monitoring, and drug-drug interactions. Epidemiological studies on preventable DRAs are constantly needed since clinical practice is changing as new preventive measures are being implemented. Compared to the past, lower target serum digoxin concentrations are recommended. Digoxin concentrations ≥1.2 ng/ml are avoided, since they have been shown to increase cardiovascular mortality (Rathore et al., 2003) and other ADEs. Lower doses of spironolactone are used in practice, and potassium levels and renal function are monitored, following the publication that identified increased hyperkalemia-associated morbidity and mortality among patients treated with angiotensin-converting enzyme inhibitors and spironolactone (Juurlink et al., 2004). In the geriatric population, the goal is to avoid overly tight glycemic control, and sulfonylureas (especially glibenclamide) are prescribed less often. Academicians should assess potential options that exceed the obligatory demands. Additional efforts are still needed to identify evidence-based interventions during sick days. Recently, the absence of a sick day management plan was identified as being among the root causes of preventable ADEs (de Lemos et al., 2021). Similarly, in our study, DRAs were related to acute illness accompanied by dehydration. However, randomized controlled trials that assess the risks and benefits of temporarily stopping angiotensin-converting enzyme inhibitors/angiotensin II receptor blockers are still needed. In addition, there is a need for the development of safe and effective medications for chronic pain. On the one hand, NSAIDs contribute to DRAs related to the gastrointestinal tract. On the other hand, opioids pose a risk of opioid dependence and addiction and other ADEs. In the same way, the preventability aspects of DRAs related to treatment effectiveness will also change over time. Diseases affecting the cardiovascular system still place a huge burden on hospital admissions. Recently, SGLT2 inhibitors (empagliflozin or dapagliflozin) have been recommended in certain patients with heart failure. Underuse of these medications could become a new DRP that contributes to hospital admissions of patients with heart failure with reduced ejection fraction. In addition, target low-density lipoprotein cholesterol levels for cardiovascular disease prevention have been modified. Last but not least, addressing medication nonadherence might receive greater awareness in the future.
Interpretation
Recently, it was suggested that the widespread use of a signal detection cut-off in descriptive prevalence studies may have contributed to the perception that harmful drug treatment is the major problem of health care (Wallerstedt et al., 2021). Therefore, it should be underlined that medications often pose a risk in certain situations and many ADEs are multifactorial in nature. The underlying causes are also related to the behavior of the patients (medication nonadherence and inappropriate lifestyle measures). Wallerstedt et al. make another excellent point in stating that studies on DRAs in which the benefits of treatment are not captured may bring about the risk of unjustly discrediting pharmacotherapy. This view is supported by our finding that Antithrombotic agents and Diuretics were among the most common causes of DRAs related to treatment safety and simultaneously the most common causes of DRAs related to treatment effectiveness. Had we only included DRAs related to treatment safety, a layman not taking the benefit-risk balance into account could assume that these medications are rather harmful. On the one hand, the use of Antithrombotic agents was associated with bleeding events, but on the other hand, their underuse was associated with cases of thromboembolic stroke due to atrial fibrillation. Similarly, on the one hand, Diuretics were involved in electrolyte imbalances and prerenal failure. On the other hand, withdrawal of Diuretics was associated with decompensation of heart failure. Wallerstedt et al. point out that an adverse event can be the consequence of a prudent benefit-risk evaluation and correct drug treatment. These observations are confirmed by our finding that only a minority of DRAs related to treatment safety were preventable. We agree with Wallerstedt's statement that medication errors would probably be the primary interest from a health care perspective, as these events could possibly be prevented. However, we think that information on nonpreventable ADRs might also be valuable, as it could prompt pharmaceutical companies to invest in the development of safer alternatives. Wallerstedt et al. have also emphasized that problems that may just as well have been caused by the disease may be less relevant when quantifying a health care problem for health care decision making, and suggested restricting the reported events to those with at least a probable causal relationship with drug treatment. Therefore, it should be emphasized that the prevalence of DRAs identified in this study (15.6%) included events with possible causality, contributory reasons for admission, as well as ADRs that were not preventable. Our definition of DRA covered all manifest DRPs that were the main reason for, or contributed to, hospital admission. If we took into account only manifest DRPs that were the main reason for hospital admissions, the prevalence of DRAs would be 7%. If we took only manifest DRPs with a certain or probable causal relationship into account, the prevalence of DRAs would be 6%.

Strengths
The first strength of the study is that electronic medical records were used as a data source for DRA identification. It has been noted that spontaneous reporting or database methods of data collection underreport ADEs and ADRs compared to medical chart screening (Leendertse et al., 2010). Another advantage of using medical records is the possibility of detecting some cases of DRAs related to treatment effectiveness.
Electronic medical records capture important health information (e.g., presenting complaint, laboratory data, documented ADRs, previous falls, smoking status, smoking history, alcohol consumption) compared to administrative claims databases. The second strength of the study is the method of DRA identification. Study-specific definitions and assessments hinder the interpretation and comparison of different studies. This study followed a comprehensive guide, and both causality assessment and assessment of contribution to the hospital admission were performed. We did not limit the identification of DRAs to a trigger list, since trigger lists require constant updates whenever official guidelines are updated (Hedman, 2020). As described in the DRA adjudication guide (Thevelin et al., 2018), only manifest DRPs (DRPs that caused harm) that were the main reason or a contributory reason for hospital admission were considered a DRA. Drug-related laboratory deviations and ADEs that were present at admission but did not contribute to hospital admission were not included in the definition of DRA. However, they can be found in Supplementary Tables S4, S5. The third strength is that the study assessed potential preventability and identified medication classes involved in potentially preventable DRAs as well as preventability aspects. As suggested by Wallerstedt et al., preventable DRAs should be the main concern of research, as DRAs that can potentially be avoided are of interest for clinical practice. The fourth strength is the generalizability of the study. Most studies focus on specific departments. In this study, no exclusion criteria related to department were applied. An additional strength could be the categorization of DRAs into DRAs related to treatment safety and DRAs related to treatment effectiveness. Although the latest guidelines focused on manifest DRPs, they have not suggested differentiating between problems and causes. Perhaps it could be useful to classify DRAs in a hierarchical manner, separating causes from problems, as was suggested for DRPs (van Mil et al., 2004).

Limitations
The main limitation of this study is the retrospective data collection process. The gold standard method is a prospective evaluation of patient medical records, laboratory tests, and interviews with patients and care providers (Parameswaran Nair et al., 2018). The limitations related to retrospective data collection include the absence of medication reconciliation, patient interviews, and confirmation of medication adherence. Therefore, the finding that the prevalence of DRAs related to treatment effectiveness was not as high as the prevalence of DRAs related to treatment safety could be skewed, since no patient interview was conducted, and medication nonadherence was only taken into account when explicitly stated in the electronic medical records. The second limitation is the inclusion of cases with a possible causal relationship. Recently, Wallerstedt et al. suggested restricting reported events to those with at least a probable causal relationship with drug treatment (Wallerstedt et al., 2021). Although this suggestion differs from the OPERAM DRA adjudication guide (Thevelin et al., 2018) and the AT-HARM10 tool (Kempen et al., 2019), we have provided these results in Supplementary Tables S6-S10.
The essential distinctions between a probable causal relationship and a possible causal relationship are that in the latter case, there may be another equally likely explanation for the event, and/or there is no information, or there is uncertainty, with regard to what happened after stopping the drug. Therefore, a case is classified as possible not only when the event could also be explained by disease but also when the information on withdrawal is lacking. There are cases when a dechallenge cannot be performed (e.g., when the benefit of the medication is greater than the risks, or in the case of patient death). However, with the inclusion of a possible causal relationship, there is a possibility of a non-drug-related explanation of the symptoms being classified as an ADE. In our study, there were cases of hyperkalemia associated with a reduction in kidney function due to dehydration and events that were multifactorial (hyponatremia, fall, syncope). Coppes et al. (2021) have highlighted that the tools to identify DRAs have no scale to assess the medication-relatedness of hospital admission, so some cases might be identified as drug-related even though disease progression may play a larger role. Wallerstedt et al. indicated that medical doctors are more likely to attribute the hospital admission to exacerbation of disease, while pharmacists tend to attribute the event to ADEs (Wallerstedt et al., 2021). Therefore, there is a possibility of over-attribution of conditions to ADEs. Several other issues arise in applying causality assessment algorithms to adverse drug events. There is a need to update the algorithmic methods to allow applicability in all possible clinical scenarios, whether or not the drug use accords with the terms of the marketing authorization (Mascolo et al., 2017). The third limitation is the heterogeneity of electronic medical records. Variability in the completeness of electronic medical records between departments might affect the results. In our study, the share of falls among DRAs might be underestimated, as the electronic medical records from the department of surgery were insufficient to evaluate the causality of falls. The last limitation is the assessment of inter-rater reliability. Fleiss kappa indicated slight agreement (0.09) between the raters. However, only the cases preselected by the main investigator underwent consensus assessment, as the consensus assessment of each case would be time-consuming. However, given the fact that pharmacists tend to attribute adverse events to medications rather than to the disease, the risk of a potential miss will likely be small.

CONCLUSION
The total prevalence of DRAs to the University Hospital Hradec Králové via the emergency department was 15.6%. Of the 195 DRAs, 74% were related to treatment safety, and 26% were related to treatment effectiveness. If we took only manifest DRPs that were the main reason for hospital admissions into account, the prevalence of DRAs would be 7%. ADEs affecting Gastrointestinal disorders and Metabolism and nutrition disorders accounted for 38% of DRAs related to treatment safety. Cardiac disorders accounted for 32% of all DRAs related to treatment effectiveness. DRAs related to treatment safety most frequently involved Antithrombotic agents, Antineoplastic agents, Diuretics, Corticosteroids for systemic use, and Beta blocking agents, while DRAs related to treatment effectiveness most frequently involved Diuretics, Antithrombotic agents, Drugs used in diabetes, Agents acting on the renin-angiotensin system, and Lipid modifying agents.
The potential preventability of DRAs was 51%. Anti-inflammatory and antirheumatic products, Antithrombotic agents, and Drugs used in diabetes were most frequently associated with preventable DRAs related to treatment safety. The medication classes with the highest share of preventability included Anti-inflammatory and antirheumatic products, Psycholeptics, and Drugs used in diabetes. The most common preventable ADEs included gastroduodenal hemorrhage, hypoglycemia, and a depressed level of consciousness. The preventability aspects involved in potentially preventable DRAs related to treatment safety primarily included problems with drug selection, inappropriate monitoring, problems with dose selection, and inappropriate lifestyle measures. In contrast, medication nonadherence was the most common preventability aspect of potentially preventable DRAs related to treatment effectiveness.
Policy making has a significant meaning for each level of government. Nepal formally adopted a federal democratic system and the role of local governments has increased drastically since then. As a doorstep government of the people, local bodies completed their first five years of tenure under the federal system. Phedikhola Rural Municipality, the local government concerned in this study and one of the rural municipalities in Gandaki Province of Nepal, enacted several policies in the form of acts, rules, procedures, bylaws, directives and decisions. A few of the policies were also reviewed. The policies concerned with agriculture, education, health, infrastructure development, youth enterprising, good governance and the use of barren land gained overwhelming popularity. However, local governments have faced several problems in formulating such policies. The problem of localization of policies, the dominating role of bureaucracy, the lack of skilled human resources, problems in identifying local needs and a host of other elements stood as heavy barriers in the policy-making sector. The policy-making institution has not yet been found effective in delivering its assigned task. In this context, this article aims to explore the challenges of policy-making faced by the local governments of Nepal. It attempts to assess the challenges from the viewpoint of stakeholders, with a view to enhancing institutional capacity in the formulation and implementation of policies in the future. This article is qualitative, using a descriptive, exploratory and analytical design. Both primary and secondary sources were used for data collection. Key Informants Interview (KII), Focus Group Discussion (FGD) and field observation were intensively used.

INTRODUCTION
Local governments are the grassroots units of modern democracy and are considered the fundamental basis of the democratic system. An ordinary citizen can municipalities in matters related to the provisions of schedules 8 and 9 of the constitution. Following the spirit of the federal system, the Local Government Operation Act has been enforced since 2017. The act ensures the power to make policies on matters related to their functions and responsibilities, as mentioned in Section 11. Based on the constitutional and legal provisions, the local governments have completed their five years' tenure. Taking their first experience of the federal ruling system, the local governments spent their time mostly in the policy-making sector. The Phedikhola Rural Municipality has also enacted several policies in the form of acts, rules, procedures, bylaws and directives. Making policies by the local government under the federal system was a really difficult task, mainly because of their new experience in the field and other constraints (Baral, 2022). The operational aspect was found equally challenging due to limited resources, a lack of skilled human resources and other factors. In this context, the questions automatically arise: What fundamental challenges did the rural municipality face during policymaking? What was the condition of public participation? To what extent were the policies able to address local needs? Was the rural municipality able to formulate the required policies? What is the condition of implementation? This article attempts to answer these questions.

METHODOLOGY
Nepal formally entered into the era of the federal system after the promulgation of the Constitution of Nepal in 2015. Strengthening of the federal system depends on the proper functioning of local governments.
The local governments are empowered with the task of making the required policies to operate their business in the new political environment. The new system brought both opportunities and challenges. So, the primary objective of this article is to explore the challenges of policy making faced by the local governments of Nepal. It attempts to assess the challenges from the viewpoint of major stakeholders. The article is based on the challenges faced by Phedikhola Rural Municipality concerning policy making. The rural municipality lies in Syangja district of Gandaki Province of Nepal. Initially, it was a village development committee and was declared a rural municipality on 12 March 2017. The article covers a period of five years (2074-2079 B.S.). Since it tries to explore the challenges of policy-making, an exploratory design is used. Existing legal provisions and the real experiences of stakeholders in the policy-making process are analyzed, which is why an analytical design was also followed. The most essential data and information were collected from primary sources. Key Informants Interview (KII) and Focus Group Discussion (FGD) were applied, as these methods were found very useful for obtaining insightful information from diverse sections and departments (Gundumogula, 2020). Interviews were conducted with the village chairperson, the chief administrator, legislative committee members (the chairperson, a woman member and a member from a marginalized class), the secretary member and the legal advisor. The interviews were conducted on different dates from 9 July 2022 to 18 August 2022. Nine participants, including three ward chairpersons, three civil society members and three other stakeholders, were involved in a focus group discussion that was conducted on August 18, 2022. The secondary data were obtained from books, official documents, journal articles, magazines and previous research works.

CONCEPTUAL AND THEORETICAL REVIEW
The concept of 'public policy' is as old as the formation of society and state. It has gained widespread popularity with the increasing role of public activities concerned with health, education, employment, disaster management, good governance, trade, business and so on. But its systematic study began during the mid-twentieth century. Harold Lasswell systematically put forward the concept through the publication of 'Policy Sciences' in 1951 (Delon & Vogenbeck, 2007). Nowadays, it is recognized as a separate discipline covering a broad area of study and research. The word 'public' denotes the involvement of people in general rather than being limited to a particular group of people. And 'policy' generally signifies the plan and guidelines to attain certain goals. It is defined as "a statement by the government of what it intends to do about the public problem. Such statements can be found in the constitution, statutes, regulation, case law, decisions, or the behavior of government officials at all levels" (Birkland, 2011, p. 9). Further, policy is taken as a matter of whatever governments choose to do or not to do (Dye, 2013). Sapru (1994) takes public policy as a declaration of goals and objectives, a course of action and societal values, as in the definition: "Fundamentally, a public policy is a government action or proposed action directed at achieving certain desired goals or objectives" (Ikelegbe, 2006). In this sense, public policy can be considered a form of governing principle, plan, or course of action.
Public policy is equally concerned with administrative management, by which the government translates its political, developmental and managerial vision into programs and actions. It is concerned with the 'how' of government that aims to deliver 'outcomes' for achieving desired changes. "The policy is also explained as the code of conduct determined for the regulation of public behavior. It combines the basic decisions, commitments and actions made by those who hold or influence government positions or authority" (Gerston, 2010, p. 7). But it seems irrational to exclude individuals, society and other organizations that have close relations with public matters. So, Karl J. Fredrich is correct in defining 'public policy' as "a purposive course of action of a person, organization or government within a given environment to achieve a goal" (cited in Pandey, 2069, pp. 2-13). Policy formulation is equally concerned with the social sciences. Sapru (2010) opines that "the meaning of the word 'policy' is changing like other concepts of social science" (p. 24). The social sciences in general and political science in particular are concerned with policy making. Exploring its deep attachment with 'Political Science', Birkland (2011) describes policy as the management of power. All stakeholders have political relations and are concerned with the allocation of political power. Hence, "policy is concerned with the study of political relationship; that is, the study of the processes by which societies seek to allocate political power and the benefits of such power" (Birkland, 2011, p. 15). Thus, policy is taken as the guiding principle that contributes to effective governance. It is the tool or the instrument of transformation that builds the capacity to deliver social values. Overall decisions of the government are guided by policy, which in the long run determines the future of the community. Every institution is guided by certain policies. Such policies include acts, rules, procedures, by-laws, ordinances, decisions, directives, judgments, etc. Generally, these policies are formulated and implemented by the government. The legislature, an important organ of government, is the sovereign body to make such policies. These are the collective activities of the government, either local or central. After a government acknowledges the existence of a public problem and the need to do something about it, policymakers need to decide on some course of action. Making such a course of action is generally known as policy formulation (Howlett & Ramesh, 2003). Policy making requires a definite process. Harold Lasswell is of the opinion that policy making passes through intelligence, promotion, prescription, invocation, application, termination and appraisal (Jann & Wegrich, 2007). Dye (2010), on the other hand, opines that policy making consists of the following major steps: problem identification, agenda setting, policy formulation, policy legitimization, policy implementation and policy evaluation. However, problem identification, policy drafting, consultation with stakeholders, preparation of the final draft and passing of the draft are the major steps in policy making. Besides these, monitoring and evaluation are also included in the process. Policy making is the fundamental task of government. Hence, the 'Village Assembly' and the 'Village Executive' are the formal actors who play a pivotal role in policymaking in local municipalities. However, it is also taken as the outcome of the interaction of several stakeholders.
The political parties, NGOs, consumer groups and local elites have their role as non-state actors (Baral, 2022). Regulatory, restrictive, distributive and redistributive are the major types of such policies. The non-state actors focus on distributive policies, whereas the state actors are more concerned with regulatory and restrictive matters. The existing concept is supported by several theories. In this context, system theory, institutional theory, elite theory, group theory and rational choice theory can be applied for a better interpretation of this study. Out of the above theories, system theory can be employed in the explanation of the policymaking process in developing countries. This theory observes policymaking as a "political system responding to the demands arising from its environment" (Osman, 2002). According to this theory, there exist several actors and there is an interrelationship among the stakeholders. Eyestone (1971) rightly remarks that "public policy is the relationship of a government unit to its environment" (p. 18). A continuous interaction occurs within the internal and external environment. The system performs the 'input', 'conversion', 'output', and 'feedback' functions through which public policies are formulated (Easton, 1953). Public policy is taken as the response of a political system to demands arising from its environment. The policy is considered an output of the political system, which David Easton called an 'authoritative allocation of values'. Hence, the 'Village Assembly' is the system that is affected by several environmental factors like consumer groups, interest groups, unions, civil society, etc. Likewise, the institutional theory emphasizes the role of political institutions. The formal governmental structures such as legislatures, executives and courts have a pivotal role in policymaking and its implementation, as mentioned here: "A policy becomes a public policy only when it is authoritatively determined by government institutions" (Chakrabaty & Chand, 2016, p. 26). The government applies policies to all citizens and has a monopoly on the use of force in applying policy. The 'Village Assembly' and 'Village Executive' are legitimate institutions to formulate policies at the local level. They use coercive power to enforce those policies. On the other hand, elite theory presupposes the reflection of the preferences of the governing elite. This theory argues that "society is divided between the mass of people and a ruling minority, whereas the political power - the power to take and impose decisions valid to the whole society - always belongs to the latter" (Mariott, 2020). The elite represent the mass based on influence, prestige, resources, skill, knowledge and authority. The elite theory holds that society is composed of a powerful minority and a weak majority, as mentioned: "Although we often assert that public policy reflects the demands of the people, this may express the myth rather than the reality of democracy" (Dahal, 2017). Panchas in the Panchayat system, NGOs in the democratic era, and the 'all-party mechanism' (2006-2016) remained the dominating elites in policy making (Acharya, Dhungana, & Guragain, 2022). Still, social and party elites have a dominating role in policymaking. Likewise, various groups formed in society have equal influence in policy making.
The groups organized in the name of profession, occupation, religion and sometimes caste and ethnicity have sufficient influence in policy making, but all need to go through legal institutions. The weak sections of society are encouraged to be organized and put pressure upon the policymakers if they are ignored and neglected. The group has its own identity and follows collective interests. Policy making is taken as the management of group conflict (Pandey, 2069 B.S.). This theory is more workable with policy formation in a society of a pluralistic nature. In addition to these, rational choice/public choice and populist theories are equally workable with policy making and its implementation. Policy making itself is a difficult issue. Various interest groups play with bargaining, compromise, and accommodation. The problem is more acute in developing countries. Their limitations differ from those of developed ones. The policies effectively implemented by developed countries cannot be adopted in developing countries. The developing countries are often influenced by limited economic resources, political instability, social backwardness, unemployment and low standards of living (Osman, 2002). In Cairney's (2012) view, successful policy requires clear and consistent goals, enough resources, dedicated and skillful bureaucrats, dependency relationships, stakeholders' support, and the policymakers' approach to policy implementation. But most of the developing countries lack these conditions. They face similar challenges during policy implementation. Ineffective governance, corrupt civil servants, unstable politics, unnecessary political interference, aid dependency and problems of institutional acceptance of policy outcomes are the common challenges encountered during the implementation phase (Mulyanyuma, 2016). Policy implementation seems a crucial phase, as the success and failure of policy depend on the operational aspects of such policies.

FINDINGS AND INTERPRETATIONS

Policymaking at the Local Level: Constitutional and Legal Provisions
Nepal has a long practice of a unitary system of government. History witnessed that different forms of local governments existed in various phases of history. Policy making in the past was the sole business of the central legislature. The local governments used to exercise certain powers based on the principle of decentralization. But it is different in the federal system, where the power of government is divided among the federal, provincial and local governments. The People's Movement-II (2062/2063 B.S.) established a loktantric (democratic) system in the country. Nepal formally entered into a federal ruling system after the promulgation of the constitution in 2015. The political and administrative systems were also managed in a new structure with three tiers of government, i.e., federal, province and local (Article 56). Altogether, 753 local governments were also set up. Part 18 of the constitution has made the provision of a 'local legislature'. According to the constitution, "the legislative powers of the local level shall be vested in the 'Village Assembly'" (Article 221). The village assembly may make necessary laws on the matters outlined in the list contained in Schedule 8 and Schedule 9 (Article 226) of the constitution. Further, Article 57 of the constitution reads that the powers of the local level shall be vested in the matters enumerated in Schedule 8, and such powers shall be exercised under this constitution and the law made by the village assembly.
Federal, provincial and local legislatures have the power to make laws included in the concurrent lists. But the laws enacted by the village assembly concerning Schedule 9 should be consistent with federal and state law. If the law made by the village assembly is found inconsistent with federal and state law, it shall be invalid to the extent of such inconsistency (Article 57 (6) and (7)). Such constitutional provisions make it clear that the local assembly (municipal assembly or village assembly) has the power to make necessary policies under the conditions mentioned in the constitution. All such policies should respect federal law. In addition to the constitutional provisions, the Local Government Operation Act-2017 empowers rural municipalities to make necessary laws within their jurisdiction. According to Section 102, the village executive has every jurisdiction to make necessary rules, directives, procedures and by-laws. It is also mandatory that those policies be published in the local gazette. Policies cannot be implemented until they are published in the local gazette (Section 102 (5)). The rural municipality also has the responsibility to inform and send laws to the state and federal governments.

Challenges of Policy Making
Providing policy-making rights to the local government is an important contribution of the democratic system. It is said that "political decisions should be as close as possible to the society they concern" (Dahal, 2007, p. 11). Following this spirit, Article 221 of the constitution of Nepal has granted legislative powers to the village assembly on the matters mentioned in the constitution. Accordingly, Phedikhola Rural Municipality, one of the local governments of Gandaki Province, has made a breakthrough in the policy-making sector. It enacted 79 policies in the form of acts, rules, directives and procedures. Ten such policies were reviewed during the last five years (Phedikhola Rural Municipality, 2079 B.S.). But the stakeholders faced several challenges in making and implementing the policies. The researcher observed the following challenges during the field visits.

People's Participation
A proper functioning of local government provides better chances for all citizens to control and participate in the decision-making process (Dahal, 2017). Democracy has no meaning if a large portion of people is left behind in the process of decision-making affecting their life. Local governments help in promoting democratic ideals and values at the grassroots level. They need to involve people in the developmental effort as much as possible because real democracy lies in popular participation (Khanal, 1998). But it was found that the stakeholders' participation in policy making was very poor. Several policies required review as they were enacted in a hurry (Baral, 2020). The chief administrative officer also shared the same view. He opined that most policies were formulated in haste. On the one hand, it was very hard for the stakeholders to identify needs; on the other, they also ignored the process of making such policies. The legislative committee chairperson said that most stakeholders were budget-oriented. Their interest was to allocate a budget for the programs. Policy-level discussions were rarely held, though many formal programs were organized for policy-making purposes. A situation is often created in which policies are passed hastily in order to spend the budget of the rural municipality.
On the one hand, the members of the legislative committee do not possess expertise; on the other, they do not want to make their weakness public (Baral, 2022). Exploring the cause of poor participation, one of the members of the village executive said that most stakeholders lacked legal knowledge and an understanding of the technical aspects of policy making.

Ownership in Policy Formulation

The local governments are supposed to deliver people-friendly services. This requires better policies based on local needs, and public participation plays a crucial role in formulating such policies. But most policies were formulated on the basis of drafts given by the federal government. That ignored local needs, which ultimately reduced ownership of the policies (Baral, 2022). According to the chief administrative officer, stakeholders were more interested in policies of local origin, but participation was poor for the policies handed down by the center. It was found that regular interaction among the stakeholders helps in the localization of policies (Sayapatri, June 2020). Participation and regular interaction help localize policies, which develops a feeling of ownership. But most imported policies, like the 'Forest Act', failed to capture this spirit, and difficulties arose in their implementation.

Institutional Challenge

The legislature is the sovereign institution for making the required policies. This provision is made in the constitution, where the village assembly/municipal assembly works as the local-level legislature. There is also the provision of a 'legislative committee' to assist in the policy formulation process. A seven-member legislative committee had been formed, but most of its members were dissatisfied with their role. Most of the respondents agreed that the institutional arrangement existed only as a formality. They also opined that this was mainly because of the lack of technical knowledge in the policy making process, which ultimately decreased the committee's autonomy and accountability. Two members of the legislative committee from the privileged class openly accepted their weak role in the policy making process. Despite the several policies made, institutional capacity could not be mobilized to attain the desired role.

Systemic Barrier

Local legislatures do not enjoy the sole right to make the required policies in the matters related to Schedule 9 of the constitution. Many policies related to education, forest, health, agriculture, disaster management, etc. required a clear-cut federal policy. The chairperson was disappointed that the provincial and federal governments delayed the formulation of laws required by the local government. One of the ward chairpersons also said that local policies are sometimes found inconsistent with federal policy. The chief of the education section said that the 'Education Act' was reviewed not because of local need but because of the delay of the federal government in enacting the related act. This ultimately creates problems in the implementation of those policies.

Financial Barrier

The rural municipality has also introduced several acts related to finance, but barriers have appeared in the implementation of those policies. It is a common problem that the development budget cannot be allocated within the stipulated time (Chaudhary, 2019), which creates a direct challenge to the operational aspect of policy. It has become very challenging to introduce a policy-based program, said the chief of the planning section.
According to the account officer, there is competition among the people's representatives and the local elite to pull the budget into their own areas, which makes it hard to achieve economic goals through the adoption of better economic policies. Many programs have failed to achieve their targets due to a lack of the required budget. COVID-19 added further hardship to the implementation of policies.

Reactive Behavior

The constitution and the law have made institutional arrangements for policy formulation and implementation. But it was found that stakeholders have very little interest in the process of drafting and passing policies; their presence in the meetings was merely ceremonial. One of the legislative committee members from the Dalit community said that she played no role in making policy, yet the policies were enacted; the members were kept just for clapping. She also acknowledged her weakness of having little knowledge of the legal aspects. Stakeholders continue to pay maximum attention to matters related to service delivery, which may not be policy issues at all (Baral, 2022). Their concerns relate to roads, schools, allowances, drinking water, electricity and the construction of temples and public buildings. The concern is with budget allocation rather than with paying due attention to the formulation of policies. Sharing his experience, one of the ward secretaries complained that the people's representatives play little role in making and reviewing policies, but instead exert unnecessary pressure on the implementation of policies, especially in matters related to the allocation of the budget. Most stakeholders tend to show reactive behavior.

Passive Civil Society

Civil society is a people's organization, movement, or forum which works on a non-governmental basis. It is a protest against excessive institutionalism, bureaucracy, economic determinism and fundamentalism (Pokhrel, 1998). But most of our local governments hesitate to cooperate with civil society (Subedi, 2021). Sharing his bitter experience in an interview, the legal advisor said that the institutional authorities are reluctant to discuss policy issues and play a dual role in accepting the presence of civil society in the policy formulation and implementation process. The rural municipality lacks a structural mechanism to coordinate with civil society groups. The chairperson of the legislative committee praised the role of the 'Sayapatri Society', a leading non-governmental organization working in the policy sector. But the role of other groups, like tole (lane) organizations, women's groups, senior citizens' groups, youth clubs, etc., was found to be nominal in this field. The consumer groups are budget oriented rather than playing a creative role in the policy sector.

Bureaucratic Domination

Bureaucracy is mainly concerned with the 'how' of policy, that is, its operational aspect. Most legislative members were dissatisfied with the active role of bureaucratic personnel. The traditional central bureaucratic chain still plays a dominating role in providing services to the local people (Rijal, 2018). All the participants, including administrative personnel, agreed that the bureaucracy is more active in the policy making process. Exploring what compels this role of the bureaucracy, the chief administrative officer said that legislative committee members lack the required knowledge of the process and content of policy. He also added that most policies were ready-made and that the process was followed just as a formality.
In other cases, the required bills were prepared by the chiefs of the different sections and forwarded into the process, and there was no case of rejection of a bill submitted by a section chief. The only exception was that the stakeholders were somewhat more serious about policies concerning local issues: the 'one teacher one laptop' program, the 'agriculture subsidy' program, the 'agriculture ambulance', the 'use of barren land', etc. were the policies to which the stakeholders paid some attention. In the rest of the cases, the bureaucratic role was active in making and implementing the policies. In this context, the representatives were neither able to play their role nor willing to accept the bureaucratic domination easily. This led to the deterioration of ownership of the policies.

Shortage of Skilled Human Resources

The fundamental challenge encountered by the local government in the policy making process is the lack of legal expertise (Baral, 2022). The village assembly members could not exhibit their expertise due to a lack of legal knowledge, and even the 'legislative committee' faced the same problem. Sharing her bitter experience, the legislative committee chairperson said that the committee faced several obstacles in the policy making process due to a lack of legal knowledge; ignorance of the legal aspects discouraged the members' active participation in the policy making process. One of the ward secretaries shared her experience of irrational pressure imposed by elected representatives because of their lack of legal knowledge. The same problems were seen in the review of existing policies. This sort of ignorance ultimately turns into reactive behavior that widens the distance between stakeholders. Though they were assisted by legal advisors, the challenges became more acute because of the lack of legal expertise among the policymakers themselves.

Besides the above-mentioned challenges, policy making in the local government is strongly affected by the irregular transfer of civil servants, especially of the chief administrative officers. The local government also has a culture of an 'all party mechanism', but the political parties, especially the ruling one, were found reluctant to build consensus on matters of public concern (Baral, 2022). These trends create a danger that the upcoming representative body will not take ownership of the policies, and there may be problems of implementation. Likewise, a number of bills were ready-made and failed to address local problems, which created a problem of ownership of the policies at the local level. The guardian institutions (the federal and provincial governments) also delayed providing instructions in the required fields. The role of interest groups was equally dominant: their main focus was on making policies related to the distribution of resources, and policies related to planning and budget allocation were influenced by the local and bureaucratic elite. The mandatory provision of publishing policies in the local gazette was duly followed, but the gazette was not accessible to the general public. Despite all the above challenges, Phedikhola Rural Municipality has introduced a substantial body of policies, which is a breakthrough in the history of the local-level policy making sector.

CONCLUSION AND RECOMMENDATIONS

Nepal formally adopted the federal system after the promulgation of the constitution of Nepal. The constitution and other acts have empowered the local governments to make the required policies.
The 'village assembly' and the 'municipal assembly' have the right to make necessary policies regarding the matters listed in Schedules 8 and 9 of the constitution. Phedikhola Rural Municipality has made a significant achievement in the policy making process: it enacted seventy-nine policies during its last five years of tenure. Though many of the policies were ready-made, the municipality also exhibited originality in the field of policy making. Most policies were distributive and regulatory. 'One teacher, one laptop', the 'agricultural ambulance', the 'youth enterprise program', the 'use of barren land', the 'agricultural subsidy', 'unions and associations', 'cooperatives' and several other policies have gained popularity. But policy making and its implementation could not remain free from problems. The major stakeholders could not play the desired role due to a lack of legal knowledge. The 'legislative committee', which enjoys exclusive power, was overshadowed by bureaucratic influence, and participation turned into a formality. The institutional and social elites were equally active in making policies in their own favor, and there is little practice of building consensus among the stakeholders, especially the party leaders. Many policies were imposed from the center and failed to reflect local needs. The federal and provincial governments have delayed introducing the mother policies that directly influence the policy making process at the local level. It was found that the nature of the challenges is broadly similar across developing countries; the federal system is a new practice, and it is natural that such challenges arise. These barriers can be minimized by providing comprehensive training to the policymakers, making provision for legal advisors, and reducing bureaucratic dominance. It is also necessary to focus on local needs and public participation, and to press the federal and provincial governments to enact the mother laws. The upcoming government has the responsibility to review the policies in a timely manner, and the focus should be on localizing the policies to create ownership among the stakeholders.
Regulation of Tyrosinase Processing and Trafficking by Organellar pH and by Proteasome Activity* Pigmentation of the hair, skin, and eyes of mammals results from a number of melanocyte-specific proteins that are required for the biosynthesis of melanin. Those proteins comprise the structural and enzymatic components of melanosomes, the membrane-bound organelles in which melanin is synthesized and deposited. Tyrosinase (TYR) is absolutely required for melanogenesis, but other melanosomal proteins, such as TYRP1, DCT, and gp100, also play important roles in regulating mammalian pigmentation. However, pigmentation does not always correlate with the expression of TYR mRNA/protein, and thus its function is also regulated at the post-translational level. Thus, TYR does not necessarily exist in a catalytically active state, and its post-translational activation could be an important control point for regulating melanin synthesis. In this study, we used a multidisciplinary approach to examine the processing and sorting of TYR through the endoplasmic reticulum (ER), Golgi apparatus, coated vesicles, endosomes and early melanosomes because those organelles hold the key to understanding the trafficking of TYR to melanosomes and thus the regulation of melanogenesis. In pigmented cells, TYR is trafficked through those organelles rapidly, but in amelanotic cells, TYR is retained within the ER and is eventually degraded by proteasomes. We now show that TYR can be released from the ER in the presence of protonophore or proton pump inhibitors which increase the pH of intracellular organelles, after which TYR is transported correctly to the Golgi, and then to melanosomes via the endosomal sorting system. The expression of TYRP1, which facilitates TYR processing in the ER, is down-regulated in the amelanotic cells; this is analogous to a hypopigmentary disease known as oculocutaneous albinism type 3 and further impairs melanin production. The sum of these results shows that organellar pH, proteasome activity, and down-regulation of TYRP1 expression all contribute to the lack of pigmentation in TYR-positive amelanotic melanoma cells. Pigmentation in mammals is regulated by a small population of specialized cells (termed melanocytes) that express a limited number of specific proteins that act in a cascade to synthesize the biopolymer melanin, which is then deposited in discrete membrane-bound organelles known as melanosomes (reviewed in Refs. [1][2][3][4]. Those melanogenic proteins can be structural proteins, such as gp100, or enzymatic proteins, such as tyrosinase (TYR), 1 tyrosinase-related protein 1 (TYRP1), and tyrosinase-related protein 2 (DCT). Interestingly, each of those melanosomal proteins (gp100, TYR, TYRP1, and DCT) represents a specific immune target for transformed melanocytes (5)(6)(7)(8)(9), known as malignant melanoma, and thus their biological and clinical importance far exceeds their function in producing melanin. The loss of pigmentation in melanomas is very common in advanced and in metastatic lesions because of the dysfunction of those melanocyte-specific proteins. Regulation of melanogenesis at the transcriptional level usually involves microphthalmia-associated transcription factor (MITF), a basic helix-loop-helix transcription factor known as a master regulator of melanogenic gene expression (10,11). MITF has been shown to regulate the expression of all known melanosomal proteins (12)(13)(14). 
However, post-transcriptional processes can also result in the amelanotic phenotype in melanocytes that express normal amounts of wild-type TYR; this level of regulation might be important in determining constitutive skin color and the pigmentation phenotype of malignant melanoma (15)(16)(17)(18)(19). The sum of those results suggests that in melanocytes, TYR does not necessarily exist in a catalytically active state and that its activation could be an important control point for melanin biosynthesis. TYR is a type I membrane glycoprotein with seven potential N-glycosylation sites (20 -22). TYR catalyzes the rate-limiting reactions in melanin synthesis, converting tyrosine to DOPAquinone and subsequently oxidizing 5,6-dihydroxyindole to indole-5,6-quinone (23)(24)(25). After the translation of TYR and its insertion into the endoplasmic reticulum (ER), it undergoes some initial glycosylation and maturation and then enters the Golgi network, where it is processed further (26,27). TYR is eventually transported from the trans-Golgi network (TGN) to its target organelle, the melanosome, by a vesicular transport system (28 -30). Interestingly, TYRP1 and DCT, two proteins closely related to TYR, undergo similar processing and sorting, although the sorting vesicle system used to deliver them from the TGN to melanosomes is presumably unique for each of those proteins (31,32). Even gp100 is processed through the ER, but is transferred to early melanosomes directly from the ER or from the early Golgi (33) in a manner distinct from the TYR-related proteins. The processing of TYR in the ER requires the presence of the chaperone calnexin, which increases the ER retention time for TYR necessary for its binding of copper and its conformational folding (34 -36). The most common mutations of TYR result in oculocutaneous albinism type 1 (OCA1), which is associated with the ER retention of the protein, presumably as the result of enzyme misfolding (27,37,38); the misfolded TYR is then degraded by proteasomes. In OCA3, normal TYR is produced, but mutations in TYRP1 result in the retention of the wild-type TYR in the ER and its proteolysis by proteasomes; TYRP1 seems to act as a chaperone for TYR in the ER (27). Thus the reduction in TYR function in amelanotic melanoma cells could be mediated by the quality control system of the ER, because selective retention in the ER and subsequent degradation by proteasomes occurs in several genetic diseases (39 -41). The sum of these studies underscore the fact that proper folding and processing of TYR in the ER is crucial for its enzymatic activity, delivery to melanosomes, and subsequent melanin synthesis. Thus, disruption of melanin synthesis may result not only from mutations at the TYR locus but also from mutations at a number of other loci involved in TYR processing and transport. Melanosomes are known to be acidic organelles that when mature can have a pH as low as 4.0 (42,43). It has been assumed that this low melanosomal pH facilitates melanogenesis (44 -46). Recently, the activation of melanogenesis by selective vacuolar type proton pump inhibitors, bafilomycin A1 (Baf) and concanamycin A (CCM), was shown in amelanotic human melanoma cells and in mouse melanoma cells, which express TYR but do not produce pigment (47). Further, Baf and CCM induce melanin synthesis in pink-eyed dilution gene (p)null melanocytes by affecting early TYR processing and trafficking rather than by simply affecting activity at the melanosomal level (48,49). Fuller et al. 
(50) showed that melanocytes derived from caucasian donors respond to those agents by producing melanin after activation of TYR, whereas melanocytes derived from black donors were refractory to those agents, suggesting that intracellular pH might regulate constitutive skin color. Baf and CCM, macrolide antibiotics that at low concentrations specifically inhibit vacuolar-type proton ATPases (vATPase) (51), have been commonly used to neutralize acidic compartments such as endosomes, in which the low luminal pH is known to affect receptor-ligand interactions and protein sorting (49,52,53). Recent studies have shown that Baf inhibits the delivery of endocytosed material from endosomes to lysosomes (54 -56). Baf was shown to function by inhibiting the formation of carrier vesicles operating between early and late endosomes (54) and to block the association of a subset of coatamer subunits with early endosomal membranes (57,58). In this study, we examined the expression, processing, and trafficking mechanisms of various melanosomal proteins (TYR, TYRP1, DCT, gp100, and MART1) in amelanotic and in melanotic melanoma cells using immunological, biochemical, and molecular approaches to characterize melanosomal localization in early melanosomes, endosomes, coated vesicles, Golgi apparatus, and ER. Those organelles hold the key to the understanding of biogenesis and pigment synthesis of melanosomes and how that might be disrupted in unpigmented melanocytes that express normal levels of melanogenic proteins. We confirm that the processing of TYR is altered in amelanotic melanoma cells, and we show that its retention in the ER can be corrected in the presence of protonophore or proton pump inhibitors, which increase the pH of intracellular organelles, after which TYR is transported correctly from the ER to the Golgi, further glycosylated, and then sorted to melanosomes. The sum of these results shows that organellar pH, proteasome activity, and the down-regulation of TYRP1 expression may all contribute to the lack of pigmentation in TYR-positive amelanotic melanoma cells. Subcellular Fractionation-For purification of melanosomes, we used the protocol originally described in Ref. 33 with modifications. Briefly, confluent monolayers of SK-MEL-28 cells were harvested with 0.05% trypsin, 0.53 mM EDTA (Invitrogen) and washed once in 0.25 M sucrose by centrifugation at 1,000 ϫ g for 5 min at 4°C. Specimens were then homogenized on ice using 20 strokes of a Dounce glass:glass homogenizer and centrifuged at 1,000 ϫ g for 10 min at 4°C. That supernatant was recovered and further centrifuged at 19,000 ϫ g for 30 min at 4°C. The pellet was resuspended in 2.0 M sucrose and layered at the bottom of a 1.0 -2.0 M sucrose step (1.0, 1.2, 1.4, 1.5, 1.6, 1.8, and 2.0 M) gradient. The gradient was centrifuged at 100,000 ϫ g in a Beckman SW 28 swinging-bucket rotor for 1 h at 4°C, and the various layers of the fraction were carefully recovered. The 1.0 M sample was then layered in the middle of a 0.8 -1.4 M sucrose step (0.8, 1.0, 1.2, and 1.4 M) gradient. That gradient was again centrifuged at 100,000 ϫ g for 1 h at 4°C and the 0.8 and 1.0 M fractions were carefully recovered. The various layers of fractions were analyzed by Western blotting using antibodies as indicated in the figure legends. We used a standard technique (64) to purify endosomes and coated vesicles. 
Briefly, SK-MEL-28 cells were harvested and washed once in MES buffer (0.1 M sodium MES, pH 6.5, 1 mM EGTA, 0.5 mM MgCl 2 , 0.02% sodium azide, and a protease inhibitor mixture) by centrifugation at 1,250 ϫ g for 5 min at 4°C. Specimens were then homogenized on ice using 20 strokes of a glass:glass homogenizer and centrifuged at 19,000 ϫ g for 40 min at 4°C. The supernatant was recovered and centrifuged at 120,000 ϫ g in a Sorvall T1270 rotor for 70 min at 4°C. That pellet was again homogenized on ice using 10 strokes of a glass: glass homogenizer in Percoll-sucrose-MES buffer (12.5% Percoll, 12.5% sucrose, MES) and centrifuged at 35,000 ϫ g for 40 min at 4°C. The pellet was recovered, resuspended in buffer, and used as the endosomerich fraction. The supernatant was recovered and was again centrifuged at 82,000 ϫ g for 70 min at 4°C; the resulting pellet was resuspended in buffer and used as the coated vesicle fraction. For purification of ER and Golgi apparatus, we used a previously published method (65). Briefly, SK-MEL-28 cells were harvested and washed once in 0.25 M sucrose by centrifugation at 1,000 ϫ g for 5 min at 4°C. Specimens were then homogenized on ice using 20 strokes of a glass:glass homogenizer and centrifuged at 1,000 ϫ g for 10 min at 4°C. The supernatant was recovered and centrifuged at 2,000 ϫ g for 30 min at 4°C. That supernatant was recovered and was further centrifuged at 105,000 ϫ g in a Sorvall T1270 rotor for 60 min at 4°C. The pellet was resuspended in 1.35 M sucrose and layered in the middle of a 0.8 -2.1 M sucrose step (0.8, 1.0, 1.2, 1.35, and 2.1 M) gradient, which was then centrifuged at 100,000 ϫ g in a Beckman SW 28 swinging-bucket rotor for 6 h at 4°C. The 0.8 and 1.0 M layers were carefully recovered and used as the Golgi-rich fraction, whereas the layers of the 1.35 and 2.1 M fractions were recovered and used as the ER-rich fraction. Electron Microscopy-Electron microscopy was performed as reported previously (33). Briefly, cells were harvested and collected by centrifugation at 4°C. After several washes in PBS, cells were fixed overnight at 4°C in 2% glutaraldehyde, 2% paraformaldehyde in 0.1 M sodium cacodylate buffer (pH 7.3). The samples were then stored in PBS at 4°C until they were embedded in epoxy resin. Thin sections were cut, stained with uranyl acetate and lead citrate, and then examined with a Zeiss EM 912 EX electron microscope. Immunofluorescence Microscopy-Cells were plated in 2-well Lab-Tek chamber slides (Nalge Nunc International, Naperville, IL), incubated for 3 h at 37°C in minimum essential medium supplemented with or without 10 M Mon, 25 nM Baf, or 25 nM CCM, and stained by double indirect immunofluorescence methods as described previously (33,66). After three washes in PBS, the cells were fixed in 4% paraformaldehyde for 15 min at 4°C. After three further washes in PBS, the cells were permeabilized with 0.01% Triton X-100 for 3 min at room temperature (for Vti1b, syntaxin 8, HMB45, T311) or with 100% methanol for 15 min at 4°C (for Bip/GRP78, EEA1) and then blocked with 5% normal goat serum and 5% normal horse serum for 1 h at room temperature. The cells were incubated with a mixture containing a polyclonal and a monoclonal antibody (at the dilution noted in the figure legends) overnight at 4°C. 
After three washes in PBS, the polyclonal antibodies were reacted with goat anti-rabbit IgG labeled with Texas Red (1:100), and the monoclonal antibodies were reacted with horse anti-mouse IgG labeled with fluorescein (1:100) (Vector, Burlingame, CA) followed by nuclear counterstaining with DAPI (Vector). Reactivity was classified into three categories according to whether they showed green, red, or yellow fluorescence. The latter was indicative of colocalization of the red and green fluorescence signals. All preparations were examined with a confocal microscope (LSM 510, Zeiss) equipped with HeNe, argon, and krypton laser sources. Western Blotting and Glycosidase Digestion-Cell extracts were prepared using the M-PER mammalian protein reagent (Pierce) containing complete protease inhibitor mixture (Roche Applied Science), and protein concentrations were measured using the BCA protein assay (Pierce). Cell extracts were mixed with 2ϫ Tris-glycine SDS sample buffer (Invitrogen) supplemented with 5% 2-mercaptoethanol and boiled for 5 min. Samples (10 g of protein/well) were then separated on 8% or 14% SDS-polyacrylamide gels (Invitrogen) and transferred electrophoretically to Immobilon-P transfer membranes (Millipore, Bedford, MA). The blots were blocked in 5% nonfat dry milk in TBS-T (10 mM Tris-HCl, 150 mM NaCl, 1% Tween 20, pH 7.2) overnight at room temperature and then incubated with primary antibodies diluted (as noted in the figure legends) in 5% nonfat dry milk in TBS-T for 1 h at room temperature. After three washes with TBS-T, the blots were incubated in horseradish peroxidase-linked anti-rabbit or anti-mouse whole antibodies (1:1000) (Amersham Biosciences) in 5% nonfat dry milk in TBS-T for 1 h at room temperature. After three washes with TBS-T, the immunoreactivities of the antibodies were detected using an ECL-plus Western blotting Detection System (Amersham Biosciences) according to the manufacturer's instructions. The BenchMark prestained protein ladder (Invitrogen) was used to establish the molecular weight curve for the Western blotting. Cell extracts (2 g of protein) were digested with 1000 units of endoglycosidase H (Endo H) or Peptide:N-glycosidase F (PNGase F) (New England Biolabs, Beverly, MA) for 3 h at 37°C. After the digestion, cell extracts were mixed with Tris-glycine SDS sample buffer (2ϫ) (Invitrogen) supplemented with 5% 2-mercaptoethanol and boiled for 5 min. Samples were subjected to SDS-PAGE, and immunoreactive bands were detected by Western blotting using the ␣PEP7h antibody. Metabolic Labeling and Immunoprecipitation-Metabolic labeling and immunoprecipitation experiments were performed as described previously (27). Cells were incubated in Met/Cys-free Dulbecco's modified Eagle's medium containing 10% dialyzed fetal bovine serum (Invitrogen) for 30 min at 37°C and were then labeled for 30 min at 37°C with 0.5 mCi of (ϭ[ 35 S]Met/Cys (Amersham Biosciences) in Met/Cysfree Dulbecco's modified Eagle's medium containing 10% dialyzed fetal bovine serum. For pulse-chase experiments, cells were pulsed and chased for specific periods at 37°C (as detailed in the figure legends) in minimum Eagle's medium containing 1 mM unlabeled methionine supplemented with or without 10 M Mon or 25 nM Baf. Cells were harvested and solubilized overnight at 4°C in immunoprecipitation lysis buffer (50 mM Tris-HCl, 150 mM NaCl, 1% Nonidet P-40, 0.01% SDS, pH 7.4, containing complete protease inhibitor mixture). 
Cell extracts were incubated with 40 µl of normal rabbit serum for 2 h at 4°C with continuous mixing and were then incubated with 150 µl of protein G-Sepharose 4 Fast Flow (Amersham Biosciences) for 2 h at 4°C with continuous mixing. For immunoprecipitation, the supernatants were collected by centrifugation at 4°C and incubated with 10 µl of αPEP7h or normal rabbit serum as a control for 2 h at 4°C with continuous mixing. The immunocomplexes were separated by incubation with 20 µl of protein G-Sepharose 4 Fast Flow for 2 h at 4°C with continuous mixing and were further washed six times with immunoprecipitation lysis buffer. The final pellets were mixed with Tris-glycine SDS sample buffer (2×) supplemented with 5% 2-mercaptoethanol and boiled for 5 min. Samples were separated on 8-16% SDS-PAGE, and the separated protein bands were visualized by fluorography using Enlightning (PerkinElmer Life Sciences).

Expression of Melanosomal Proteins and the Ultrastructure of SK-MEL-28 and MNT-1 Melanoma Cells-We used Western blotting to examine the expression of melanosomal proteins in amelanotic SK-MEL-28 cells and in pigmented MNT-1 melanoma cells. As shown in Fig. 1A, MNT-1 cells were positive for all five known melanosomal proteins (TYR, TYRP1, DCT, gp100, and MART-1). SK-MEL-28 cells were positive for TYR, gp100, DCT, and MART-1, but showed only a barely detectable band of TYRP1. Reverse transcription-PCR analysis of those melanogenic genes correlated with the protein expression patterns and confirmed that TYRP1 is transcribed at very low levels in SK-MEL-28 cells (Fig. 1B). Quantitation of the bands after normalization against G3PDH revealed that the TYR mRNA level in SK-MEL-28 cells is about 40% of that detected in MNT-1 cells, whereas the level of TYRP1 mRNA in SK-MEL-28 cells is about 10% of that detected in MNT-1 cells. Kobayashi et al. (67) showed that TYRP1 plays an important role in stabilizing TYR, and mutations affecting TYRP1 function have been shown to result in hypopigmentation such as that found in OCA3 (27,68). The mobility patterns in the Western blots show that TYR in SK-MEL-28 cells appears as the immature 60-kDa form rather than the fully glycosylated 70-85 kDa form detected in MNT-1 cells, suggesting that TYR may then be degraded by proteasomes as occurs in OCA3 melanocytes. Consistent with this expectation, TYR was consistently less abundant in SK-MEL-28 cells compared with MNT-1 cells. These results demonstrate clearly that TYR in amelanotic SK-MEL-28 melanoma cells is not glycosylated correctly compared with pigmented MNT-1 melanoma cells. The ultrastructure of these cells is shown in Fig. 1C. Only stage I and II (unmelanized) early melanosomes were seen in the amelanotic SK-MEL-28 melanoma cells, with none in more advanced stages, whereas MNT-1 cells also contained melanized stage III and IV melanosomes in addition to those early stage melanosomes.

Subcellular Distribution of TYR in SK-MEL-28 Cells and MNT-1 Cells-To investigate the subcellular localization of TYR in SK-MEL-28 cells and in MNT-1 cells, we used immunohistochemical staining with αPEP7h (which is specific for TYR) to compare its distribution in the ER (using Bip/GRP78 as a marker), the Golgi (using Vti1b), early endosomes (using EEA1), late endosomes (using syntaxin 8), stage I melanosomes (using αPEP13h), and stage II melanosomes (using HMB45). In the merged images shown in Fig. 2, yellow indicates colocalization of the two signals.
As shown in Fig. 2A, TYR in the pigmented MNT-1 melanoma cells colocalized with markers for the ER, Golgi, early and late endosomes, and early (stage I and II) melanosomes. In contrast, in SK-MEL-28 cells (Fig. 2B), the majority of TYR colocalized with the ER in the immediate perinuclear area, and very little TYR was found in the Golgi, in early or late endosomes, or in melanosomes. Thus, most of the TYR in amelanotic melanoma cells was retained in the ER, although a minor amount was detectable in stage I melanosomes. TYR trafficking to early melanosomes is therefore dramatically reduced in amelanotic SK-MEL-28 melanoma cells, which is consistent with the lack of production of melanin therein.

FIG. 2. Subcellular distribution of tyrosinase in SK-MEL-28 and MNT-1 cells. SK-MEL-28 and MNT-1 melanoma cells were fixed in 4% paraformaldehyde and stained with antibodies (αPEP7h at 1:20, αPEP13h at 1:20, Bip/GRP78 at 1:10, Vti1b at 1:10, EEA1 at 1:40, syntaxin 8 at 1:10, or HMB45 at 1:10). The polyclonal antibodies were reacted with goat anti-rabbit IgG labeled with Texas Red (1:100), and the monoclonal antibodies were reacted with horse anti-mouse IgG labeled with fluorescein (1:100) followed by nuclear counterstaining with DAPI. Reactivity was classified into three categories according to whether they showed green, red, or yellow fluorescence. Please note that TYR is stained red in all panels except where noted with asterisks, where it is stained green.

Distribution of Melanosomal Proteins in Subcellular Fractions of SK-MEL-28 Cells-We used Western blotting to examine the distribution of melanosomal proteins in purified subcellular fractions of pigmented MNT-1 cells (not shown) and amelanotic SK-MEL-28 cells (Fig. 3). Early melanosomes can be separated efficiently by differential and sucrose density gradient centrifugation as detailed under "Materials and Methods." The ER, Golgi, endosome, and coated vesicle fractions were purified by the fractionation techniques detailed under "Materials and Methods." Full-length gp100 (100 kDa, as detected by αPEP13h) was distributed in the ER and Golgi fractions and in the 0.8 and 1.0 M sucrose fractions, whereas processed gp100 (~35 kDa, as detected by HMB45) was found in the 1.4 to 1.8 M sucrose fractions (data not shown). Note that gp100 was not present in the coated vesicle or endosomal fractions (contrast with TYR and MART-1 as discussed below). We have recently demonstrated (33) that stage I melanosomes are recognized by αPEP13h and that stage II melanosomes are recognized by HMB45 following the proteolytic cleavage of gp100. From these reactivity patterns, it is clear that the 0.8 M sucrose fraction contained only stage I melanosomes, that the 1.0 M and 1.2 M sucrose fractions contained stage I and II melanosomes, and that the 1.4-1.8 M sucrose fractions contained stage II melanosomes; this is consistent with the contents of those fractions as previously demonstrated by electron microscopy (33). TYR and MART-1 were detected in the ER, Golgi, coated vesicle, and endosome fractions and in stage I melanosomes of SK-MEL-28 cells. This reactivity pattern is identical to that recently reported for mouse melanoma cells (32). In contrast, DCT was distributed in the ER and Golgi but was not present in stage I melanosomes. TYRP1 was not detectable in any fraction (not shown), which is consistent with the results presented above. Interestingly, the coated vesicle fraction (positive for AP-1 and AP-3 as expected) contained TYR and MART-1 but not DCT or gp100. These results are quite distinct from the patterns found for TYR localization in pigmented MNT-1 cells, where TYR is abundant in stage II melanosomes but is almost completely lacking in stage I melanosomes (33). To demonstrate the relative purity of these enriched subcellular fractions, we used standard antibodies for organelle markers. As expected, the ER fraction contained the majority of Bip/GRP78 and calnexin, the Golgi fraction contained the majority of Vti1b, the endosome fraction contained the majority of EEA1, and the coated vesicle fraction contained the majority of AP-1.

FIG. 3. Distribution of melanosomal proteins after purification in SK-MEL-28 cells. Subcellular fractions purified from SK-MEL-28 melanoma cells were solubilized, and 10 µg of protein from each extract were separated on 8% or 14% SDS-polyacrylamide gels. Proteins were transferred to membranes and detected by antibodies (αPEP7h at 1:2000, αPEP13h at 1:1000, αPEP8h at 1:500, MART-1 at 1:500, Bip/GRP78 at 1:500, Vti1b at 1:500, EEA1 at 1:2000, syntaxin 8 at 1:500, LAMP-1 at 1:500, mitochondria at 1:500, AP-3 at 1:250, or AP-1 at 1:2500).

FIG. 4. Processing and stability of tyrosinase in SK-MEL-28 and MNT-1 cells. A, SK-MEL-28 and MNT-1 melanoma cells were solubilized, and 2 µg of protein from each extract were digested with Endo H or PNGase F. Proteins were separated on 8% SDS-polyacrylamide gels, transferred to membranes, and detected by αPEP7h (at 1:2000). B, subcellular fractions purified from SK-MEL-28 cells were solubilized, and 2 µg of protein from each extract were digested with Endo H or PNGase F. Proteins were separated by SDS-PAGE and detected by αPEP7h (at 1:2000). C, SK-MEL-28 and MNT-1 melanoma cells were pulse-chase radiolabeled with [35S]Met/Cys, and extracts of the labeled proteins at various chase times (shown in hours) were immunoprecipitated with αPEP7h (at 1:40). The immunocomplexes were separated on 8-16% SDS-polyacrylamide gels and visualized by fluorography. D, SK-MEL-28 and MNT-1 melanoma cells were incubated with 1 µg/ml CHX for various times (shown in hours), and cell extracts were separated by SDS-PAGE and detected by αPEP7h (at 1:2000). E, SK-MEL-28 and MNT-1 melanoma cells were incubated with MG132 (or ALLN, not shown) in the presence of 1 µg/ml CHX for various times (shown in hours), and cell extracts were separated by SDS-PAGE and detected by αPEP7h (at 1:2000).

Endo H Sensitivity of TYR in SK-MEL-28 Cells and MNT-1 Cells-To further characterize the processing of TYR in amelanotic SK-MEL-28 melanoma cells and pigmented MNT-1 cells, we assessed its sensitivity to Endo H, an enzyme that removes high mannose-type carbohydrates from N-linked glycoproteins. This conversion occurs in the medial Golgi region, and when proteins are correctly processed through the ER and Golgi, they become resistant to Endo H yet remain sensitive to PNGase F. In MNT-1 cells, the majority of TYR was completely resistant to Endo H, showing that TYR in those pigmented melanoma cells is correctly transported to the medial Golgi region or TGN and is correctly glycosylated (Fig. 4A). In contrast, in SK-MEL-28 cells, the majority of TYR was sensitive to Endo H, although a very small amount of TYR was Endo H-resistant.
These results are consistent with an earlier study (39) and show that TYR in amelanotic SK-MEL-28 melanoma cells is not glycosylated correctly and is retained in the ER, also confirming the immunohistochemical results described above. We then examined the sensitivity of TYR to Endo H and PNGase F in the ER, Golgi, endosome, and stage I melanosome fractions purified from SK-MEL-28 cells. As shown in Fig. 4B, TYR in the ER of SK-MEL-28 cells was sensitive to Endo H, whereas TYR that reached the Golgi, endosomes, and stage I melanosomes was resistant to Endo H. These results confirm that although the majority of TYR (cf. Fig. 4A) is not glycosylated correctly in SK-MEL-28 cells, the minor amount of TYR that is processed through the Golgi, endosomes, and early melanosomes is correctly glycosylated.

The results of pulse-chase metabolic labeling of TYR in SK-MEL-28 cells and MNT-1 cells are shown in Fig. 4C. The immature form of TYR was detected as a 60-kDa band at 0 h chase in MNT-1 cells and was quickly converted from that immature form to the mature form (75 kDa) within 1.5-3 h. Most of the radiolabeled TYR in MNT-1 cells was detectable even after a 24-h chase. In contrast, SK-MEL-28 cells were much less efficient in processing TYR, and the TYR was degraded quickly (very little of the TYR synthesized was glycosylated to the 75-kDa mature form, and most of the TYR had been degraded within 3 h of chase). These results suggest that the degradation of TYR is markedly accelerated in amelanotic melanoma cells compared with pigmented melanoma cells. We also used another independent approach to investigate the stability of TYR in SK-MEL-28 cells and MNT-1 cells, using CHX to inhibit protein synthesis and analyzing TYR levels by Western blotting. As shown in Fig. 4D, TYR levels in SK-MEL-28 cells disappeared quickly in the presence of CHX compared with the stable nature of TYR in MNT-1 cells, confirming the metabolic labeling and immunoprecipitation results described above.

FIG. 6. Subcellular distribution of tyrosinase after neutralization of intracellular pH. SK-MEL-28 melanoma cells were cultured in the presence of Mon, Baf, or CCM for 3 h. Cells were fixed and stained with antibodies (αPEP7h at 1:20, Bip/GRP78 at 1:10, EEA1 at 1:40, or HMB45 at 1:10). The polyclonal antibodies were reacted with goat anti-rabbit IgG labeled with Texas Red (1:100), and the monoclonal antibodies were reacted with horse anti-mouse IgG labeled with fluorescein (1:100) followed by nuclear counterstaining with DAPI. Reactivity was classified into three categories according to whether they showed green, red, or yellow fluorescence.

Effect of Proteasome Inhibitors on TYR in SK-MEL-28 Cells and MNT-1 Cells-To investigate the role of proteasomes in the turnover of TYR in SK-MEL-28 cells and MNT-1 cells, cells were incubated for varying times with proteasome inhibitors such as MG132 or ALLN in the presence of CHX, and the extracted proteins were analyzed by Western blotting. TYR levels in MNT-1 cells were quite stable even after 6 h and were not changed in the presence or absence of MG132 (Fig. 4E). In contrast, TYR levels in SK-MEL-28 cells were rapidly reduced in the presence of CHX, even within 3 h, but when the cells were also treated with MG132 (or ALLN, not shown), that degradation was dramatically abrogated. TYR levels increased in a dose- and time-dependent manner following treatment of SK-MEL-28 cells with MG132 or ALLN (data not shown).
These results demonstrate that TYR in amelanotic melanoma cells is highly sensitive to proteasome inhibitors and is therefore actively degraded by proteasomes. Effect on TYR of Neutralizing the pH of Acidic Intracellular Organelles-To clarify the effects of organellar pH on TYR processing and trafficking in SK-MEL-28 cells and MNT-1 cells, cells were incubated with the protonophore Mon or with proton pump inhibitors such as Baf or CCM, and the extracted proteins were analyzed by Western blotting. TYR levels in MNT-1 cells were not altered significantly in the presence or absence of Mon or Baf (Fig. 5A). In contrast, TYR levels in SK-MEL-28 cells quickly decreased following treatment with CHX, but in the presence of Mon or Baf (or CCM, not shown) TYR levels were partially stabilized. Furthermore, in SK-MEL-28 cells, TYR accumulated in a dose-and time-dependent manner following treatment with Mon, Baf, or CCM (not shown). These results demonstrate that TYR in amelanotic SK-MEL-28 melanoma cells is highly sensitive to agents that raise the pH of intracellular organelles. We next used metabolic labeling to investigate the stability of TYR in SK-MEL-28 cells in the presence or absence of Mon or Baf. Cells were pulse-chase radiolabeled with [ 35 S]Met/Cys, and extracts of the labeled cultures were then immunoprecipitated with an antibody to TYR (␣PEP7h). As shown in Fig. 5B, the immature 60-kDa form of TYR was detected at 0 h chase following treatment with Mon or Baf, and the mature glycosylated 75-kDa form of TYR appeared from 1.5 h to 6 h. The stability of TYR was also prolonged after treatment with Mon or Baf, suggesting that the degradation of TYR in amelanotic SK-MEL-28 melanoma cells is abrogated by neutralizing the pH of acidic intracellular organelles. To more closely clarify the processing of TYR in SK-MEL-28 cells, we assessed its sensitivity to Endo H in response to changes in intracellular pH. Before treatment with Mon, Baf, or CCM, the majority of TYR was sensitive to Endo H (as shown above in Fig. 4A), whereas after varying times of treatment with Mon, Baf, or CCM, much of the TYR became Endo H-resistant (Fig. 5C), which demonstrates that neutralizing the intracellular pH stimulates the processing of TYR from the ER to the Golgi. We used Western blotting to examine the distribution of TYR during the purification of early melanosomes and endosomes. Before treatment with Mon or CCM, TYR was detected only in fractions containing stage I melanosomes and in the endosome fraction (as shown in Fig. 3), whereas after treatment with Mon or CCM, TYR distribution to stage II melanosomes (1.4 -1.8 M) was dramatically increased (Fig. 5D, compare with Fig. 3). Further, TYR and EEA1 content in the endosome fraction was reduced following incubation with CCM, but following treatment with Mon, the level of TYR was increased significantly in the endosome fraction. These results suggest that treatment with the protonophore Mon directs TYR to be correctly transported to stage II melanosomes via the endosome network, but that treatment with the proton pump inhibitor CCM allows TYR to be sorted to stage II melanosomes from the TGN without trafficking through the endosomal system. Subcellular Distribution of TYR after Neutralization of Intracellular pH-To investigate the subcellular localization of TYR in amelanotic SK-MEL-28 melanoma cells after neutralization of organellar pH, we used immunohistochemical staining to compare its distribution in the ER, early endosomes, and stage II melanosomes (Fig. 6). 
TYR staining was detected by red fluorescence, whereas the others were detected by green fluorescence; in the merged images, yellow indicates the colocalization of the two signals. Before treatment with Mon, Baf, or CCM, the majority of TYR colocalized with the ER in the perinuclear area, whereas after treatment with any of those three compounds, the yellow fluorescence in the ER was reduced and the green fluorescence was increased. Therefore, TYR is not retained in the ER in the presence of Mon, Baf, or CCM and is more efficiently sorted to the Golgi. In untreated SK-MEL-28 cells, some TYR colocalized with early endosomes, whereas after treatment with Baf or CCM, the yellow fluorescence virtually disappeared, confirming the Western blotting results described above and showing that TYR is trafficked beyond the early endosomes after neutralization of intracellular pH. Similarly, prior to treatment with Mon, Baf, or CCM, a small amount of TYR colocalized with stage II melanosomes (HMB45) in the perinuclear area, whereas after treatment with any of those agents, the amount of TYR in stage II melanosomes increased dramatically. These results suggest that the processing of TYR is rescued in amelanotic melanoma cells treated with protonophore or proton pump inhibitors, showing that the processing of TYR can be corrected by adjusting the intracellular organellar pH. DISCUSSION More than 100 distinct genes play direct or indirect roles in regulating mammalian pigmentation (69). Many of those genes encode proteins that are localized in melanosomes, specialized pigment organelles produced only in melanocytes. Those gene products modulate the type and amount of melanin produced and/or its processing and distribution of melanosomes. The known melanosomal proteins are involved in melanogenesis as catalytic and/or structural components and include TYR, TYRP1, DCT, MART-1, and gp100 (60,70). Although the processing and sorting of those proteins are not completely understood, they are known to be synthesized and translocated into the ER and eventually into the Golgi where their post-translational processing and glycosylation take place (36,71). Following that processing, they seem to take distinct routes to traffic to melanosomes, with the majority of melanosomal proteins predominantly going to stage II melanosomes, although that distribution is disrupted in amelanotic melanoma cells. Despite their high structural similarity and conserved primary sequences, the three tyrosinase-related proteins use distinct routes to move from the TGN to early melanosomes: TYR uses the AP-3 system, TYRP1 uses the AP-1 system, and DCT uses yet another unknown sorting vesicle system (32,41,72). The sorting system for MART-1 is not yet known, but its distribution patterns are highly similar to TYR. gp100 is perhaps the most uniquely processed of the melanosomal proteins; after processing through the ER and Golgi, it is normally delivered to stage I melanosomes without going through the endosomal system (33,66). Following the maturation of stage I to stage II melanosomes, which is coincident with the cleavage and refolding of gp100, the enzymatic components are delivered, and the synthesis of melanin usually ensues (2). To investigate the subcellular distribution of melanosomal proteins in amelanotic melanoma cells, which produce melanosomal proteins and thus should be pigmented, we used immunofluorescence staining and Western blotting in conjunction with the purification of subcellular organelles. 
In pigmented MNT-1 cells, the majority of TYR was processed correctly through the ER, Golgi, and endosomes and was delivered to stage II melanosomes (33). In contrast, the majority of TYR in amelanotic SK-MEL-28 melanoma cells was retained in the ER, although a small amount of TYR was glycosylated correctly and was found in the Golgi, endosomes, and early melanosome fraction (but primarily in stage I melanosomes). The effects of this aberrant processing on disruption of melanosomal maturation and tyrosinase function are shown in this study, using confocal immunohistochemistry, electron microscopy, subcellular fractionation, and Western blotting, as well as metabolic labeling and immunoprecipitation. We further analyzed the processing of TYR in amelanotic melanoma cells by Western blotting after Endo H and PNGase F digestion, and following inhibition of proteasome activity. In contrast to the distribution and stability patterns of TYR in pigmented MNT-1 cells, the majority of TYR in amelanotic SK-MEL-28 melanoma cells was not correctly glycosylated, was trapped in the ER, and was quickly degraded by proteasomes. To clarify the stability and degradation of TYR in amelanotic melanoma cells, we also used metabolic labeling and immunoprecipitation. TYR had an extremely short half-life in amelanotic melanoma cells, and that stability could be markedly enhanced by treatment with proteasome inhibitors. The sum of these results suggests that the incorrect trafficking of TYR plays an important role in the disrupted pigmentation in amelanotic melanoma cells. In light of the role of TYRP1 in complexing with and stabilizing TYR in the ER (67), and of the fact that disruption of TYRP1 function results in abnormal proteasomal degradation of wild-type TYR in OCA3 cells (27), the lack of TYRP1 expression in SK-MEL-28 cells may also be an important factor resulting in the hypopigmentation of those cells similar to what occurs in OCA3.

Melanosomes are lysosome-related organelles, but their exact biogenesis is still poorly defined (29,73). For instance, melanosomes and lysosomes contain many of the same structural proteins (e.g. LAMP, acidic hydrolases, vacuolar-type proton pumps (47,74,75)), and both are affected in several genetic disorders, such as the Chediak-Higashi and Hermansky-Pudlak syndromes (76,77). The catalytic domains of TYR and other enzymes involved in melanogenesis are located within the lumen of the melanosome, and it follows that their activity is likely to be dependent upon the intramelanosomal environment, including the pH (19). However, there is some controversy regarding the optimal pH for TYR activity. Melanosomes, like other lysosomal organelles, can be quite acidic, and it has been assumed that this low melanosomal pH facilitates melanogenesis (78). It has been suggested that human TYR activity is activated at acidic pH and that the enzyme is inactive at neutral pH (46). However, several independent groups have maintained that mammalian TYR has an optimal enzymatic activity that is near neutral pH and that its activity is gradually lost with decreasing pH (79-81). However, the pH at which melanin is produced and the pH at which the vesicular trafficking system works may function quite independently. Baf and CCM, which at low concentrations specifically inhibit vATPases (51), have frequently been used to neutralize acidic compartments within cells. Mon, a proton ionophore, exchanges H+ for Na+ and has also been commonly used to neutralize acidic compartments.
To clarify the effects of neutralizing the pH of acidic organelles on TYR processing and trafficking, we used immunofluorescence staining and Western blotting to examine the processing of TYR in amelanotic melanoma cells. TYR levels in amelanotic melanoma cells were dramatically increased following treatment with either the protonophore or the proton pump inhibitors. Surprisingly, the retention of TYR in the ER in amelanotic melanoma cells could be corrected by those agents (i.e. by increasing intracellular pH), which resulted in the enhanced transport of TYR from the ER to the Golgi. Thus, the abnormal acidification of intracellular organelles also plays an important role in the pathogenesis of hypopigmentation in amelanotic melanoma cells. Although TYR was dramatically redistributed to stage II melanosomes in SK-MEL-28 cells following the neutralization of intracellular pH, TYR levels in endosomes were reduced in the presence of the proton pump inhibitors but were increased in the presence of the protonophore. This suggests that the processing and trafficking of TYR in amelanotic melanocytes are related to the dysfunction of vATPases. vATPase was recently identified as a melanosomal protein (66), and the current study shows that it may play an important role in regulating pigmentation. Transport of TYR from the ER and its subsequent processing depend on the neutralization of pH in the Golgi, although the translocation of TYR to endosomes then requires the activation of vATPase. Neutralization of the pH within early melanosomes results in the accumulation of TYR in those organelles. Therefore, the activity of vATPases within intracellular organelles plays an important role in the sorting and function of TYR and in modulating pigmentation; dysfunction of that pH regulatory system may be responsible for the depigmented phenotype and pathogenesis of amelanotic melanocytes.
Atomic-detailed milestones along the folding trajectory of protein G

The high computational cost of carrying out molecular dynamics simulations of even small-size proteins is a major obstacle in the study, at atomic detail and in explicit solvent, of the physical mechanism which is at the basis of protein folding. Making use of a biasing algorithm based on the principle of the ratchet and pawl, we have been able to calculate eight folding trajectories (to an RMSD between 1.2Å and 2.5Å) of the B1 domain of protein G in explicit solvent without the need for high-performance computing. The simulations show that in the denatured state there is a complex network of cause-effect relationships among contacts, which results in a rather hierarchical folding mechanism. The network displays a few local and nonlocal native contacts which are the cause of most of the others, in agreement with the NOE signals obtained in mildly-denatured conditions. Nonnative contacts also play an active role in the folding kinetics. The set of conformations corresponding to the transition state displays phi-values with a correlation coefficient of 0.69 with the experimental ones. These conformations are structurally quite homogeneous and topologically native-like, although some of the side chains and most of the hydrogen bonds are not in place.

Molecular-dynamics simulations in explicit solvent can be a very useful complement to experimental studies of protein folding, in keeping with the fact that they provide insight into the time evolution of the process with atomic detail, under fully controlled conditions [1]. On the other hand, they are computationally very demanding, even in the case of small proteins. Among the most massive folding simulations ever realized is a 10 µs molecular dynamics (MD) folding trajectory of the 38-residue WW domain, lasting for about 3 months on 329 cores and reaching conformations which are ∼50% similar to the native conformation in terms of number of contacts [2]. To be statistically sound, Pande and coworkers carried out 410 simulations of the folding of the 35-residue Villin Headpiece, the average duration being 863 ns. The calculation lasted for 54 machine years on a distributed computer, and eighteen of these trajectories reached the native conformation [3]. The intrinsic and unavoidable computational problem in carrying out folding simulations with realistic protein models is the wide range of time scales involved: the time step of the simulation must be tuned to femtoseconds, corresponding to the time scale of atomic vibrations, while the overall folding process spans intervals of time ranging from milliseconds to seconds. In an attempt to overcome this difficulty, a number of investigations focused on the study of unfolding simulations at high temperature [4,5]. A decade ago Marchi and Ballone developed an adiabatic bias molecular dynamics (ABMD) method [6] to generate MD trajectories between pairs of points in the conformational space of complex systems. It was applied for the first time to protein unfolding by Paci and Karplus [7]. The method is based on the introduction of a biasing potential which is zero when the system is moving towards the desired arrival point and which damps the fluctuations when the system attempts to move in the opposite direction. As in the case of a ratchet-and-pawl mechanism propelled by the thermal motion of the solvent molecules, the biasing potential does not exert work on the system. Consequently, the resulting trajectories are physically correct.
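Before turning to the limitations of the method, the ratchet-and-pawl bias just described can be made concrete with a short sketch. The following Python fragment is only an illustration of the principle, not the implementation used in this work: the half-harmonic functional form and the force constant k are assumptions, and rho stands for whatever reaction coordinate the bias acts on.

```python
def ratchet_bias(rho, rho_best, k=1.0e-3):
    """Ratchet-and-pawl (ABMD-style) bias acting on a reaction coordinate rho.

    rho      -- current value of the reaction coordinate (smaller = closer to target)
    rho_best -- smallest value of rho reached so far in the trajectory
    k        -- force constant of the restraint (assumed value and units)

    The bias is zero whenever the system spontaneously moves towards the target
    (rho <= rho_best) and grows quadratically when it tries to move back, so on
    average it performs no work on the system; it only locks in spontaneous,
    thermally driven progress.  Returns the bias energy and the updated minimum.
    """
    if rho <= rho_best:
        return 0.0, rho          # progress towards the target: no bias, update minimum
    excess = rho - rho_best      # attempted backward fluctuation
    return 0.5 * k * excess ** 2, rho_best
```

In an actual MD run the corresponding force, proportional to -(rho - rho_best) and projected onto the atomic coordinates, would be added to the physical forces only in the steps in which rho exceeds its running minimum.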
On the other hand, the algorithm cannot provide the statistical weight of the visited states nor the time scales associated with the trajectory. In the present work we report on results of the application of the ABMD algorithm to the study, with the help of the Amber force field [8] in explicit solvent and without resorting to high-performance computing, of the folding of the 56-residue B1 domain of protein G starting from 16 thermally-unfolded conformations. From the eight trajectories which reached an RMSD lower than 2.5Å we have extracted the conditional probabilities of contact formation between the amino acids of the protein. From them it is possible to learn whether there are obligatory steps along the folding pathway of the protein.

I. RESULTS

Sixteen ABMD simulations were carried out starting from uncorrelated high-temperature protein conformations in water, driven by the distance d_CM of the contact map of a given protein conformation from the native contact map (cf. Eqs. 2,3). Eight of these simulations fold within an RMSD of 2.5Å from the crystallographic conformation within the simulated time of 50 ns (see Fig. 1). Two of the folding trajectories reach conformations within an RMSD of 1.2Å, and another two within 1.4Å. The other four folding trajectories display imperfections due to non-native alignment of some side chains, a result which is fostered by the biasing algorithm and which is in any case compatible with the predicted glassy dynamics of side chains in the native state [11]. These imperfections cause the RMSD to reach values up to 2.5Å, even if the overall topology is correct. Concerning the eight non-folding trajectories, three of them display the two hairpins docked on the wrong side with respect to the helix; two trajectories display the hairpins undocked but with an orientation of the side chains which is symmetrical with respect to the native conformation. These misfolded conformations are reached because the definition of the reaction coordinate d_CM employed to drive the simulation involves mainly the Cα atoms (which are 56) and only 12 atoms belonging to the hydrophobic side chains. Consequently, it is not always effective in discouraging the formation of conformations with a wrong symmetry of the side chains. The structure of the protein in the remaining three trajectories does not display misfolded features and seems only to need longer simulation times to reach the native state.

A. The kinetics of contact formation is rather hierarchic

The only information which ABMD simulations can provide concerns the sequence of events of the folding process (which follows what). The order of formation of the 110 native contacts of the protein is calculated for each trajectory, defining the quantity t(i, k) as the (nominal) time at which the ith contact is stably formed in the kth simulation. From it one can define the probability M_ij that the ith contact is formed before the jth one,

M_ij = (1/K) Σ_k θ( t(j, k) − t(i, k) ),

where θ is the Heaviside step function and K is the number of folding trajectories. For each contact j one can then consider the average A_j of M_ij over the other contacts i, that is, the mean probability that contact j is formed after the others; small values of A_j single out contacts that form early. The plot of the A_j, ordered from the smallest to the largest values (cf. Fig. S1), is informative: if the order of contact formation were completely random, the A_j values would lie on a horizontal line. One can thus define a parameter hi to measure the degree of "hierarchicity" of the folding process, proportional to the angular coefficient of the ordered A_j values, ascribing to it the value 1 in the case of a deterministic hierarchy.
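As a concrete illustration of how these quantities could be computed from the matrix of formation times t(i, k), a short Python sketch follows. The definition of A_j as a column average of M and the normalization of the hierarchicity parameter are plausible reconstructions consistent with the description above, not the authors' exact expressions; the toy data at the end are invented.

```python
import numpy as np

def contact_order_statistics(t):
    """t[i, k]: nominal time at which contact i is stably formed in trajectory k.

    Returns (M, A, hi): M[i, j] is the fraction of trajectories in which contact i
    forms before contact j, A[j] is the mean probability that contact j forms after
    the other contacts, and hi is a simple hierarchicity score, taken here as the
    slope of the sorted A values rescaled so that a strictly deterministic order
    of formation gives 1 (an assumed normalization).
    """
    n_contacts, n_traj = t.shape
    # M[i, j] = (1/K) * sum_k theta( t(j,k) - t(i,k) )
    M = (t[:, None, :] < t[None, :, :]).mean(axis=2)
    np.fill_diagonal(M, 0.5)                      # convention for i == j
    A = M.mean(axis=0)                            # prob. that j forms after the others
    A_sorted = np.sort(A)
    x = np.arange(n_contacts)
    slope = np.polyfit(x, A_sorted, 1)[0]         # angular coefficient of ordered A_j
    slope_deterministic = 1.0 / (n_contacts - 1)  # sorted A_j spanning [0, 1]
    return M, A, slope / slope_deterministic

# toy example: 110 contacts, 8 trajectories with a noisy but nearly fixed order
rng = np.random.default_rng(0)
times = np.sort(rng.random((110, 8)), axis=0) + 0.05 * rng.random((110, 8))
M, A, hi = contact_order_statistics(times)
print(f"hi = {hi:.2f}")
```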
One should notice that a hierarchical folding kinetics is not incompatible with a sharp thermodynamic transition from the denatured to the native state at equilibrium, something which can be produced, for example, by a large free-energy barrier at one step of the hierarchy. The value of hi associated with the simulation is 0.65, indicating a fairly hierarchical process. One should also notice that a large value of hi is not in disagreement with the sharpness of the thermodynamic transition between the denatured and the native state, as this transition usually involves a small subset of native contacts, in keeping with the fact that a number of these contacts are already formed in the denatured state [12]. As a control, a random matrix satisfying the requirement M_ij + M_ji = 1 displays hi = 0.08. A further control case is that of homopolymeric chains whose rate of contact formation depends only on the distance along the chain of the two residues involved (see Supplementary Materials). This model provides hi = 0.27, reflecting the hierarchy arising from the straightforward fact that residues which are close along the chain build contacts faster than those which are far apart along it. The curve associated with the folding simulations displays six contacts with a particularly low value of A_j. These contacts are between the pairs of residues 22-25, 29-32, 34-38 and 35-38 (within the helix), 46-49 and 43-54 (within the second hairpin) and 7-54 (between the N- and the C-terminal strands of the protein). Because such an early non-local contact is entropically quite unfavorable, its presence underscores its essential role in the whole folding process, strongly restricting the possible conformations of the chain. It is not stabilized by hydrogen bonds or salt bridges, but takes place between two hydrophobic residues (L7 and V54), L7 being close to two other hydrophobic residues (L5 and I6) and belonging to the eventual hydrophobic core of protein G. The contacts displaying the largest probability of being formed after all the others have done so belong mostly to the interface between the two hairpins and between each of them and the helix. Interestingly, three of these late-forming contacts are between residues stabilizing the first hairpin, namely 9-13, 7-16 and 4-15.

B. The folding hierarchy involves three levels of contact formation

To inspect the hierarchy of contact formation, we single out the contacts which are, with probability one, the cause or the consequence of the formation of some other contact (marked by squares in the associated figure). The contacts not marked by squares are those displaying fractional probabilities to be the cause or the consequence of the formation of some other contact. These contacts are mostly concentrated within the first hairpin, and to a lesser extent between the two hairpins. Summing up, the simulation indicates that the folding mechanism of protein G involves first the spontaneous formation of native contacts in the second hairpin, in the helix, and of a few non-local contacts. The stabilization of the first hairpin and the formation of most tertiary contacts come only as a consequence of these events.

C. A small number of non-native contacts are formed with probability one

Operatively, a non-native contact is assumed to be established when two amino acids lying more than three residues apart along the chain, and farther away than 5Å in the native conformation, come closer than 4Å during the folding process. There are 9 non-native contacts which are formed for some time in all the folding trajectories; these contacts display well-defined behaviours.
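Before examining these contacts one by one, the operational definition just given can be turned directly into code. The following Python sketch works on pre-computed minimum inter-residue distance matrices (how those distances are extracted from the trajectory is left out); the thresholds follow the text, everything else is illustrative.

```python
import numpy as np

def nonnative_contacts(d_traj, d_native, seq_sep=3, d_form=4.0, d_far=5.0):
    """Identify non-native contacts along a trajectory.

    d_traj   : array (n_frames, n_res, n_res) of minimum inter-residue distances (Å)
    d_native : array (n_res, n_res) of the same distances averaged on the native ensemble
    A pair (i, j) counts as a non-native contact if the residues are more than
    `seq_sep` apart along the chain, farther than `d_far` in the native state,
    and come closer than `d_form` in at least one frame.
    """
    n_res = d_native.shape[0]
    i_idx, j_idx = np.triu_indices(n_res, k=seq_sep + 1)     # |i - j| > seq_sep
    far_in_native = d_native[i_idx, j_idx] > d_far
    formed_sometime = (d_traj[:, i_idx, j_idx] < d_form).any(axis=0)
    mask = far_in_native & formed_sometime
    return list(zip(i_idx[mask].tolist(), j_idx[mask].tolist()))

# toy usage with random distances (stand-ins for real trajectory data)
rng = np.random.default_rng(1)
d_nat = rng.uniform(3.0, 20.0, size=(56, 56))
d_trj = d_nat[None, :, :] + rng.normal(0.0, 2.0, size=(100, 56, 56))
print(len(nonnative_contacts(d_trj, d_nat)), "candidate non-native contacts")
```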
The contacts E19-A26 and V21-A26 stabilize a non-native turn (hydrophobic staple motif [13,14,15]) in the region between the first β-hairpin and the α-helix; this turn is disrupted when the first N-terminal turn of the helix is formed. Residues K31 and D40 form a non-native salt-bridge which stabilizes the C-terminal segment of the helix. When the helix forms its tertiary interactions with the hairpin, the salt bridge is broken and the side chain of K31 gets reoriented to interact with the hairpin. Contacts Y33-D40 and N35-G41 are always formed in the initial collapse of the chain. Contacts V39-T55 and D40-T55 have a bizarre behaviour. They are formed after the Cterminal part of the helix, when no other native contact between the helix and the second hairpin are formed. Their disruption is followed by the formation of the native contacts in the immediate vicinity (e.g. 39-56, 40-56), which substitute such non-native contacts in the docking between the helix and the second hairpin. Consequently, they seem to act as baits to entice together the two secondary structures of the protein to come togheter. lies. On the other hand, since protein folding is associated with the crossing of a free-energy barrier, one expects the TS to be associated with a marked jump in the reaction coordinate of the system. While the correct reaction coordinate is unknown, d CM has proven effective in ratcheting the protein to its native conformation, thus showing to be correlated with it. Consequently, the assumption is made that the transition state is located close to the last jump in d CM before the native state (cf. Contacts which are only cause of other contacts are mainly local, located within the helix and the second hairpin, while only three of them are non-local (i.e., 5-30 between the first hairpin and the helix, 39-54 between the helix and the second-hairpin and 7-54 between the two hairpins). It is not unexpected that the residues building out these early-contacts are concentrated in the helix and in the second hairpin (cf. Figs. 2A and B), which are known by ϕ-value analysis to be structured in the transition state [18]. Interestingly the non-local contact between residues 7 and 54, linking the two terminals of the protein, agrees with the interpretation of the effects of mutations in the first strand of protein G, according to which hydrophobic residues in the first strand do make some interactions with the relatively ordered second beta-hairpin [18]. Contextualizing the above results within the framework of a two-state scenario of the folding of protein G, one can argue that the formation of the non-local contacts (5-30, 39-45 and 7-54) take place in the early stages of refolding (see also ref. [4,14]). Within this context one can mention that, NMR spectra of the pH-denatured state of protein G highlight elements of native structure associated with the second hairpin, with the helix and with the turn of the first hairpin [19]. This interpretation is consistent with results of simplified models showing the formation of local elementary structures (LES) [9,10,20], or foldons [21], in the denatured state under native conditions, and their docking into a folding nucleus [22]. Within this picture, the docking of the LES -which can be viewed as hidden incipient secondary structures but most likely lacking of a number of hydrogen boding as well as of side chains contacts as compared to the native situation-seems also helped by the early formation of few non-local native contacts and few non-native contacts. 
The transition state is compatible with the docking of the LES, corresponding to a ensemble of conformations where the protein display its native topology. The end of the folding process involves the detailed packing of the buried side-chains [11,23] and the stabilization through hydrogen bonds of secondary structure elements [23]. The non-native interactions seem to play two roles. First, they stabilize the helix while the system attends the formation of the native tertiary interactions. Moreover, they help the folding kinetics, attracting residues which are distant along the chain for then leaving the place to native nearby interactions. This fact suggests that evolution could have selected, at the price of protein stability, sequences not only optimizing native interactions, but also specific non-native ones able to enhance folding kinetics. The B1 domain of Protein G folds in a time which is as fast as 5 ms [24], making the experimental characterization of the events which take place along the folding pathway, quite difficult. Aside from validating the model, the possibility of characterizing structurally the transition-state ensemble from ab initio calculations offers an unprecedented opportunity to complement experimental data. Therefore, we compare our results with, on one hand, the characterization of the denatured and of the TS state, and on the other hand, with computational results obtained making use of simplified models. Also with those resulting from studies of selected fragments of protein G. The NMR analysis of protein G under mildly denaturant conditions carried out by Sari and coworkers [19] indicate that short-range contacts between H α and H N in the turn of the first hairpin (contact [8][9][10][11][12], in the helix (22-25, 22-26, 29-32, 34-38) and the long-range contact 41-55 are formed in the denatured state. These contacts match remarkably well with the "early contacts" marked in red in Fig. 3. Moreover, the J-couplings associated with residues 44, 49, 51 and 52 indicate that the second hairpin populates the beta region of the Ramachandran plot. Consistently with our results, this suggests that parts of the helix are formed, that the second hairpin displays a native-like topology, constrained by the contacts 39-54 and 41-55. Also, the result of the simulations is not inconsistent with the early formation of the turn in the first hairpin. The topology of the transition state is more similar to the native state than what ex-periments usually suggest. This was already noted in ref. [23,25,26], but on the basis of data-driven calculations. The difference between the transition and the native state seems to be not in the amount of native contacts, meant mainly as Van der Waals interactions, but in their degree of optimization and in the formation of orientation-dependent H-bonds. For example, the first hairpin is quite native-like in the transition state, but it does not qualify as a beta-hairpin in terms of detailed dihedral angles and H-bonds pattern. Similarly, it is likely that the native structures present in the denatured state are not textbook secondary structures, and consequently escape characterization with traditional tools. The subtle structural difference between the transition state and the native state is likely to be the reason why simplified protein models with reduced degrees of freedom overstimate the ϕ-values [20]. Also the reason why ϕ-value analysis provide a very refined microscope at the all-atom level of the transition state. A. 
Model system

The structure of the B1 IgG-binding domain of streptococcal protein G used in this work has PDB code 1pgb [27]. All the simulations are performed with GROMACS [28]. The interactions are described by the Amber 2003 all-atom force field ported to Gromacs [8,29]. The system is enclosed in a dodecahedral box of 261 nm³ with periodic boundary conditions and solvated with 8325 SPC water molecules. The system charge was neutralized by adding 4 Na+ ions. Van der Waals interactions are cut off at 1.4 nm, and the long-range electrostatic interactions are calculated by the particle mesh Ewald algorithm [30] with a mesh spacing of 0.125 nm. The system evolves in the canonical ensemble, coupled with a Nosé-Hoover thermal bath [31,32]. The native state is first thermalized at 300 K for 1 ns. A 2 ns dynamics at 300 K and constant volume is used to generate a reference native-state ensemble.

B. Initial conformations

To generate a set of unfolded conformations we ran a 26 ns long simulation at 600 K, took 16 conformations between the 10th and the 26th ns, and thermalized them at 300 K for 2 ns each. The RMSD calculated on the Cα atoms between the 16 structures after the thermalization ranges between 0.8 and 1.4 nm, which guarantees that the dynamics will be uncorrelated.

C. Adiabatically biased trajectories

The trajectories are generated by the biased molecular dynamics algorithm proposed by Marchi and Ballone [6] and applied to proteins by Paci and Karplus [7]. The driving coordinate used in the present study is the distance d_CM of the contact map of a given protein conformation from the native contact map, introduced by Bonomi et al. in ref. [33]. This is defined as

d_CM = Σ_{i,j} ( C_ij − C̄_ij )²,

where C_ij is the i,j element of an N×N matrix defined as

C_ij = [ 1 − (r_ij/r_0)^p ] / [ 1 − (r_ij/r_0)^q ],

r_ij is the distance between atoms i and j, and C̄ is the same matrix evaluated on the native state. The parameters used in these simulations are p = 6, q = 10, r_0 = 0.75 nm and r_cut = 1.23 nm. The N atoms include all the α carbons and either the β or the γ carbons of the hydrophobic side chains. The first tests to choose a proper collective variable were done with a constant α of 40 kJ/mol, while for the production runs the constant is set to 3 kJ/mol. Each of the sixteen unfolded structures is evolved for 50 ns.

D. Contact analysis

A contact between two amino acids is defined if (1) there is an H-bond between the two amino acids, that is, a polar H and an O are closer than 2.5Å and their respective bonds are aligned within a maximum deviation of 30 degrees, or (2) the minimum distance between any atoms of their side chains is less than 4Å. Native contacts are defined if the above property holds for the average distances calculated on the native-state ensemble. Having also calculated the standard fluctuations of the atomic distances in the native state, a native contact is considered stably formed once it is formed and, from then on, its fluctuations do not exceed twice those found in the native state. A non-native contact between two amino acids is defined if (1) the minimum distance between any of their atoms is less than 4Å and (2) the mean minimum distance between any of their atoms is more than 5Å on the native-state ensemble.
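To make the driving coordinate concrete, a small Python sketch is given below. The squared-difference form of d_CM, the rational switching function, and the way the cutoff is applied are reconstructions consistent with the parameters quoted above (p = 6, q = 10, r_0 = 0.75 nm, r_cut = 1.23 nm), not an exact transcription of the original implementation.

```python
import numpy as np

def cmap_value(r, r0=0.75, p=6, q=10):
    """Smooth 'contact strength' for an interatomic distance r (in nm)."""
    x = r / r0
    if np.isclose(x, 1.0):
        return p / q            # removable singularity at r = r0
    return (1.0 - x**p) / (1.0 - x**q)

def d_cm(coords, coords_native, pairs, r_cut=1.23):
    """Distance between the instantaneous and the native contact maps.

    coords, coords_native : arrays (n_atoms, 3) of the selected atoms
                            (C-alphas plus C-beta/C-gamma of hydrophobic residues)
    pairs                 : list of (i, j) index pairs entering the contact map
    Pairs beyond r_cut are assigned zero contact strength (an assumption about
    how the cutoff is applied).
    """
    total = 0.0
    for i, j in pairs:
        r = np.linalg.norm(coords[i] - coords[j])
        r_nat = np.linalg.norm(coords_native[i] - coords_native[j])
        c = cmap_value(r) if r < r_cut else 0.0
        c_nat = cmap_value(r_nat) if r_nat < r_cut else 0.0
        total += (c - c_nat) ** 2
    return total
```

In a biased run, d_cm evaluated frame by frame would play the role of the collective variable ρ fed to the ratchet-and-pawl bias sketched earlier.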
2009-05-18T12:35:51.000Z
2009-05-18T00:00:00.000
{ "year": 2009, "sha1": "6fbf9dfdfbbbdd103a542fae7769645f82c23ae2", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "6fbf9dfdfbbbdd103a542fae7769645f82c23ae2", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Chemistry" ] }
119574472
pes2o/s2orc
v3-fos-license
On the asymptotics of integrals related to the generalized Cantor ladder

The Cantor ladder is naturally included into various families of self-similar functions. In the frame of these families we study the asymptotics of some parametric integrals.

Introduction

Let {I_k = [a_k, b_k]}, k = 1, ..., m, be subsegments of [0, 1] with non-intersecting interiors. Denote by S_k(t) = a_k + (b_k − a_k) t the affine contractions of [0, 1] onto I_k preserving the orientation. We also introduce a set of positive numbers {ρ_k}, k = 1, ..., m, such that ρ_1 + ... + ρ_m = 1. Define the operator S acting in the space L∞(0, 1) by the formula It is easy to check, see, e.g., [5], that S is a contracting map in L∞(0, 1). Thus, there exists a unique function C ∈ L∞(0, 1) such that S(C) = C. We call such a function C(t) the generalized Cantor ladder with m steps. It can be found as the uniform limit of the sequence S^k(f) with f(t) ≡ t. This allows us to assume that C(t) is continuous and monotone with C(0) = 0, C(1) = 1. Note that the derivative of C(t) in the sense of distributions is a measure μ which is self-similar in the sense of Hutchinson (see [4]). This means

μ(E) = Σ_{k=1}^{m} ρ_k μ( S_k^{-1}(E ∩ I_k) ).

More general self-similar functions are described in [5]. For a generalized Cantor ladder C(t) we study the asymptotic behavior, as λ → ∞, of the integral

E(λ) = ∫_0^1 e^{λC(t)} dt.

Remark 1. It is easy to see that the question of the asymptotics of E(λ) as λ → −∞ can be reduced to a similar problem as λ → +∞. Namely, let a ladder C(t) be generated by segments I_k = [a_k, b_k], k = 1, ..., m, and by numbers {ρ_k}, k = 1, ..., m. Consider the ladder C_1(t) generated by segments For these ladders we have an obvious relation Thus the question of the asymptotics of E_C(λ) as λ → −∞ can be reduced to that of the asymptotics of E_{C_1}(λ) as λ → +∞. In what follows we assume λ > 0.

Definition 1. We say that a generalized Cantor ladder is regular if For a_2 = b_1 such a ladder degenerates to C(t) ≡ t, and we have E(λ) = (e^λ − 1)/λ. The regular ladder for m = 2 was considered in the paper [3]. In particular, the first term of the asymptotic series for E(λ) was calculated. We also mention the paper [2] where the function E(λ) and some other integrals were expressed (in the case of the classical Cantor ladder) in terms of series of elementary functions.

The recurrent relation and the Main Lemma

Without loss of generality we can assume a_1 = 0, b_m = 1 (any other case can be reduced to this one by dilation). Denote by Δ_i, i = 1, ..., 2m − 1, the lengths of the parts of the segment [0, 1], i.e. Remark 2. The relation S(C) = C can be rewritten as follows: Lemma 1. For a ladder with m steps the following relation holds: Proof. and we arrive at (1). To analyse this relation we need the following statement. Remark 3. In a particular case this statement was proved in [3]. Proof. We introduce the notation dλ α e ηλ . Then the assumption 2 can be rewritten as follows:

The first term

We claim that, for any generalized Cantor ladder, the function E(λ) satisfies the assumptions of the Main Lemma. Then we can rewrite the relation (1) as follows: Applying the Main Lemma we obtain The function H(λ) is the sum of a series which converges uniformly on any compact set in the half-plane Re(λ) > 0. Therefore, analyticity of f(λ) implies analyticity of Φ(x) in the strip |Im(x)| < π/(2 ln(η)). In the general case it is difficult to say anything more since f(λ) is expressed in terms of E(λ). For example, in a degenerate case λ , and thus Φ(x) becomes a constant.
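The objects defined above can be explored numerically. The following Python sketch builds the classical Cantor ladder (m = 2, I_1 = [0, 1/3], I_2 = [2/3, 1], ρ_1 = ρ_2 = 1/2, i.e., the standard Cantor function) by iterating its self-similarity relation on a grid, and then evaluates E(λ) by quadrature; it is only a numerical illustration of the definitions, not part of the paper's argument.

```python
import numpy as np

def cantor_ladder(n_iter=18, n_grid=3**9 + 1):
    """Approximate the classical Cantor ladder C(t) on a uniform grid by
    iterating its self-similarity relation, starting from f(t) = t."""
    t = np.linspace(0.0, 1.0, n_grid)
    f = t.copy()
    for _ in range(n_iter):
        g = np.empty_like(f)
        left = t <= 1.0 / 3.0
        right = t >= 2.0 / 3.0
        mid = ~(left | right)
        # C(t) = C(3t)/2 on [0,1/3], 1/2 on the middle gap, 1/2 + C(3t-2)/2 on [2/3,1]
        g[left] = 0.5 * np.interp(3.0 * t[left], t, f)
        g[mid] = 0.5
        g[right] = 0.5 + 0.5 * np.interp(3.0 * t[right] - 2.0, t, f)
        f = g
    return t, f

def E(lam, t, C):
    """E(lambda) = integral over [0,1] of exp(lambda*C(t)) dt, trapezoidal rule."""
    return np.trapz(np.exp(lam * C), t)

t, C = cantor_ladder()
for lam in (1.0, 10.0, 100.0):
    print(lam, E(lam, t, C))
```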
In general case even the question whether Φ(x) is constant remains open. However, for regular ladders the dependence of f (λ) on E(λ) can be eliminated. Then Φ(x) can be written in a more explicit form. This allows us to obtain additional information. Repeating the proof of the Main Lemma we arrive at Thus, we have the explicite formula for Φ(x). Now we can study the Fourier series To proceed we need the Riemann formula, see, e.g., [1]: dt. Theorem 1. For a regular ladder, the Fourier coefficients of the function Φ(x) can be evaluated as follows where α n = −α − 2πin ln(m) . Proof. We have More terms in the simplest case Let us continue to study the asymptotic expansion. We begin from the simple example. Here C k , D k are numbers satisfying the following recurrent relations: Proof. The relation (1) in this case can be rewritten as follows: Applying the Main Lemma we can write the result as follows: We substitute this into (7) and obtain This implies Denote by E 2 (λ) the right-hand side of the last equality. Then . This gives us the second term of the asymptotics . We can substitute it into the relation (7) and obtain the expression for E 2 (λ) similar to (8): Repeating this algorithm we obtain formulas (6) and (5) as asymptotic expansion. Next, from (6) we conclude that coefficients C k , D k grow not faster then an exponent of their number: This gives us the uniform convergence of the series in the right-hand side of (5) if λ is sufficiently large. It remains to show that the right-hand side of (5) exhausts E(λ). To do this, consider the remainder Note that the sequence E k (λ) converges to к E 1 (λ) := e −λ λ α H(λ) E(λ) in the space L ∞ (Λ, +∞) for sufficiently large Λ. Further, tends to zero in L ∞ (Λ, +∞). Therefore, E 1 (λ) satisfies the homogeneous equation We know that for any ς 1 the estimate E 1 (λ) = O(e −ςλ ) holds. Whence for some c > 0, ς 1 we have |E 1 (λ)| c e −ςλ for λ > Λ. Remark 5. For ∆ 1 = ∆ 3 , i.e. for a regular ladder, (6) implies D k = 0 for all k 1. This fact is true in general case, see Theorem 4 below. More terms in the case ρ m = min{ρ i } In this subsection we transfer our scheme to a general case. Unfortunately, it is not always possible. Here we introduce an additional assumption: ρ m = min{ρ i }. We rewrite the statement of the Main Lemma as follows: We substitute this into (1) and rewrite the obtained equation as follows: Here Note that the minimal element in I 1 is ηg m−1 = 1. We transform (11) as follows: We know that E 1 (λ) = O(e −λ ). Therefore all terms in the right-hand side of (12) are O(e −ς ′ λ ), ς ′ > 1, whence E 2 (λ) = O(e −ς ′ λ ). Thus, Now we can rewrite (11) as follows: Note that even for ς ∈ I 1 ∩ I 2 the coefficients c 2 ς (λ) in general differ from c 1 ς (λ). However, this relation is quite similar to (11). Therefore, we can hope that this algorithm can be iterated. Let us write down a general form of the iteration. We have a function E k (λ) satisfying the following relations: We rewrite (13) as follows: Note that E k (ηλ) = O(e −ης k λ ), and ης k > ς k ; in the last inequality we use the assumption ρ m = min{ρ i }; This implies After substitution we obtain for E k+1 (λ) a relation similar to (13). It remains to make sure that ς k+1 ς ′ k+1 : Thus, we can separate more and more new terms. Theorem 3. Let ρ m = min{ρ i }. Then the function E(λ) can be represented as a series (all exponents in the last sum are negative). This series converges uniformly for sufficiently large λ. Proof. 
The calculations above give us (14) as asymptotic expansion. For c ς (λ), as for coefficients C k , D k in the simplest case, we have a recurrence: To prove the convergence of the series (14), one should show that the exponents ς grow sufficiently fast while coefficients c ς (λ) grow sufficiently slowly. First we show by induction that there exist C 1 > 0, C 2 > 1, such that Note that for any C 2 > 1 there exists C (0) 1 such that the estimate (15) holds for c 1 ς (λ). Next, let (15) be satisfied for some first terms in the series (14). We claim that (15) holds for the next term. Indeed, 1 , we obtain (15). Now we study the exponents in P k . We introduce linear functions l 0 (ς) = ρ m ς, l i (ς) = g i + ρ i ς, i = 1, . . . , m − 1, l m (ς) = ς. Any step of the algorithm can be described as follows: we take away the term with minimal exponent ς from P k and add this term to the series (14). In this process some terms with exponents l −1 0 (l i (ς)), i = 1, . . . , m are added or changed in P k+1 . The assumption ρ m = min{ρ i } implies that the graph of l 0 (ς) does not intersect graphs of other l i for ς > 0. Therefore, the linear transforms l −1 0 (l i (ς)), i = 1, . . . , m, Figure 1: The sequence of exponents ς k for a regular ladder have no positive fixed points. Thus, the sequence of exponents has no concentration points. This is shown at the Figure 1 which shows the graphs of l i (ς) for the regular ladder with m = 2. So, instead of the term with exponent ς any step of the algorithm adds to P k at most m other terms with exponents greater than ς + δ with some δ > 0. To estimate the series in (14) we change all new exponents to the minimal one (note that all the exponents arising at subsequent steps also decrease). Taking (15) into account we obtain for λ > ln(C 2 ) ς∈I |c ς (λ)|e (1−ς)λ The last series converges uniformly for sufficiently large λ. To complete the proof, as in the simplest case, we consider the remainder and note that the sequence E k (λ) converges to E 1 (λ) := e −λ E(λ) in the space L ∞ (Λ, +∞) for sufficiently large Λ. Further, where F k are tails of the series (16). Since this series converges uniformly for λ > Λ, we conclude that E 1 (λ) satisfies the homogeneous equation As in the simplest case, for some c > 0, ς 1 we have From (17) and (18) we obtain Without loss of generality we can assume Λ > 2 δ ln( 2 ∆ 2m−1 ). Then As in the simplest case, this gives E 1 (λ) ≡ 0 for λ > Λ, and the statement follows. Remark 6. It is easy to see that if we know the expansion (14) we can reconstruct the parameters of the function C(t). Now we consider the case of the regular ladder. Theorem 4. For a regular ladder the relation (14) is simplified and reads as follows: Proof. We slightly change the definition of E 1 (λ): Then the relation (11) becomes e −jλ E 1 (λ) − P 1 (λ), The function H(λ) is absent in this relation. Therefore it cannot arise in subsequent terms of the asymptotics. Figure 2: The sequence of exponents ς k for a ladder with a critical point This situation is shown at the Figure 2. One can see the intersection of graphs of l 0 (ς) and l 1 (ς) providing the concentration point, the sequence of exponents tending to this point, and an exponent greater then ς o , which cannot arise in our asymptotic expansion. for any given ς ′ < ς o . All elements of I ′ satisfy the inequality 1 < ς < ς ′ . If the coefficients c ς (λ) for ς < ς o do not vanish all together, this sum can have arbitrarily many terms.
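As a rough numerical companion to the asymptotic results above, one can estimate the leading power-law correction to e^λ for the classical ladder by fitting log(E(λ)e^{−λ}) against log λ and inspecting the residual oscillation, which should reflect the periodic factor Φ. The sketch below assumes the cantor_ladder helper from the earlier snippet; the λ range and grid resolution are arbitrary, and the estimate is only indicative.

```python
import numpy as np

# assumes cantor_ladder() from the previous sketch
t, C = cantor_ladder(n_iter=22, n_grid=3**10 + 1)

lams = np.logspace(1.0, 2.5, 60)
# log( E(lambda) * exp(-lambda) ) computed in a numerically safe way,
# since exp(lambda*(C - 1)) never exceeds 1
y = np.array([np.log(np.trapz(np.exp(lam * (C - 1.0)), t)) for lam in lams])
x = np.log(lams)

slope, intercept = np.polyfit(x, y, 1)
print("rough estimate of the power-law exponent in the prefactor:", slope)

# the residual, viewed as a function of log(lambda), should display the bounded
# oscillation corresponding to the periodic factor Phi
residual = y - (slope * x + intercept)
print("residual oscillation amplitude:", residual.max() - residual.min())
```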
2012-03-18T07:44:48.000Z
2012-03-18T00:00:00.000
{ "year": 2012, "sha1": "668936901607b7420a3ea358fb838555cdb7aa10", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "668936901607b7420a3ea358fb838555cdb7aa10", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
249220348
pes2o/s2orc
v3-fos-license
A Multi-Label Classification of Al-Quran Verses Using Ensemble Method and Naïve Bayes

Al-Quran is the holy book that serves as a guide and a source of law for Muslims, so understanding and studying the Al-Quran is very important for them. To make it easier for Muslims to understand and study the Qur'an, it is necessary to classify the verses of the Al-Qur'an. This study built a system that performs multi-label classification of Al-Quran verses; multi-label means that the classification assigns each verse of the Al-Quran to more than one topic. The model is built using the ensemble method, combining several naïve Bayes algorithms. The ensemble method was chosen because research on different datasets has shown that it can obtain good performance, and the naïve Bayes algorithm was chosen because its calculation is simple and therefore requires fairly short computation time. A preprocessing step is also carried out in order to compare performance results. To measure the performance of the system that has been built, the hamming loss is used. Based on the experimental results with several testing scenarios, the best performance is obtained by combining Multinomial NB and Bernoulli NB, with a hamming loss value of 0.1167. Thus, the use of the ensemble method can improve performance compared to not using the ensemble method. This research thereby provides a multi-label classification model for the verses of the Al-Quran built with the ensemble method.

INTRODUCTION

In 2015, the number of Muslims worldwide reached 1.8 billion. The Al-Quran is a holy book and a way of life for Muslims [1]. Studying and understanding the verses of the Al-Quran is an obligation for Muslims; thus, a Muslim is obliged to study the holy book Al-Quran kaffah, or thoroughly. The Al-Quran consists of more than 6000 verses and each verse has a different topic; a single verse can even cover more than one topic [2]. The topics contained in the Al-Quran are very diverse, ranging from Islamic history to charity, morals, and others. In the Tafsir Al-Quran Cordova interpretation published by Syaamil Quran, Bandung, there are 15 different topics [3]. One way to make it easier to learn the Al-Quran is to classify the existing topics. Therefore, it is necessary to carry out a classification process on the verses of the Qur'an. The classification of verses in the Qur'an can be categorized as multi-label text classification [4]; multi-label means that the classification assigns each verse of the Quran to more than one topic. With this Al-Quran classification system, Muslims around the world are expected to be able to easily distinguish and study the category of one verse relative to another. Text classification is a process of grouping text into certain classes [5]. Text classification is used for many kinds of text documents in many fields and for different purposes [6].
One application of text classification on the topic of Al-Quran verses has been carried out by Abdullah Adeleke [7]. In this study, it is explained about the comparison of algorithms on the topic of Al-Quran verses. The research was conducted by Ananda Pane [2]. In this study, the researchers divided the verses of the Al-Quran into 15 different classes. Multi-label classification is a case of text classification. In the case of multi-label text classification, each text or document can be grouped into more than one class [8]. The multi-label classification also illustrates the problems that exist in the world. In this study, the multilabel classification used is to classify the verses of the Al-Quran into 15 different classes. In several previous kinds of related works to text classification that has been carried out, the classification model using the naïve Bayes algorithm produces a fairly high performance [2] and [5]. Ananda Pane et al. [2], have researched the case of multi-label text classification of Al-Quran verses in English translation. In this study, the multinomial naïve Bayes algorithm was used and focused on the use of stemming on preprocessing. The use of stemming can also accelerate the computing speed up to 29.44%. By using multinomial naïve Bayes, the resulting hamming loss of 0.1247 is the best performance. However, these figures are obtained without using stemming. To overcome this, they suggest using other selection features to get different performances. In Shou Xu's research [5], a classification text study was conducted using naïve Bayes. There are 3 naïve Bayes algorithms used in this study. Multinomial naïve Bayes, Bernoulli naïve Bayes and Gaussian naïve Bayes. Multinomial can classify text with an f1 score of 82%, followed by Bernoulli with an f1 score of 77%, and finally Gaussian at 70%. The ensemble method is a combined method of several models that can be used for classification [9]. In the ensemble concept, several models that have been built will do majority vote to find the best classification results [10]. This method has been proven in studies [10] and [11] by producing better accuracy than without using the ensemble method. With some research and completion by existing methods, researchers will focus on research using the ensemble method to differentiate from previous research. The ensemble method combining the Gaussian naïve Bayes, Multinomial naïve Bayes, Complement naïve Bayes, and Bernoulli naïve Bayes algorithm. This research was conducted to build a classification model system using the ensemble method and several naïve Bayes algorithms. The model aims to classify the English translation of the Al-Quran into certain topics. This research was conducted to analyze the effect of using several combinations of preprocessing steps on the results of hamming loss performance. This study also aims to analyze the effect of the naïve Bayes algorithm with the ensemble method and without the ensemble method. Research Flow In this research, a multi-label text classification system was built. The classified text is an English translation of the Al-Quran verse. In building the text classification system, there are several steps carried out. First, the researcher prepared a labeled Al-Quran dataset. Then, the dataset will be processed at the data preprocessing step which aims to make the data higher quality. Furthermore, feature extraction will be carried out using TF-IDF. 
At the classification step there are two processes: the first is building models with four naïve Bayes algorithms, and the second is combining these models into an ensemble using majority voting. The ensemble method is a new approach for the Al-Quran dataset. An overview of the system to be built can be seen in Figure 1.

Dataset

The dataset used in this research is an English translation of the Al-Quran verses that has been labeled in an Excel file. The labels consist of 15 topics (classes) according to the Tafsir Al-Quran Cordova published by Syaamil Quran, Bandung. The Al-Quran dataset comprises more than 6000 verses of the Al-Quran and can be accessed on the Dataverse [13]. Each verse has at least 1 label (class).

Preprocessing

At the preprocessing step, the training data and the test data receive the same treatment. There are several preprocessing steps, including case folding, punctuation removal, stemming, stopword removal, and tokenization. The preprocessing steps can be seen in Figure 2. In Figure 2 it can be seen that the first step is case folding, which converts each word into lowercase letters. Then there is punctuation removal, to remove punctuation marks such as semicolons and others. Furthermore, there is stemming, which removes initial or final affixes to reduce a word to its base form. Then there is stopword removal, which removes common words or conjunctions that carry no meaning. The last step is tokenization, which splits the words in each sentence. Examples of input and output of the preprocessing can be seen in Table 2.

Feature Extraction (TF-IDF)

Feature extraction is used to give a weight to a word. In this research, the weights are calculated using the TF-IDF calculation. TF describes the number of occurrences of a word in the document and IDF describes how important a word is to the document. The TF-IDF value is calculated as in equation (1):

w_ij = tf_ij × log( N / df_j ),

where w_ij is the weight of word j in document i, tf_ij is the number of occurrences of word j in document i, N is the number of documents, and df_j is the number of documents containing word j.

Classification

The multi-label text classification in this study consists of several steps, including the splitting of training and test data, model building, model training, prediction of each label by each model, and combining the prediction results of the models using majority voting. The ensemble method (majority voting) is a new approach for the Al-Quran dataset. The detailed classification steps of each fold of the ensemble method are presented in Figure 3. At the data split step, to divide training and test data, the researcher used K-fold cross-validation with K=5. In each fold, four classification models are built with naïve Bayes algorithms. The naïve Bayes algorithms used include Gaussian naïve Bayes (GNB), Multinomial naïve Bayes (MNB), Complement naïve Bayes (CNB), and Bernoulli naïve Bayes (BNB). Basically, every model with a naïve Bayes algorithm produces a single-label classification; in this case, each model classifies 15 times, once for each label. Each prediction result of each model is entered into the ensemble method. The ensemble method used in this study is majority voting, taken over the predictions of the n models. To illustrate the majority voting: if 3 models are entered into the ensemble method, then 2 or more out of the 3 models producing the prediction 1 will yield a vote of 1.
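As an illustration of the pipeline just described, the following Python sketch builds per-label naïve Bayes models on TF-IDF features, combines them by majority voting, and scores the result with hamming loss. The toy texts, the three-label matrix, and the choice of three base models (GaussianNB is omitted here because it needs dense input) are placeholders rather than the actual dataset or code used in this study.

```python
import numpy as np
from sklearn.base import clone
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB, BernoulliNB, ComplementNB
from sklearn.metrics import hamming_loss

# toy stand-ins for the verse translations and their binary topic labels
texts_train = ["give charity to the poor", "the story of the prophet", "pray at night"]
texts_test  = ["charity purifies wealth"]
Y_train = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])   # only 3 labels for brevity
Y_test  = np.array([[1, 0, 0]])

vectorizer = TfidfVectorizer(lowercase=True)             # preprocessing kept minimal
X_train = vectorizer.fit_transform(texts_train)
X_test = vectorizer.transform(texts_test)

base_models = [MultinomialNB(), BernoulliNB(), ComplementNB()]
n_labels = Y_train.shape[1]

# one binary classifier per label and per base model, then majority voting per label
votes = np.zeros((len(texts_test), n_labels, len(base_models)))
for m, proto in enumerate(base_models):
    for label in range(n_labels):
        clf = clone(proto)                               # fresh model for each label
        clf.fit(X_train, Y_train[:, label])
        votes[:, label, m] = clf.predict(X_test)

# a label is predicted 1 when more than half of the models vote 1
Y_pred = (votes.sum(axis=2) > len(base_models) / 2).astype(int)
print("hamming loss:", hamming_loss(Y_test, Y_pred))
```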
An example of the classification results using 3 models can be seen in Table 4. Evaluation (Hamming Loss) The evaluation step that will be used in this research is hamming loss. Hamming loss is used because it is suitable for multi-label classification cases. The smaller the hamming loss, the better. To calculate the hamming loss [5] follow a equation (2). Where N is number of data, L is column of label, ̂( ) is multi-label classification target and ( ) is multi-label classification output. RESULT AND DISCUSSION The evaluation step of this research was carried out on the dataset of the English translation of the Al-Quran. The dataset that used for testing is above 6000 data(text). The testing scenario of this research is carried out on preprocessing the data, and testing the classification method. The first scenario carried out in the preprocessing will show the effect of the performance results using all preprocessing steps, then without using stopwords and without stopwords as well as stemming. The second testing scenario was carried out on the classification using the ensemble method and the naïve Bayes algorithm. Hamming loss evaluation used will produce a value with 4 digits behind the comma. This is done because the calculation of each fold, the hamming loss will be calculated for 1247 lots of data and 15 labels. And every wrong prediction will give a hamming loss value of 0.00005. The performance results of each fold will be divided by 5 according to the number of K-folds. Result The first testing scenario was carried out on the preprocessed data. In this scenario, each model using the Multinomial naïve Bayes algorithm, Bernoulli naïve Bayes and Complement naïve Bayes algorithms are tested with full preprocessing, and without using stopwords and without stopwords as well as stemming on preprocessed data. The experimental results in the first testing scenario can be seen in Table 2. The second testing scenario was carried out on the classification using the ensemble method and the naïve Bayes algorithm. In this scenario, 4 models with naïve Bayes algorithm are included in the ensemble method with different combinations. The first combination in finding the most votes is to enter 2 models in the ensemble method. The results of the first combination can be seen in Table 3. Another combination in finding the most votes is to include 3 models and 4 models in the ensemble method. The results of the first combination can be seen in Table 4. Analysis Analysis of the testing results was carried out on 2 existing testing scenarios. The results of the first testing scenario can be seen in Figure 4. In Figure 4 it can be seen that the use of all preprocessing steps does not provide a better hamming loss value. The testing also shows that the use of the Bernoulli naïve Bayes algorithm produces the best hamming loss value. In the Gaussian NB, Multinomial NB and Complement NB algorithms, the use of preprocessing without stopwords and stemming results in a better hamming loss value than full preprocessing and preprocessing without stopwords. The use of stopwords will provide a filter or reduction of words in the sentence. This will also change the structure of the sentence from initially using a conjunction to not. Therefore, the Al-Quran dataset has a sentence structure that cannot be separated or reduced because every word in the verse can give a certain meaning. The use of stemming will certainly eliminate the affixes contained in each word. 
This can give a different meaning to each word, so the Al-Quran dataset is not suitable for removing affixes. The Bernoulli naïve Bayes algorithm produces a better hamming loss value than the other three algorithms. The Bernoulli NB algorithm represents features, or words, in binary form: a given word in a verse of the Quran is counted as 1 if it is present and 0 if it is not, so each word in the verse only has a value of 1 or 0 rather than the frequency of occurrence of the word. The results of the second testing scenario are shown in the diagram in Figure 5. From the previous testing scenario it can be seen that Bernoulli NB is the superior model, and the results of this testing indicate that the addition of the Bernoulli NB model produces a better hamming loss value. These results arise because the Bernoulli NB model provides good predictions, so the majority vote can give the same or even better voting results. In the same testing scenario, the results show that the use of the ensemble method can indeed provide a better hamming loss value, as shown in Figure 6. The more models that provide predictions for the ensemble method, the better the resulting hamming loss value will be. In certain cases, the ensemble method may not provide a better hamming loss value; this happens when the models used are very far apart in the hamming loss values they provide, so that the ensemble's hamming loss may not be better, although it will not be far from that of the model without the ensemble method.

CONCLUSION

Based on the results of testing and analysis, this research draws several conclusions. Referring to the testing scenarios carried out, preprocessing the data without using stopword removal and stemming improves the evaluation results of the multi-label text classification on the Al-Quran dataset. The use of the ensemble method can also provide a better hamming loss value. The best hamming loss was obtained using the ensemble method (majority voting) between the Multinomial NB and Bernoulli NB models, with a hamming loss value of 0.1167. The more models used to predict within the ensemble method (majority voting), the better the performance. However, based on the analysis of the testing results, the ensemble method performs better when good models are combined with other good models. Based on the research problem, a multi-label classification system using the ensemble method was built; this is certainly useful in making it easier for Muslims to learn the Al-Quran. This research also achieves better performance than similar previous research. For further research, the use of more diverse and more numerous algorithm models is one way to find different performance, and the use of algorithm models of similar quality can also provide different performance.
2022-06-01T15:09:39.888Z
2022-03-31T00:00:00.000
{ "year": 2022, "sha1": "71bb4a6e7f55b7eefe6d120d3606d3bd23b8a952", "oa_license": "CCBY", "oa_url": "https://ejurnal.seminar-id.com/index.php/bits/article/download/1287/912", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "2b203f04df36268cfd9ee543da9cac63cce5f20f", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
238050403
pes2o/s2orc
v3-fos-license
Assessing Impact, Performance and Sustainability Potential of Smart City Projects: Towards a Case Agnostic Evaluation Framework : We report on a novel evaluation framework to globally assess the footprint of smart cities and communities (SCC) projects, being also expandable to the case of smart grid related projects. The uniform smart city evaluation (USE) framework is constructed upon three complementary evaluation axes: the first one aims to weigh up the success of a SCC project based on performance metrics against pre-defined project-specific target values. The second axis focuses on the project’s impact towards the sustainability of a city and it is bench-marked against national and international key objectives arising from strategic plans. This bench-marking feeds the third axis which provides a more inclusive evaluation against four pre-defined and widely acclaimed sectors of interest. The steps to be followed for the uniform evaluation of each axis and corresponding index are presented in detail, including necessary key performance indicator (KPI) normalization, weighting, and aggregation methods. The resulting indices’ scores for each axis (namely project performance index, sustainability impact index, and sustainability performance index) can be post-processed with adequate data processing and visualization tools to extract important information on the extent to which the range of success of a SCC project contributes to the city sustainability progress. Illustrative examples from an on-going SCC project are provided to highlight the strengths of the approach. The proposed framework can be used to compare multiple projects within a city and sustainability and project performance in different cities, evaluate the interventions chosen per project against city needs, benchmark and design future projects (with, e.g., reverse engineering, projections), as well as evaluate various spatial and temporal scales. Introduction and Motivation Smart and sustainable cities have been receiving increased interest from scholars and municipal stakeholders over the past 10 years, in the hopes of contributing to sustainable development goals (SDG) [1] and realising a prosperous climate-neutral future and better quality of life [2]. Nowadays, over 55% of the world's population lives in cities and urban areas [3], while approximately 75% of the world's energy consumption and almost a similar share of global anthropogenic carbon emissions is attributed to cities [4]. The need for combating those major challenges of urbanization and climate change led to the rapid development of various smart city initiatives, such as the C40 Cities [5], Daring Cities [6], Covenant of Mayors [7], Smart Cities Information System (SCIS) [8], 100 Intelligent Cities Challenge [9], Energy Cities [10], etc. Their main focus is on policies and measures that aim to accelerate energy transition and reduce greenhouse gas (GHG), as well as other pollutant emissions at city level [11], while actively engaging citizens in urban regeneration schemes [12]. The design of transformative policies towards recovery from the current COVID-19 pandemic crisis is also of vital importance for urban sustainable development due to changes caused in the working environments and the use of buildings [13]. 
In the European Union (EU), several smart cities and communities (SCC) projects foster solutions pertaining to energy, mobility, and information and communication technology (ICT) in order to transform cities into smart, green, liveable, and sustainable, as well as exchange know-how in terms of project results, best practices and lessons learnt [14][15][16]. An important component for designing and implementing a smart city concept is the evaluation of the impact of the demonstrated solutions and actions [17,18] against a city's sustainability vision [19,20]. This assessment process needs a common and shareable evaluation framework which can measure the effectiveness of the interventions in relation to smart city development and sustainable performance progress [21,22]. Such an evaluation framework can be used as a decision support tool for policy makers and financiers to evaluate smart city interventions and compare how impactful each intervention is for a city or assess what types of interventions create a greater impact in different cities with varying contextual characteristics. A comprehensive and simplified framework can also act towards enabling the engagement of high power and highly interested stakeholders, i.e., governance, technology, and service providers, and energy utilities early on, which is critical for the successful implementation of smart city solutions, while it can also be used for raising citizen awareness on the impact of the proposed interventions. Currently, evaluation frameworks rely on the definition and use of key performance indicators (KPI) [22,23] that quantify project results based on quantitative or qualitative data in order to assess how close cities are to meet their goals and provide a comprehensive analysis regarding smart and sustainable performance and progress [24]. The development of an evaluation framework lies in individual indicators which either constitute an indicator set or they form a composite index. An indicator set is a group of single non-aggregated indicators relevant to pre-defined performance dimensions constituting a simple indicator evaluation framework for a project or a city performance. A composite index or system is a single and comparable metric based on normalization, weighting, and aggregation methods of multiple indicators [25], that provide as a final output a ranking of the overall performance levels of a range of evaluations [26]. Typically, sustainability is characterized by complexity and multidimensionality, and is expressed by various domains and typologies [27]. This is also the case for urban sustainability, due to the fact that cities are complex and multidisciplinary ecosystems [28] with many differences in a variety of features such as population, geography, climate and environment, natural resources, capital, culture, infrastructure, etc. In this regard, significant challenges and issues arise when it comes to the strategic planning and assessment of urban sustainability [29]. Some of the most common challenges pertain to discrepancies in the definition, theoretical modeling and scope of urban sustainability, while a variety of urban sustainability concepts have been contrived, namely the sustainable city, the smart city, the eco-city, the low-carbon city, the green city, etc. [30]. 
Other issues stem from deficiencies in the inclusive aspects, incompatibility of different solution approaches and limited connection between the concepts of sustainable cities and smart cities in the urban model, narrowing the effectiveness of sustainable transformation. We note here as an example the design of weak instead of strong sustainability schemes; a strong sustainability perspective assumes complementarity of human-made and natural capital in urban areas, while weak sustainability assumes that technology can substitute resources and ecology [31]; Moreover, deviations are also observed between the planned project and city goals and global targets set towards sustainability and smartness. More specifically, smart city projects tend to follow an evaluation approach based on specific project objectives, business models, and expected impacts of the engaged cities, rendering the comparability with other projects or cities difficult or unfeasible. The effective incorporation of key aspects that consider the urban smart-sustainability fix [32], address, and converge the different conceptual paths of urban sustainability [29] and link properly the concepts of smart city and sustainable city [33,34] in the urban ecosystem, will help projects and consequently cities to achieve multiple objectives. To this end, urban sustainability assessment can be designed following an integrated approach focusing on a solid and unified evaluation process of project-based and city-oriented progress capable of generating clear, sufficient, understandable, and ready-for-benchmarking results and conclusions for all projects and cities. In this context, this paper builds upon the body of knowledge around the evaluation of sustainability in cities and concentrates on developing and proposing an easy-to-use and versatile evaluation framework for assessing and comparing the success of a project and the extent to which it contributes to the city sustainability progress against EU goals. We aim to answer four research questions, furthering the state of the art as follows: 1. What are the main frameworks used to evaluate cities in terms of sustainability and smartness and which are their attributes and shortcomings? 2. How a SCC project can evaluate its performance against pre-defined targets? 3. How a SCC project can holistically evaluate its contribution and impact towards the sustainable goals of a city? 4. How results achieved by each smart city can be comparable against other ones, on a common basis, accounting though that for each city there are specificities 5. How can one evaluate in a fair and transparent way multiple SCC projects, as well as cities as whole promoting comparability and replication potential? Question 1 is being addressed in Section 2 where an extensive literature review and analysis has been performed. The proposed evaluation framework (the uniform smart city evaluation-USE framework) tackles questions 2-5 in a cyclic, inclusive manner under a three-axes approach. By adopting USE framework, the impact, performance, and sustainability potential of smart city projects can be assessed in parallel while the framework can be used to compare multiple projects within a city, as well as sustainability and project performance in different cities, evaluate the interventions chosen per project against city needs, benchmark and design future projects (with, e.g., reverse engineering, projections), as well as evaluate various spatial and temporal scales. 
The remainder of this paper is organized as follows: Section 2 provides an overview of common urban sustainability and smartness indices, their main characteristics and problematic elements. In Section 3, we present our newly developed uniform smart city evaluation (USE) framework composed of three axes or indices, namely: (i) project performance index (PPI), (ii) project sustainability impact (PSI) index, and (iii) sustainability performance index (SPI). The necessary methods and tools for calculating the indices under the USE framework are detailed in Section 4 while the step-by-step calculation procedure per index is presented in Section 5. In Section 6, a set of illustrative examples from an ongoing SCC project is provided showing how the framework works in terms of weighting, aggregating, and scoring of indicators. We discuss on necessary steps and current limitations and shortcomings in Section 7 while concluding remarks and suggestions for future research are presented in the last section of the article (Section 8). Review of Urban Sustainability and Smartness Evaluation Frameworks and Indices In order to provide an overview about the main characteristics, inclusive aspects, methodological approaches, and limitations of the frameworks and indices developed for the topic of urban sustainability, an extensive literature review and analysis was carried out based on a series of latest research studies, and several indexing reports published in the field. The screening process of both scientific and gray literature was conducted with the aid of several search engines and online databases, e.g., Scopus, Web of Science, ScienceDirect, Google Scholar, etc., with a view to include a wide spectrum of journals, books, and technical reports with high relevance to smart city and urban sustainability assessment. The purpose of this bibliographical search was to identify the most well-known and widely-accepted city sustainability and smartness indices, frameworks, and studies from the last decade (after year 2012). A generic search was performed by typing keywords such as "urban sustainability" AND "assessment", "smartness" AND "assessment", "urban sustainability" AND "evaluation framework", "urban sustainability" AND "index", "smart city" AND "index", "smart and sustainable cities" AND "ranking" OR "concept", and so forth. Then, the abstracts and titles of the most relevant articles were examined in order to select potential references that align with the scope and the inclusion criteria of this research. The review of gray literature pertinent to the area of urban sustainability and smart city indices was also considered at this stage to extract available information and potential frameworks. The studies from both categories that passed the initial filtering process went through a full read and review, and only scientific or technical material with sufficient methodological content and contribution in the field was included in the final inventory of papers and reports which were analyzed in detail. A similar approach has been followed in Ref. [35], where the main drivers for increased city smartness have been studied. 
We note that even though sustainability goes beyond local and urban areas, and several composite indices tackle country- or global-scale evaluation (e.g., the Climate Change Performance Index 2021 [36], the Energy Trilemma Index 2020 [37], the Environmental Performance Index 2020 (EPI) [38], and the Energy Transition Index 2021 (ETI) [39]), we have restricted our study to urban sustainability and smartness at the district and city level. Our research indicates that a variety of models and tools have been developed for the evaluation and comparability of smartness and sustainability in urban areas [40,41]. These tools are based on composite indices that assess critical dimensions of sustainability and smartness, but only a few of them combine features and qualities of both aspects. A good example of a composite index, offered also as an interactive tool, that introduces both technology maturity and sustainability aspects into urban development is the networked society city index [42]. Another initiative at the EU level, under the European Green Capital framework, is the Green City tool [43], which aims to facilitate sustainable urban planning with a main focus on offering best practices and guidance. It provides a simple, straightforward tool, but one limited to generic qualitative inputs of self-assessment for cities. In general, composite indices provide some key outcomes, such as ranking and benchmarking of cities, facilitating research and analysis in urban design [44] and assisting in sharing knowledge for the development of smart and sustainable cities [45]. However, given that city sustainability entails a multitude of aspects and domains [46], all these evaluation frameworks and indices present methodological gaps and conflicts, as they capitalize on different definitions of urban performance and development [47] while showing an imbalance between smartness and sustainability [34]. Table 1 summarizes the most important composite indices associated with sustainability or smartness aspects at the urban level, along with a brief presentation of their benefits and shortcomings. Although there are many similarities among the characteristics of evaluation frameworks, rating systems, or composite indices, they differ considerably in conceptualization, focus, and goals, due to the diverse city needs, boundaries, and expected outcomes of the smart and sustainable cities under assessment, as well as the perspectives of the relevant stakeholders and experts. A majority of applications, experiments, projects, and initiatives use the "triple bottom line (TBL)" as a guiding principle in order to evaluate sustainability performance, integrating social, economic, and environmental variables [48]. A good illustration of this is China's urban sustainability indices (USI), the last version of which was launched in 2016 and uses 23 indicators categorized into the three dimensions of the TBL for ranking one hundred and eighty-five (185) Chinese cities of diverse sizes and development stages, assessing their sustainability performance level between 2006 and 2014 [49]. A primary issue with the China USI indices is that they do not adequately address smartness aspects. Furthermore, some indices represent strong sustainability assessment while others represent weak sustainability assessment.
A representative index with strong sustainability criteria is the sustainable development of energy, water, and environment systems index (SDEWES) that assesses the sustainable performance of one-hundred twenty (120) cities across seven (7) dimensions, while identifying also best practices for policy learning and adoption [50]. There are also indices that primarily focus on environmental sustainability, such as the European Green City Index which is grounded on 30 individual indicators to assess and compare the environmental performance of 30 big European cities from different countries [51], or indices that explore only specific urban aspects, such as urban mobility, air quality, business development, etc. [52], e.g., the index developed by Collins et al. (2019) that builds upon geographic, meteorological, and socio-economic data and K-means clustering to determine which out of 119 U.S. cities included in the analysis are bicyclingfriendly cities [53]. Several indices also have drawbacks that lie in the difference and multiplicity of the data sources used for results' comparison, owing to lack of data for some indicators or even due to inconsistency of the framework approach. In some cases, country-level data are utilized or extrapolation techniques are implemented, while data are also obtained from other indices to calculate a number of their metrics (e.g., References [54,55]). In the case of a city evaluated by two or more different indices, results lead to diverse type of rankings, implying an indication of subjectivity. A good illustration of this is the city of London when assessed via the IESE Cities in Motion Index 2020 and the IMD Smart City Index 2020. The city ranks top in the first index and on the fifteenth place in the second, due to the different approaches in the smart and sustainable city concept and its dimensions, as well as the number of cities and indicators of city evaluation between the two indices, leading to extremely difficult comparison of results. In addition, major differences and incoherences are observed among composite indices regarding the normalization, weighting, and aggregation methods used to evaluate performance. At the project level of evaluation, an important issue is that, usually, the targets set by the local communities and cities vary significantly from key policies and objectives defined in the strategic plans which are in alignment with wider smart city, urban sustainability, and energy transition aspects, e.g., EU key objectives and directives. Each project uses their own assessment methods and capitalizes on sustainability aspects based on their needs. For example, the evaluation frameworks of the EU funded SCC projects +CityxChange [56] and SPARCS [57] differ noticeably in their methodological approaches: +CityxChange builds upon a framework that aims to evaluate the impact of the project interventions at demo-site level by a simplified SCIS-based (SCIS refers to the smart city information system-an SCC knowledge platform incorporating data reporting through pre-defined KPIs) and project-defined indicator set, while SPARCS includes, apart from SCIS, a set of KPIs selected from numerous frameworks and aims to use a data normalization methodology, in order to provide an objective assessment of the project results. This leads to problematic interpretation and decision-making when assessing project performance levels or comparing project results between cities or projects. 
In addition, the existing frameworks cannot be used uniformly to evaluate the smart and sustainable features of cities, i.e., the overall city sustainability, since they do not follow a similar monitoring procedure of their expected impacts in terms of sustainability. For instance, the framework proposed by the SHARING CITIES project [58] entails a list of 129 indicators classified in six performance domains, whereas the ATELIER project [59] includes a set of 44 indicators in six performance domains; these domains are not only different but also need to be normalized and aggregated properly to provide information for comparing the smartness and sustainability levels of the participating cities.
Table 1. Composite indices associated with sustainability or smartness aspects at the urban level, with a brief presentation of their benefits and shortcomings.
• The Global Cities Index (GCI) 2020 [64]: measures the international standing of 151 cities globally across five dimensions, resulting in city rankings in terms of business opportunities and economic innovation. Number of indicators: 29. Dimensions: Business Activity, Human Capital, Information Exchange, Cultural Experience, Political Engagement. Pros: outlines new challenges and priorities in the business sector; provides a snapshot of how COVID-19 has shattered the status quo; reflects emerging geographies. Cons: focuses only on business innovation and economy; not primarily based on smart city technology and urban sustainability pertaining to environmental issues.
• Global Power City Index (GPCI) 2020 [65]: evaluates and ranks 48 major cities of the world according to their comprehensive power level in terms of average well-being and access to urban facilities, in order to attract people, capital, and enterprises. Number of indicators: 70. Dimensions: Economy, Research and Development, Cultural Interaction, Liveability, Environment, Accessibility. Pros: function-specific ranking; focuses on the development state of cities including a broad set of factors; examines changes in working styles and commuting owing to COVID-19. Cons: does not sufficiently address smartness aspects; includes a limited number of indicators for energy performance, governance, etc.; differs conceptually from the three-pillar sustainability approach.
• Urban Development Index (UDI) 2020 [47]: aims to measure the level of sustainable development in the city of Rio de Janeiro via benchmarking with four other cities, based on an equal weighting approach. Number of indicators: 32. Dimensions: capitalizes on the four knowledge-based urban development (KBUD) pillars: economy, society, environment, and governance. Pros: provides a baseline for how a city is positioned in relation to others and determines how to improve its urban performance; comprehensive structure of eight target groups. Cons: does not entail resource efficiency, energy transition, and climate change aspects; renewable energy and air quality indicators not included; data used from other indices.
• Mercer's Quality of Living City Ranking (QoL) 2020 [66]: assesses living conditions for 140 cities against generally accepted standards and gives recommendations to potential employees for assigned destinations. Number of indicators: 39. Dimensions: political and social, economic, socio-cultural, health, education, public services and transportation, recreation, consumer goods, housing, natural environment. Pros: city-to-city index comparison that quantifies the difference in the quality of living between any two cities; provision of data on quality of living that helps employees sent to work abroad, students, etc. Cons: not suitable for evaluating overall sustainable development progress nor smartness; does not include aspects pertaining to resource efficiency, technology, energy, and climate targets.
• The Global Liveability Index (GLI) 2019 [67]: assesses which locations among 140 cities worldwide provide the best or the worst living conditions based on five evaluation areas. Number of indicators: 30. Dimensions: Stability, Healthcare, Culture and Environment, Education, Infrastructure. Pros: quantifies the challenges to an individual's lifestyle in any given location and allows for direct comparison between locations.
• Sustainable development of energy, water, and environment systems (SDEWES) index: benchmarks the performance of 120 cities across energy, water, and environment systems towards promoting policy learning, action, and cooperation and bringing cities closer to sustainable development. Number of indicators: 35, across seven dimensions including energy.
• Smart city performance index: presents a total score of smart city performance for 120 eligible cities across the globe, based on the Smart Cities dimensions, while properly considering also sustainability issues. Number of indicators: 62. Dimensions: Economy, People, Mobility, Living, Governance, Environment. Pros: entails a multitude of indicators addressing most of the factors of smart and sustainable cities; universal spatial scope in tandem with regional characteristics. Cons: lack of a unified approach (uses and combines data from individual indicators and other indices, e.g., Mercer, Innovation Cities Index); indicator averaging at each level.
• Sustainable City Index 2.0 (SCI 2.0) 2014 [76]: evaluates 403 Dutch cities, showing at a glance the level of their sustainability. Number of indicators: 24. Dimensions: Economic Well-being, Environmental Well-being and Resource Circularity, Human Well-being. Pros: examines thoroughly the correlation between various indicators; results can also be aggregated to the provincial level. Cons: does not include governance aspects; several data sources.
• UN-Habitat's City Prosperity Index (CPI) 2012 [77]: evaluates the degree of prosperity in the cities of the world based on the concept of the wheel of urban prosperity, a conceptual matrix that symbolizes well-balanced development across five spokes. Number of indicators: 17. Dimensions: Productivity, Infrastructure Development, Quality of Life, Equity and Social Inclusion, Environmental Sustainability, Urban Governance and Legislation. Pros: aids the design of effective policy interventions; allows to evaluate and report on city progress towards the implementation of the SD Agenda 2030; depicts the strengths or weaknesses of prosperity factors. Cons: does not address smartness aspects, e.g., smart infrastructure, technology, energy, mobility, etc.
• CITYWEB Index 2012 (City-Card) [78]: evaluates and ranks the performance of cities against city concepts; each city receives a score out of 100 in five city development models and a total grade out of 500. Number of indicators: 21. Dimensions: Global Cities; Nice Cities; Knowledge-intensive Networks; Intelligent Cities; Creative Cities. Pros: highlights specific areas where a city can be improved; equally useful to individuals, businesses, academics, and governments; includes both quantitative and qualitative factors (well-balanced). Cons: weighting issues; lack of consistency of the data; lack of data for some categories or cities; overlap of some indicators.
Taking these critical points into consideration, the relationship between a project's success and the impact this success has on the city's sustainability progress remains an issue that still requires further research.
Cities' sustainable performance evaluation should, on the one hand, rely on the needs, visions, and strategies that address the peculiarities of every city ecosystem, but, on the other hand, it should be based on a harmonized, holistic, and unified approach able to benchmark the overall performance in terms of sustainability and smartness for all projects and cities. In the following section (Section 3), we present a newly developed framework (the USE framework) for the evaluation and progress assessment of SCC projects along with their contribution towards a city's sustainability targets.
The Uniform Smart City Evaluation (USE) Framework-A Common Framework for SCC Projects and Cities
In a typical approach, cities construct internal projects (here we restrict ourselves to SCC projects-projects that in one way or another contribute towards their sustainability) in order to upgrade the functionalities, infrastructure, and services provided to their citizens. These projects can be focused on a particular geographical area, or they can cover extensive city regions and districts. Moreover, such projects can focus on a specific intervention domain, such as energy consumption, grid flexibility, e-mobility, etc. It is thus evident that a simple city sustainability index based on a pre-determined set of indicators is inherently limited: it cannot track the progress and success of each project, nor can it track the contribution and impact of each project to the sustainability of each city. In addition, the aggregation of interventions and their impact from the building level to a whole district and eventually to the city level is still ambiguous. For example, how do we choose representative buildings in order to scale up the evaluation of a city? A consensus needs to be reached on these aggregation approaches and on the definition of the spatial scales (we note here that a first attempt has already been made at an EU level, especially towards positive energy districts-PEDs [79,80], and we adopt these conventions in this work as we mention below). To this respect, we propose a uniform smart city evaluation (USE) framework under a triple-axis approach, as illustrated in Figure 1. The key objective of this framework is to provide a method of identifying the extent to which the success of a SCC project contributes to the city's sustainability progress. It can be used to evaluate the interventions chosen against real-time city needs, benchmark and design future projects (based on reverse engineering, projections, etc.), and ultimately compare multiple projects within a city, as well as sustainability performance between different cities. The ultimate goal of the proposed framework is to propose global metrics that can be used as a reference across multi- and cross-disciplinary projects, promoting comparability, transparency, and uniformity at various aggregation levels. To reach this goal, a series of policy-like decisions need to be made so that these metrics can be adopted at a wide scale (e.g., under the EU umbrella), and we discuss the necessary steps and possible barriers in Section 7. Most importantly, we promote the notion of common KPI repositories, which still need to be determined, for a holistic evaluation of future SCC projects. The USE framework consists of three evaluation axes that aim to tackle the aforementioned objectives in a holistic and self-contained manner. The first axis corresponds to the lower level of evaluation: the project performance index (PPI).
This index is fed by the project success indicators (PSI)-KPI-like metrics which are used to assess the successful (or not) implementation of each project's interventions and their impact against pre-defined targets relevant to this specific project. PSIs typically involve only monitored values of performance at a specific temporal scale after the project's start, and in the case where the target values are linked to baseline values (providing indications of percentage change, for example), their estimation (prior to the project's start) relies on modeled, and not actual, up-to-date baseline values, which should reflect the current status just prior to the monitoring phase. The definition of the PSIs is a newly proposed concept [81,82], mostly relevant to EU-funded SCC projects with clear call impact targets. We propose here their adoption by any SCC project and link them with the PPI evaluation. To clarify this index, let us assume a district-wise project which aims to increase the uptake of public EV charging points inside the specific district. The project has already set a target for this uptake of, e.g., three "new public EV charging points installed inside the district by the end of the project". This indicator is project-specific, assessed only on pre-defined spatial and temporal scales, and it is thus to be evaluated against this specific pre-defined target value and not against long-term sustainability goals. It is thus a PSI, in the sense that it should be measured and reported, and it provides a straightforward interpretation of what is done versus what is planned: did the specific project meet its goals at the predefined SC focus, spatial, and temporal levels? The aggregation of all PSIs leads to the project performance index. In Section 5.1, we provide details on the methodological approach for calculating the PPI with a step-by-step procedure and a complete flowchart. The second axis corresponds to the middle level of evaluation: the sustainability impact index (SII). Its assessment focuses on the multi-dimensional impact that a particular project has on the sustainable goals of a city that are relevant to the project, and it can be extracted on any spatial and temporal scale of interest. It differs significantly from the PPI in the following aspects:
• The SII is based on all the key performance indicators (KPI) defined in a SCC project, in contrast to the PPI, which incorporates only the (limited in number) PSIs. The KPIs are commonly linked to different dimensions (also called domains or categories, depending on the project) such as energy, economic, social, ICT, mobility, etc. The SII can thus assess the multi-dimensional performance of the project, providing an inclusive evaluation while also providing the possibility of sub-indexing per project KPI dimension;
• Each KPI target value is defined based on sustainability goals as proposed by national or international goals and policies/best available practices;
• The KPIs are assessed against actual baseline values (monitored before the project's start) and can, thus, directly showcase the progress of a city compared to the business as usual (BaU) scenario.
To clarify the SII's scope, let us consider again the aforementioned public EV charging points uptake as an example. A relevant KPI to the defined PSI of "new public EV charging points installed inside the district by the end of the project" would be "public EV charging points installed per 1000 capita".
Let us restrict ourselves to a particular district A and let us assume that the baseline value of this KPI is 2-the number of charging points per 1000 capita inside district A before the project. The target value for this KPI can be considered as 5.7 public charging points/1000 capita inside the same district, based on EU target values in the Road2Zero scenario [83]. Let us now assume that the project has reached its PSI target of 3 new EV charging points by the end of the project (which corresponds to an increase of the baseline value to below 3 points/1000 capita, if we consider that more than 3000 capita live within the defined district). It is evident that while the PSI target has been achieved, the project makes only a mild contribution towards the sustainable goal of the district (and, by aggregation, of the city). It can be theorized that in order to achieve its sustainable goal the district needs to implement approximately three additional similar projects. Of course, this example is too simplistic and unidimensional, but it gives a first understanding of the scope of the SII. In Section 5.2, we provide details on the proposed methodology for calculating the SII with a step-by-step procedure and a complete flowchart, where all relevant input and initial conditions will be further clarified (in addition to the complete illustrative example of Section 6). The third and last axis corresponds to the highest level of evaluation: the sustainability performance index (SPI). It is important to distinguish two use-cases (UC) for the evaluation of SCC projects under the SPI.
• UC1: The first use-case deals with on-going SCC projects that are currently in the development-implementation-evaluation phase and have already identified the necessary KPIs to be monitored. Herein, the SPI aims to provide a cross-dimensional evaluation under four pre-defined overarching sectors. Each sector (see Section 5.3 for more details on the definition of each sector) encompasses the most important KPIs of the project, extracted from all the KPI dimensions. In contrast to the SII, which clusters the KPIs under each dimension, this clustering allows for a more holistic but also targeted evaluation of each project's results into the specific sectors of interest, leading to a cross-dimensional evaluation. For example, consider again a SCC project which focuses on e-mobility. It is straightforward to conclude that this particular project aims to achieve impact in terms of sustainable urban transport (e.g., EV integration and adoption, car sharing schemes), urban infrastructure (e.g., EV chargers, V2G technology), and climate change (e.g., reducing urban transport-related emissions), in addition to any socio-economic benefits for the districts or cities of application. The categorization and evaluation of all the project's KPIs under KPI dimensions (as performed by the SII), although extremely important, might not be easily interpreted by citizens and city authorities: KPIs can be quite technical depending on the particular focus of the project (e.g., a KPI on battery degradation rate is an important aspect for any EV charging system), hindering a straightforward and layman-oriented understanding of the project's impact on the overall sustainability of the area. Moreover, KPI dimensions are not inclusive per se, in the sense that each dimension is well-defined and does not overlap with the rest of the dimensions (and associated KPIs).
Despite the fact that in the end SII provides an averaged evaluation of all dimensions ("multi-dimensional evaluation"), it is essential to integrate the project's results in a categorization that reflects cross-dimensional aspects. To this respect, the SPI and its clustering to overarching, easily interpreted, cross-dimensional sectors provide the necessary flexibility and inclusivity reflecting the project's performance versus the city's needs. • UC2: The second use-case pertains to future SCC projects which have the flexibility in adopting their KPIs at a later stage. The necessity for the SPI under this use-case comes directly from the fact that SCC projects are extremely wide in scope. A small or largescale city project can focus on energy related matters, such as building renovations, RES penetration, and grid flexibility, while another can only touch aspects related to mobility and district level storage. The KPIs along with their dimensions defined for each of these projects are targeted to their specific interventions and the evaluation results of each project cannot be fairly compared (comparing the SII of project A to the SII of project B is unfair as their scope is different). In addition, even if the projects' scope is similar, such a comparison lacks a common framework that includes all aspects in which a city needs to progress in order to meet its sustainability goals. These aspects are rarely limited to energy or e-mobility related matters. Leveraging the UN's sustainable development goals, it is easy to conclude that a real smart city is (at least) energy efficient, clean, safe, just, citizen-centric, culturally rich, healthy, and self-sufficient. Therefore, it is essential to provide an evaluation of a specific project against universal, all-inclusive, overarching sectors on a higher hierarchical level than the level of KPI dimensions (which are pre-defined for each project). These sectors should be widely acclaimed and should ideally include a multitude of common KPIs belonging to multiple dimensions and covering all aspects of a smart city. The SPI then provides an index that can be used to compare reliably projects with different or similar focus under the same umbrella, while each sector is linked to relevant sub-indices for a more targeted assessment. The cross-dimensional nature of the SPI provides increased interest for a city, being able to assess self-consistently its overall performance and initiate targeted projects to progress further on. As noted in Section 2, the definition of a common KPI repository per pre-defined SCC sector is a matter of high-level institutional and international decision-making and thus such definition is outside the scope of this work. Nevertheless, once a consensus is reached on the common KPIs, the implementation of our proposed framework is straight-forward as described in Section 5.3. In Section 5.3, we provide details on the proposed methodology for calculating the SPI with a step-by-step procedure and a complete flowchart, while also providing a definition of the four overarching sectors. We also distinguish the methodology between the two use-cases as described above. In summary, the proposed framework builds upon the inherent needs of cities, as well as the particular nature of each SCC project. We need to reiterate here that the concept of sustainability in a smart city context is unfortunately not clear, and multiple definitions render its uniformity ambiguous [84]. 
In this work, we have adopted UNECE's definition [85] which states: "A smart sustainable city is an innovative city that uses ICTs and other means to improve quality of life, efficiency of urban operation and services, and competitiveness, while ensuring that it meets the needs of present and future generations with respect to economic, social, environmental as well as cultural aspects.". This quite broad definition includes all aspects that pertain to a SCC project but also touches issues relevant to UN's SDGs being able to provide a holistic approach that incorporates the necessary multi-dimensionality of smart and sustainable city concepts. To this respect, our framework approaches a strong perspective owed to the contribution of multi and cross dimensional indicators in the overall assessment. It is thus strongly recommended that in the weighting procedures this strong sustainability conception [86] should be taken into account from the relevant stakeholders when assigning weights to each indicator. The calculation of each one of the three indices as defined above, requires a series of pre-evaluation and evaluation steps that will be detailed in the Section 5. These steps make use of several statistical and mathematical tools which are presented in the following section (Section 4). Unit Normalization Before moving on describing typical tools for developing composite indices (e.g., normalization, weighting, and aggregation) and the methods of choice under the USE framework, it is important to ensure that the indicators to be utilized are expressed in units that are meaningful and comparable. This is highly relevant for indicators that are expressed in absolute units (e.g., kWh of energy consumed). To deal with this challenge we introduce the term of functional unit (FU), inspired by ISO 14040 series on life cycle assessment. In our case we define as FU "a unit that supports fair comparability and benchmarking between two or more systems". All indicators to be included in the proposed indices need to be transformed first into FUs. This can be done by revising absolute values to be expressed for example per m 2 , per population, per total energy needs, as a % of increase and decrease, etc., depending on the type and special characteristics of every indicator. The specific process could be characterized as first-tier unit normalization, since it enables comparisons between different years and buildings, positive energy blocks (PEB), positive energy districts (PED), and cities of varying sizes (a more detailed definition of PEB and PED evaluation scales is provided in Section 4.2). Even then, SCC projects include several indicators that are expressed in different FUs thus a more universal normalization procedure is needed. Value Normalization to a Uniform Scale A critical question to be answered when developing composite indices is how a uniform evaluation scale can be developed, since most of the adopted indicators are expressed in different units, thus disabling data aggregation. Several normalization methods can resolve this problem and are available in literature such as min-max, z-score, percentage of annual variations over consecutive years, distance to a reference and categorical scales [87]. Zhou et al. have analyzed commonly applied normalization methods by variance-based sensitivity analysis, arguing that the distance to a reference method seems to be the optimum choice for sustainability performance evaluations [88]. 
Building upon this finding, as well as considering that, in the case of SCC projects, both quantitative and semi-quantitative (e.g., Likert-scale) indicators are applied most of the time, we suggest a hybrid normalization method integrating the distance-to-a-reference and categorical scale methods. A similar approach has been applied to evaluate the sustainability of industrial facilities [89,90]. By utilizing the distance to a reference, we can compare the value of a given indicator to one or more reference points, while the categorical scale assigns a score to every indicator using a numerical or qualitative scale. In this case, we adopt a 5-point (ranging from 1 to 5) semi-qualitative evaluation scale; among the conventions and margins adopted, a performance of "Achieved" corresponds to values ≥X23 and ≤X34 (3 points are assigned to the examined indicator) and a performance of "Excelled" corresponds to values ≥X45 (5 points are assigned to the examined indicator), where X_{N(N+1)}, N = 1, ..., 4, is the boundary value for each of the 4 margins embedding neighboring scale points. Figure 2 depicts these points on the uniform scale. The adoption of such a scale provides flexibility and adaptability of the evaluation procedure for each indicator and, consequently, index. The scaling is performed based on one or more reference points that can serve as the boundary values. The reference point can be a baseline value (i.e., the energy consumption of a building before the foreseen interventions) or a threshold value (i.e., something causing irreversibility of the system) [91]. Additionally, reference points can be extracted from best available techniques (BAT), national regulations, commonly accepted standards, goals/success target values, and expert judgements. The selection of a reference point depends on the attributes and aim of the KPI. The reference point can indicate either a positive (>X23, X34, X45) or a negative (≤X12, <X23) "performance". For instance, if the examined indicator is "energy savings", a reference point for energy savings on a building level of over 60% (the target for a renovated building to be considered as nZEB [92,93]) could be assigned to X45. On the other hand, a reference point of 0% or below (i.e., an increase) of energy savings could be assigned to X12. In many cases, the reference point can also be applied as the starting point for assigning the rest of the boundary values. This is largely applicable to the case where a target value is known (or set), such as when assessing the PPI through project-specified PSIs (it is evident that these target values are then project-specific). In this case, the target value could be placed at the midpoint between the boundary reference points X23 and X34, and an index scoring of 3 denotes achieved performance. Combined reference points could additionally be applied if necessary (one indicating failed and the other excelled performance). In this way, the evaluation scale is built upon a distance to commonly accepted boundaries, thus increasing the objectivity of the results. It should be clarified that the distances between the boundary values do not necessarily need to be equal. Several examples are provided in Section 6 and in Appendices A and B. Significant advantages derive from the adoption of the proposed normalization and evaluation procedure [89]. In many cases, it may be necessary for a city to include qualitative indicators in the analysis (especially in the case of, e.g., social indicators).
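As an illustration of the scale just described, the following minimal Python sketch (our own illustration; the handling of values exactly on a boundary and the energy-savings boundary values are assumptions chosen for the example) maps an indicator value onto the 1-5 score:

```python
# Minimal sketch: assign a 1-5 score from the four boundary values
# X12 < X23 < X34 < X45, following the margins described above for a
# "higher is better" indicator (boundary handling is a simplifying assumption).

def score_indicator(value, x12, x23, x34, x45):
    if value <= x12:
        return 1           # failed
    if value < x23:
        return 2
    if value <= x34:
        return 3           # achieved
    if value < x45:
        return 4
    return 5               # excelled

# Assumed boundaries for an "energy savings" KPI (%): 0% or less fails (X12),
# >=60% (nZEB-level renovation) excels (X45); intermediate values are illustrative.
for savings in (-5, 15, 30, 50, 65):
    print(savings, "->", score_indicator(savings, 0, 20, 40, 60))
# -5 -> 1, 15 -> 2, 30 -> 3, 50 -> 4, 65 -> 5
```

For indicators where lower values are better (e.g., energy consumption), the ordering of the margins would simply be reversed.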
Common normalization methods, such as z-score and min-max, require an adequate set of data to be applied efficiently. This may serve as a deterrent to application for newly operating city information platforms (CIP) that did not have an organized indicator tracking system until recently. Most methods would give the "best in class" building/PEB/PED/city the highest score, which seems fair at first glance; however, it does not ensure that the specific building/PEB/PED/city is developed in a sustainable way, but rather that it exhibits better performance compared to other relevant initiatives (or the baseline scenarios). In our case, the "best in class" building/PEB/PED/city will still receive a better score in comparison with the benchmarked system; however, it will have to do more to reach the highest score if the pre-determined performance thresholds are not met. This is more in accordance with the notion of sustainable development, according to which fundamental changes may be needed at various levels (institutional, legal, administrative, etc.). We further discuss the choice of normalization procedure in Section 7.
Weighting
In studies examining sustainability indices, the validity of the methods used to assign scores to indicators depends on the weighting methods used [27]. Although equal weighting offers a simple and replicable method, it has been questioned by scholars in terms of the validity and transparency of index results [94,95]. Equal weighting implies an implicit judgement that the weights are equal, without taking into account knowledge of causal relationships within a subset of indicators related to a dimension [96] or the relative importance of each indicator to a specific index or sub-index categorization. Other statistical methods used in sustainability indices are principal component analysis (PCA) and factor analysis (FA); however, the original scope of those methods is to examine relationships and not to weight variables. Therefore, weights determined with these methods may result in important variables being assigned a lower weight due to statistically low correlations with other dimensions, instead of real correlations among the assessed indicators [97]. Regression analysis assumes that there is no multi-collinearity (e.g., investment is often positively associated with energy efficiency and CO2 reductions, but all three are independently relevant for measuring sustainability). Unobserved component models [98] have been used in the literature for constructing aggregate governance indicators [99]. This approach facilitates weighting, aggregation, and index construction. However, it assumes that enough data are collected and that indicators are not highly correlated, while it is quite sensitive to outliers of an indicator, leading to a low weighting of this indicator [96]. The analytic hierarchy process (AHP), although used for multiple-criteria decision-making and for weighting indicators, presents two main disadvantages: the high number of pairwise comparisons required and the relatively small number of indicators allowed in each dimension [96]. The budget allocation method (BAL) applies weighting to indicators based on expert opinion by distributing "n" points over a number of indicators [87,96]. This method's main disadvantages are that weights may be based upon current needs at the policy level in a specific region, while it is also questioned whether the weightings are transferable to other regions with different context conditions [96].
Public opinion polling is a method whereby stakeholders express their "concern" towards a public agenda, and weighting is based mainly on the respondents' concern rather than importance, also raising questions on transferability to different local conditions. Finally, conjoint analysis (CA) assigns weights to indicators based on individual preferences, ranking a set of alternative scenarios. This method focuses on the preferences of respondents and requires a large sample and a large preference data set [100]. In the proposed framework, the participatory method of budget allocation (BAL) is selected for determining the indicators' weights due to its transparency, explicitness, and short execution time. In order to establish a transparent weighting system, the expert pool includes experts from multiple disciplines with a wide spectrum of knowledge, experience, and concerns, e.g., experts in energy efficiency, climate change, mobility, ICT, technology providers, and financiers. Experts should also cover a wider geographical area to ensure that policy initiatives in a specific region do not determine the weights on indicators, rendering those weights transferable to other regions. Experts are introduced to the BAL method and appointed "n" points, which they can then distribute over a set of indicators in the different dimensions and, if applicable, at different spatial scales (e.g., building, city scale). Experts are advised to distribute more points to those indicators whose importance should be stressed [101]. In the case of different spatial scales, the BAL method is advised to be implemented for each spatial scale separately, since indicators might have different importance in the different units of analysis, i.e., (a) building level, (b) positive energy block (PEB) level (a PEB is defined as a collection of at least three buildings of different uses, i.e., residential, tertiary, in close proximity, having an average yearly positive energy balance [102]), (c) positive energy district (PED) level [79], and (d) city level. Figure 3 illustrates the BAL method as applied to the SPI for the current framework. The reader is referred to Section 5.3 for more details.
Aggregation
Aggregation methods are used for summing up the normalized values of sub-indicators to form sustainability indices, with the weighted arithmetic mean being the most commonly used method [27]. Additive aggregation assumes that there is no synergy or conflict between indicators, and hence the contributions of all indicators can be added together to provide an overall value [96]. Weights used in additive aggregation methods mainly imply substitution rates in a compensatory logic; therefore, no synergy between sub-indicators should apply when using this method [103]. Geometric aggregation methods are also used for sustainability indices, but less extensively. They use multiplicative instead of additive functions, with the weighted geometric mean being the most popular method used [27]. Geometric mean methods allow for limited compensability; similarly to additive aggregation methods, geometric aggregation methods are considered preferentially dependent [87]. However, unlike with additive aggregation methods, sensitivity analysis and uncertainty quantification cannot be performed using the measurement errors of indicators [104]. When compensation between sub-indicators for the construction of sustainability indices is not permitted, i.e., in strong sustainability indices, non-compensatory methods, e.g., conjunctive and disjunctive functions, are used [105].
However, these methods are of limited use for decision makers, since when the values of sub-indicators are not extreme, their information is undermined [96]. Multi-criteria decision-making methods (MCDM) are also used as non-compensatory methods, adopting a decision-maker preference approach [106]; however, they have computational limitations when the number of indicators increases [107]. In the proposed framework, the weighted arithmetic mean aggregation method is selected for constructing the sustainability impact index (SII) and the sustainability performance index (SPI), while the project performance index (PPI) relies on a non-weighted harmonic mean, as detailed below. The selection of the weighted arithmetic mean is based on the fact that this aggregation method allows for a compensatory logic when indicators' scores are low, as well as for sensitivity analysis, since the bound for the sustainability index can be precisely defined if the relative measurement error of a set of indicators is already known. More details per axis index are given in Section 5.
Evaluating SCC Projects' Performance-The Project Performance Index (PPI)
The PPI leverages the project success indicators in order to provide a direct assessment of the project's success against its pre-defined goals. Figure 4 presents the workflow diagram for the calculation of the PPI. We divide the whole procedure into two main categories. The first one is the pre-evaluation (P) process, which includes all the necessary steps (P1-P2) that a project needs to take prior to the evaluation. These include: (a) Step P1: The definition of the PSIs along with their target values (which serve as the midpoint between the boundary reference points X23 and X34 on the evaluation scale). This step is typically set during the project's design phase; (b) Step P2: The calculation of the PSI values based on the actual project results. This step is set during the monitoring phase of a project. The second category is the evaluation (E) process, which includes all the necessary steps for the actual evaluation of the project through the PPI (steps E1-E3). These include: (a) Step E1: The construction of appropriate margins, based on reference points, for the uniform evaluation scale per PSI. See Section 4. To calculate the project performance index (PPI), a non-weighted harmonic mean is used as an averaging measure. The reason the harmonic mean is selected is that the PPI is ultimately a ratio of the actual performance achieved by a project to the targeted performance originally set. The harmonic mean is a measure that is dominated by the minimum of its arguments, offering a correct interpretation of the PPI. In the case, for example, where there is a large discrepancy in the scoring of the PSIs, PSIs that are marked as "failed" drag the PPI, with a larger weight, towards the left side of the uniform scale, serving the ultimate purpose of a project, which is to achieve all of its PSIs. The harmonic mean, H, is given by
H = \frac{n}{\sum_{i=1}^{n} \frac{1}{x_i}}
where x_1, x_2, ..., x_n are the PSI values and n is the number of PSIs.
Evaluating SCC Projects' Sustainability Impact-The Sustainability Impact Index (SII)
The SII leverages the key performance indicators as defined by each SCC project. The defined KPIs, clustered under KPI dimensions, hold different weighting factors to showcase their relative contribution to each KPI dimension. The final evaluation metric (the SII) is derived by aggregating the KPI dimensions' scores. Figure 5 presents the workflow diagram for the calculation of the SII.
We divide again the whole calculation and evaluation procedure into two main categories: the pre-evaluation (P) process, which includes all the necessary steps (P1-P4) that a project needs to take prior to the evaluation, and the evaluation (E) process. To calculate the sustainability impact index (SII), a weighted arithmetic mean (using the BAL method described in Section 4.3 for calculating the weights) is used as an averaging measure. The arithmetic mean in each dimension is calculated using the weighted functional values of the KPIs within the dimension, and the total SII is calculated as the arithmetic mean of all dimensions' values. The SII can be calculated for different spatial scales, i.e., a building, a PEB, a PED, or a city. Additive aggregation functions are used to determine the weighted arithmetic mean at the spatial scale selected. Assuming a multiset of KPI functional values x_1, x_2, ..., x_n, with corresponding non-negative weights ω_1, ω_2, ..., ω_n in a SII dimension, we calculate the weighted arithmetic mean, A, as
A = \sum_{i=1}^{n} \overline{\omega}_i x_i
where \overline{\omega}_i is the normalized weight of the ith KPI, obeying
\sum_{i=1}^{n} \overline{\omega}_i = 1
and given by
\overline{\omega}_i = \frac{\omega_i}{\sum_{j=1}^{n} \omega_j}.
Evaluating SCCs' All-Inclusive Sustainability Progress-The Sustainability Performance Index (SPI)
As described in Section 3, we had to make a clear distinction between two use-cases for the SCC project evaluation under the SPI. Nevertheless, the first essential step for this evaluation is to define the all-inclusive sectors under which KPIs should be clustered-a step pertaining to both use-cases. In order to define these sectors, we first performed an extensive literature review on smart and sustainable cities, as well as on other urban sustainability evaluation frameworks, such as the urban sustainability framework (USF) developed by the global platform for sustainable cities [108], which builds upon an integrated city evaluation approach in order to deliver urban sustainability outcomes. The latter includes 4 sectors that rely on the outcomes that cities can achieve by addressing urban sustainability in line with the SDGs, namely: urban economies, natural environment and resources, climate action and resilience, and inclusivity and quality of life. In addition, the framework proposed by the Belt and Road Initiative-Developing Green Economies for Cities (BRIDGE) entails a similarly oriented approach of sustainable city indexing based on four key principles promoting inclusive and sustainable urban-industrial development, namely the urbanization-industrialization nexus; sustainable economy and social growth; shared prosperity; and resource efficiency and environmental sustainability; three of which coincide with ours, while all of them are linked to SDGs [109]. Second, we based our definition on the nexus between SDG 11 "Sustainable Cities and Communities" and other SDGs, in parallel to global policy processes, commitments, roadmaps, and best practices related to smart city, urban resilience, sustainability, and climate neutrality, such as those reflected by the New Urban Agenda [110], the Coalition for Urban Transitions [111], and the European Green Deal [112]. The SDGs comprise a key instrument for cities to become smarter and more sustainable, while also raising the need to develop robust assessment frameworks entailing all-inclusive sustainability areas that provide a shared vision of the way cities are evaluated. In particular, regarding EU strategy, the SDGs represent a priority towards smartness and innovation, a low-carbon future, climate resilience, job creation, and poverty mitigation.
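Returning briefly to the index calculations defined above, the PPI and SII aggregations can be illustrated with a minimal Python sketch (our own illustration; all PSI scores, KPI scores, BAL points, and KPI names are hypothetical values already mapped onto the uniform 1-5 scale of Section 4):

```python
# Minimal sketch of the PPI (non-weighted harmonic mean of PSI scores) and the
# SII (average over dimensions of BAL-weighted arithmetic means of KPI scores).

def ppi(psi_scores):
    """PPI: harmonic mean of the PSI scores; dominated by "failed" (low) scores."""
    return len(psi_scores) / sum(1.0 / s for s in psi_scores)

def weighted_mean(scores, bal_points):
    """Weighted arithmetic mean of KPI scores, with BAL points normalized to weights."""
    total = sum(bal_points.values())
    return sum((bal_points[k] / total) * scores[k] for k in scores)

def sii(dimensions):
    """SII: plain average of the per-dimension weighted means."""
    per_dim = [weighted_mean(s, p) for s, p in dimensions.values()]
    return sum(per_dim) / len(per_dim)

print(round(ppi([3, 4, 5, 1]), 2))   # ~2.24: one "failed" PSI drags the PPI down

dims = {
    "energy":   ({"RES share": 4, "Energy savings": 3}, {"RES share": 60, "Energy savings": 40}),
    "mobility": ({"EV chargers": 2, "Modal shift": 5},  {"EV chargers": 30, "Modal shift": 70}),
}
print(round(sii(dims), 2))           # 3.85 at the chosen spatial scale (e.g., city level)
```

The same weighted-mean pattern, with different weights and a sector-based rather than dimension-based clustering of KPIs, underlies the SPI described in the remainder of this section.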
Capitalizing on the objectives and outcomes of sustainability policies and frameworks worldwide, it is clearly reflected that a city cannot be smart and sustainable without efficient use of resources, progress in technology, climate response, as well as social engagement and the promotion of a better quality of life for people. It was also observed that most of the assessment frameworks and indices presented in Section 2 include measures and cover impacts related to those areas. As a result, a first attempt was made to define and use all-inclusive sectors linked with the SDGs relevant to smart cities, which we believe offer a holistic, thematic, and comparable evaluation approach for the smart and sustainable performance of projects. Figure 6 depicts the four sectors along with the related SDGs linked to each sector. The sectors adopted are the following:
• Sector 1: Resource Efficiency. Given the Earth's continuous population growth, an increasing global demand for resources is recorded, which is also expected to continue in the following decades. Resource efficiency is of utmost importance for smart cities striving to identify material, energy, and human resources and link them properly in order to reduce environmental, economic, and social risks and impacts and to provide increased opportunities for sustainable living with greater productivity, lower costs, macroeconomic stability, and feasible consumer choices. The sector of "Resource efficiency" is highly relevant to aspects of natural and energy resources pertaining to the smart city ecosystem and the built environment, including, but not limited to, RES penetration. In this context, this particular sector attempts to evaluate smart city projects in terms of public health, well-being, sustainable lifestyle, as well as economic development, i.e., whether the solutions and actions implemented can provide benefits, opportunities, and profits to the citizens. The most indicative aspects that should be covered by this sector are: air quality, reduced waste, water quality, reduced energy poverty, active transport and clean mobility, reduced noise, job creation and business opportunities, innovation uptake and in-city propagation, governance, citizen engagement, health and safety, and education. As a consequence, this sector has strong links with SDGs #1 No poverty, #3 Good health and well-being, #4 Quality education, #5 Gender equality, #8 Decent work and economic growth, #11 Sustainable cities and communities, and #16 Peace, justice and strong institutions.
• Sector 4: Climate Change Adaptation and Mitigation. The fact that the future transformation of cities into liveable ones relies on an effective decarbonisation strategy at the global and regional level sets the area of climate change adaptation and mitigation as a key pillar for sustainable results. This specific category is highly relevant to the successful performance of smart city projects regarding responses to climate targets pertaining to reduced GHG and pollutant emissions in compliance with common standards and strategies, as well as appropriate adaptation measures preventing climate risks, such as floods, etc., and considering local particularities and vulnerabilities. The sector is linked with SDG #13 Climate action, as well as SDGs #3 Good health and well-being, #7 Affordable and clean energy, #11 Sustainable cities and communities, and #12 Responsible consumption and production.
The sector may include indicative aspects that focus on air quality, RES penetration, waste management, e-mobility, water and wastewater treatment, circularity and recycling, reduced pollution, land use and urban space, and climate resilience (also including nature-based resilience, green areas, trees, etc.). The aspects addressed by each sector as mentioned above, were identified with a view to demonstrate and cover most of the key variables and measurable fields that describe and enable the assessment of technological, social, and environmental systems and their interrelationships in the urban context. These aspects are based on focused topics within smart cities and relevant SCC projects, as of energy efficiency in the built environment, green mobility, low-carbon energy, digital innovation and ICT, energy and transport networks, water and waste, citizen engagement, etc. Most of them are being addressed by a multitude of assessment frameworks that evaluate performance of smart and sustainable cities or SCC projects, e.g., SCIS [8], Citykeys [113], or may have been used for measurement purposes by composite city sustainability indices, e.g., the domains defined in the indexing of the European Green Capital Award [60]. In addition, a vast majority of those smart city aspects can be found as key focus areas or targets in global policies set by multi-governmental organizations like the United Nations and the European Commission or in the principal guides of relevant city initiatives, such as C40 Cities, ICLEI, etc., e.g., the guidance of reinventing cities to design a low-carbon, sustainable, and resilient project [114]. It is also clarified that the aspects can belong to multiple sectors emphasizing the cross-SDG (and cross-dimensional) nature of the SPI. Having set the four overarching SCC sectors, we can now proceed in describing the methodology for evaluating the SPI under both use-cases: UC1: For on-going projects, the project's KPIs along with their clustering into KPI dimensions have already been defined. To this respect, each SCC sector is assessed by leveraging relevant KPIs from multiple KPI dimensions. Assuming a project with KPIs clustered under energy, ICT, economic, mobility, and social dimensions, the resource efficiency sector should include KPIs from the energy dimension (e.g., relevant to RES penetration), as well as KPIs from the ICT dimension (e.g., relevant to ICT measures in PEDs). KPIs belonging to a sector have different weights, emphasizing their relevant importance towards the sector's objectives. Moreover, numerous aspects and, consequently, various relevant indicators should be affecting more than one sector, thus those indicators could be assigned and included in all of these sectors. In this case, their weighting contribution is also different. A good illustration of this, is the RES penetration aspect which is covered by "Resource efficiency", as well as by "Climate change adaptation and mitigation" sectors. As a result, a KPI that covers the aspect of RES penetration and contributes to both sectors, such as the "degree of energetic self-supply by RES" will not have same weights within each sector. The weights characterize the contributions to the SPI and are defined according to the budget allocation method (BAL-see Section 4.3). 
We note here that, due to the restrictions of the BAL method that imposes a maximum number of 10-12 indicators per sector to reduce cognitive stress on the experts [87], only the most important KPIs from all available dimensions should be selected and assigned into the 4 SCC sectors (a fact that might be beneficial in a future attempt to pre-define common-and limited-KPIs per sector as required in use-case 2). The sectors are assessed by each sub-set of SPI-weighted KPIs and they can finally be condensed in a single SPI metric via equally aggregating, i.e., averaging their individual scores. UC2: For future projects, the main objective of the SPI is to illustrate the value and contribution of a project towards the overall sustainability performance status of a city via a metric which evaluates the project against broad and cross-dimensional aspects pertaining to the smart and sustainable urban concept. This type of evaluation that promotes crosssectional integration of smart city focus areas and attributes into all-inclusive SCC sectors is able to analyze the sustainability progress of SCC projects beyond the KPI dimensions, thus assessing aspects affected by more than one dimension, while also laying the foundation for a fair, reliable, and comparative assessment between projects, unlocking the potential for assessing the sustainability performance of projects with different focus when using a pre-defined common KPI repository per sector. To this respect, each SCC sector should be assessed by leveraging KPIs attributed to each sector from a common-to-all-projects repository. This repository should include the most essential KPIs that cover all SCC aspects per sector. The definition of the common-KPIs along with their functional units, target values, evaluation margins and weights per sector is a prerequisite for the SPI evaluation under a use-case 2-an essential process which is outside the scope of this work, reducing the methodological steps for evaluating the SPI per SCC project. We note here again that due to the restrictions of the BAL weighting method, the common KPI repository per sector should include up to 10-12 indicators (i.e., up to a maximum of 40 KPIs should be included in the global repository for all 4 SCC sectors). Each project should choose all relevant to the project KPIs from this repository pertaining to each sector. Then, the SCC sectors are assessed by each sub-set of pre-weighted KPIs and they can finally be condensed in a single SPI metric via equally aggregating, i.e., averaging their individual scores. In case a KPI in a particular sector is not relevant to the project's scope, a zero score is assigned to this KPI, lowering its total SPI score, in accordance with the index scope of providing an inclusive evaluation of the project's impact towards the total sustainability of a city. Figure 7, presents the workflow diagram for the calculation of the SPI under both use-cases. The required steps per use-case are also indicated. Once again, we divide the whole calculation and evaluation procedure into two main categories: the pre-evaluation (P) process which includes all the necessary steps (P1-P3) that a project needs to take prior to the evaluation. Note that P1-P3 have already been performed under the SII pre-evaluation procedure and they are thus redundant in the case of a complete evaluation under all-three axis. (a) Step P1: The definition of the KPIs. 
This step is typically set during the project's pre-monitoring phase; (b) Step P2: The definition of baseline and target values per KPI. This step is typically set during the baseline monitoring phase of a project. Note that under UC2, target-value definition should be pre-set for all KPIs in the common-KPI repository and is thus not required of each SCC project; (c) Step P3: The calculation of KPIs at each aggregation level of interest (e.g., building, PEB, PED, city level). The second category is the evaluation (E) process, which includes all the necessary steps for the actual evaluation of the project through the SII (steps E1-E7). Note that E2, E3, and E4 have already been performed under the SII evaluation (for all KPIs) procedure and are thus redundant in the case of a complete evaluation under all three axes. Nevertheless, we present them here again for completeness. Additionally, note that under UC2, the assignment of the KPIs to the 4 SCC sectors is automatic, as they should have been extracted from the common-KPI repository. Moreover, functional units, margins, and weights are also pre-set, and thus only steps E4, E6, and E7 are relevant, as noted in Figure 6. The evaluation steps for both use-cases of the SPI are as follows: (a) Step E1: The assignment of KPIs to the 4 SCC sectors. We note here again that up to 10-12 KPIs need to be assigned per sector as a prerequisite for the BAL weighting method. See Section 4. To calculate the sustainability performance index (SPI), the weighted functional values of the KPIs defined in each sector are used to calculate the weighted arithmetic mean of that sector. The total SPI index is then calculated as the arithmetic mean of all sector values. The weights in the SPI need to be calculated with the BAL method, as explained in Section 4.3, and are not the same as those used for calculating the SII, since each SPI sector is a different construct of indicators from the SII dimension constructs.
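The calculation just described (a weighted arithmetic mean of the normalized KPI values within each SCC sector, followed by a plain arithmetic mean over the sector scores) can be sketched as follows. The sector contents, weights, and KPI scores are hypothetical, and the sketch assumes every KPI has already been normalized to the 1-5 evaluation scale.

```python
def sector_score(kpi_scores: dict, kpi_weights: dict) -> float:
    """Weighted arithmetic mean of the normalized (1-5) KPI scores of one sector."""
    total_w = sum(kpi_weights[k] for k in kpi_scores)
    return sum(kpi_scores[k] * kpi_weights[k] for k in kpi_scores) / total_w

def spi(sectors: dict) -> float:
    """SPI = plain arithmetic mean of the per-sector weighted means."""
    scores = [sector_score(s, w) for s, w in sectors.values()]
    return sum(scores) / len(scores)

# Hypothetical example with only two of the four SCC sectors, for brevity.
sectors = {
    "Resource efficiency": (
        {"energy savings": 2, "degree of energetic self-supply by RES": 5},
        {"energy savings": 0.6, "degree of energetic self-supply by RES": 0.4},
    ),
    "Quality of life and prosperity": (
        {"degree of satisfaction": 5, "energy savings": 2},
        {"degree of satisfaction": 0.8, "energy savings": 0.2},
    ),
}
print(round(spi(sectors), 2))  # (3.2 + 4.4) / 2 = 3.8
```

Under use-case 2, a KPI drawn from the common repository that is not relevant to the project would simply enter the corresponding sector with a score of zero, as described above.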
Illustrative Examples and Discussion on Index Scoring
With a view to acquiring initial insight and to testing and pre-validating the applicability of the proposed evaluation framework, a provisional use-case has been defined building upon the preliminary outcomes of a positive energy city transformation framework (POCITYF), a European Horizon 2020 smart city project approved for funding in 2019. POCITYF identified and will orchestrate the demonstration of several solutions towards energy transition at two lighthouse cities (LH): Evora in Portugal and Alkmaar in the Netherlands; as well as the replication of solutions in six fellow cities (FC): Bari in Italy, Celje in Slovenia, Granada in Spain, Hvidovre in Denmark, Ioannina in Greece, and Ujpest in Hungary. A key characteristic of this project is its special focus on historical cities and buildings, attempting to demonstrate energy-oriented upgrades that are highly compatible with the respective challenges. During the first year of the project's implementation, the definition of the POCITYF KPIs was realized through a detailed methodological process that strongly relates to the needs of the LH cities and their citizens towards their energy transition [17]. These needs include concerns across the (1) energy, (2) environmental, (3) social, (4) ICT, (5) mobility, (6) economic, (7) governance, and (8) diffusion and propagation dimensions, which also relate to the various stakeholders participating or interested in POCITYF's interventions. A list of 63 KPIs is included in the final KPI repository of POCITYF, categorized into the eight dimensions and offering a holistic framework to assess four different aggregation levels: (a) building, (b) block, (c) district, and (d) city level. From this list, 37 KPIs (characterized as core KPIs by POCITYF) were utilized to assess the SII and SPI indices. Lastly, a series of PSIs (32 in total) have been identified, which provide a global view of the project's success and its impact towards green, smart, resilient, and autonomous cities. All 32 PSIs were utilized to assess the PPI index.
Key Assumptions
POCITYF is currently going through its second year of implementation, and as a result the application of the proposed framework can only be fully performed in the near future, when the monitoring phase is initiated. Nevertheless, POCITYF offers ready-to-apply lists of KPIs, PSIs, dimensions, and aggregation levels that can serve as an excellent test-bed for extracting some preliminary results. Below, a number of key assumptions applied to test the evaluation framework are summarized. • SII and SPI indices were extracted on a city level only. In total, 31 out of 37 KPIs were applied to assess this level (the remaining KPIs focus on the building, block, or district level only). Respective results can also be extracted for these aggregation levels, but for the sake of simplicity and space restrictions we opted for a city-level analysis only in this paper. • [...] (3 points), whereas the rest of the evaluation scale was bounded based on the worst (<2) and best (>8) in-class performing countries on this issue, assigning scores of 1 and 5 points, respectively. Setting solid and well-justified evaluation scales was found to be a very challenging but highly valuable process. Developments on this subject are still on-going, and a more detailed presentation of this issue is foreseen in the future. • Target values applied in the PPI were defined by POCITYF during proposal submission and reflect the project's own ambitions and expected impact. Minor modifications may apply in the future. • The BAL method was deployed for determining weights and evaluating the impact towards sustainability goals, i.e., for the SII. The method was also applied for determining weights and evaluating the inclusive sectors of SCC projects, i.e., for the SPI. For the latter (SPI), an initial process of choosing the most important cross-dimensional KPIs per sector was performed based on a BAL-like method: KPIs with zero points assigned by all experts were excluded, while the maximum number of KPIs per sector was limited to 12 in order to comply with the BAL method requirements, reducing cognitive stress on the experts. In all of the above, the evaluation was performed by the authors of the paper, serving as a preliminary group of experts. A wider pool of experts from different fields and countries will be developed and engaged during the final implementation of the methodology. • The KPI values (functional-unit values input by the user) were assigned indicatively by the authors, considering a hypothetical performance of the POCITYF project by its end for one of the two LH cities. Consequently, the results presented in the following sub-sections serve mostly as an illustrative example of the potential applicability of the proposed framework rather than an actual evaluation of the POCITYF project. In that respect, deep analysis and interpretation of the results falls outside the scope of this paper.
Note that all data used to create the analytics and graphs in the following sub-sections are provided in the Annexes (Tables A1-A4). No elaborate scripts have been used for the calculations needed at this stage (calculations were performed in MS Excel). The authors plan to fully implement the proposed framework, not only for the case of POCITYF but also for other SCC and sustainability-oriented projects, with a view to validating and re-adapting the proposed methodology if needed. To do so, a relevant software tool will be developed that will facilitate estimations and simplify procedures, thus increasing the potential applicability in several use-cases.
PPI Results
PPI results (project level) for the case of POCITYF are summarized in Appendix A, Table A1. The final PPI score was 2.6 (based on the harmonic mean of all PSI scores). This means that, overall, the project can be considered close to successful against the call's expected impacts, having achieved a satisfactory performance (score ≥ 3 points) in most of the defined PSIs. More specifically, 23 out of 32 PSIs met or even surpassed the target value; three of these (V2G storage within PEBs; total carbon dioxide emission reduction; number of peer-reviewed publications due to POCITYF activities) even gained the maximum score (5 points), exhibiting an excellent performance well above the expected one. On the other hand, two PSIs (battery storage within PEBs; number of new and feasible product ideas generated within the project duration) failed (1 point) to reach the target values. This can be attributed both to technical reasons and to unrealistic targets set during the design phase of the project. For instance, the expected number of new product ideas within the project duration was found to be overambitious, since such ideas are actually generated only after the wider-scale exploitation and market penetration of the solutions. Estimating the PPI at a regular interval (e.g., every 6 months or annually) can help identify problematic areas affecting the project performance and proceed to mitigation actions on time.
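Because the final PPI is reported as the harmonic mean of the individual PSI scores, a minimal sketch of that aggregation is given below; the PSI scores listed are invented for illustration and are not the values in Table A1.

```python
from statistics import harmonic_mean

# Hypothetical PSI scores on the 1-5 evaluation scale (the real repository has 32 PSIs).
psi_scores = [3, 4, 5, 2, 3, 1, 4, 3]

# The harmonic mean penalizes low-scoring PSIs more strongly than an arithmetic
# mean would, so a single failing indicator pulls the overall PPI down noticeably.
ppi = harmonic_mean(psi_scores)
print(round(ppi, 1))  # -> 2.5, whereas the arithmetic mean of the same scores is ~3.1
```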
SII Results
SII results on a city level for the case of POCITYF are summarized in Appendix B, Table A2. The final SII score was 3.6. This means that, overall, the project can be considered successful, achieving an above-average performance towards meeting the city's sustainability goals in most of the dimensions examined (Figure 8). The project achieves a close to excellent performance in the social dimension (4.2 points), exhibiting the maximum performance (5 points) in the KPI "degree of satisfaction" with the implemented solutions. Most smart-city projects, POCITYF included, adopt a citizen-oriented approach, trying to involve and actively engage citizens in the city transformation process. Coupled with extensive dissemination activities, these projects are able to reach a very wide audience, enhancing social performance. The next best performing dimension was ICT (4.1 points), which is to be expected considering that ICT and the respective measures are vital towards the "smartification" of cities. On the other hand, the project scored lower (2.8 points) in the economic dimension. This result highlights a key challenge that most city projects need to overcome, namely how to achieve cost-efficiency when applying innovative technologies, which are usually characterized by increased costs in comparison with conventional ones. For this reason, SCC projects usually also provide business models to support future exploitation, but this can only be reflected in the SII score in a future implementation (some years after the project's end). A significant margin for improvement is available if we consider that the project was not able to achieve a high performance in some KPIs characterized by increased importance (weight). An indicative example (Figure 9) is the KPI "energy savings", to which a high weight has been assigned. Energy savings of 19% were achieved, which leads to a score of 2 points (the threshold value was 32.5%). This type of analysis, considering all KPIs, can help projects and decision makers with limited budgets to focus on the issues that will have the highest impact on the city's sustainability score.
SPI Results
SPI results on a city level for the case of POCITYF are summarized in Appendix C, Tables A3 and A4. The examined city exhibited a balanced performance in all four sectors: resource efficiency, 3.7 points; smart and reliable infrastructure, 3.7 points; quality of life and prosperity, 3.4 points; climate change adaptation and mitigation, 3.4 points. The city exhibits a very high performance regarding the utilization of local RES and relevant aspects but puts much less emphasis on increasing the energy efficiency of its building stock, which has been indicated as a key aspect for further improvement. The wide-scale roll-out of EVs is another aspect whose improvement would increase the score in several sectors. The total SPI score was 3.6 points, equal to the SII score. This is an indication that the KPIs selected by the POCITYF project are well defined and adequately cover key critical aspects affecting the overall sustainability on a city level. As expected, several KPIs were included in more than one sector, e.g., energy savings, increased system flexibility, degree of energetic self-supply by RES, carbon dioxide emission reductions, etc., but with a different weight. For example, the KPI "energy savings" was found to be the most significant one in the resource efficiency sector (weight 0.209) but also contributes to quality of life with less significance (weight 0.064), since energy savings can lead, among other benefits, to reduced energy costs and make it easier to ensure a comfortable indoor environment. This is in accordance with the SPI goals and objectives, according to which different KPIs belonging to different dimensions may affect more than one of the broadly defined, all-inclusive sectors.
Discussion
As elaborated throughout this work, the proposed evaluation framework presents strong benefits compared to existing ones. Nevertheless, below we mention and comment on shortcomings and necessary steps to be taken before reaching its full potential: • The implementation of the current framework is foreseen to occur in the coming years for several EU-funded SCC projects. The illustrative example provided in Section 6 should be considered a fictitious case-study due to the randomly assigned KPI values. As such, the authors plan to publish concrete and real data in the future as soon as the latter are available, illustrating the capabilities of the USE framework in assessing the sustainability performance of SCC projects and elaborating on the results with thorough analysis.
• Concerning the SPI, we have already mentioned that under UC2 (future projects), the correct implementation of this evaluation axis requires a common KPI repository per sector, in order to obtain comparable results and to enable consistent benchmarking between cities. The process of populating such repositories is not straightforward. It should involve a variety of stakeholders (city authorities, technology providers, research institutes, policy makers, citizens, etc.) with adequate expertise, as well as diversity in each sector, so that all relevant aspects are covered and each sector truly becomes an overarching group that contributes to the total sustainability of a city. Moreover, we are fully aware that setting the SCC sectors and clearly defining their key aspects is quite a challenging and demanding process on its own. The authors have already started collecting data towards the definition of the common KPIs per sector while working on a more elaborate justification of the SCC sector definitions. This work is outside the scope of the current article; we plan to refine the preliminary sectors defined herein and their aspects in the near future. • The normalization procedure described in Section 4 requires a process of finding well-accepted reference points for every indicator (a minimal, threshold-based scoring sketch is given after this list). This process is time- and effort-intensive, and a level of subjectivity is still involved. Reference points should be based on commonly accepted data and targets to increase objectivity as far as possible. In order to reduce uncertainty, it is proposed that these targets be re-evaluated and modified regularly. Still, we consider this a step forward in comparison with business-as-usual practice, where indicators are mostly assessed based on the increase or decrease in their value. Additionally, by utilizing a 5-point scale a lot of information is lost, which may lead to an accumulation of scores into the same cluster (e.g., many buildings/PEBs/PEDs/cities with the same score). Although for a single KPI it is very likely that scores will coincide, for a higher number of KPIs (as usually applied by SCC projects) this is unlikely to happen. It is further proposed that the city should still perform the traditional indicator analysis (e.g., examine trends of absolute values over consecutive years) in order to identify more specific internal problems or opportunities for improvement. • The evaluation on different spatial scales of a city (e.g., building level, PEB level, PED level, etc.) is inherent in the USE framework. Adequate aggregation techniques can be used to move from one level to the next (e.g., summing up the KPI contributions of the buildings that constitute a PEB can provide the required PEB value). Such aggregation might seem oversimplified and cannot reliably take into account particular aspects of each lower-level component (e.g., buildings) that might contribute non-uniformly to the upper level of evaluation (e.g., PEB). Moving to even higher spatial aggregation levels, such an approach becomes further complicated since, typically in SCC projects, the union of different PEBs is not equal to a whole PED (and similarly for several PEDs aggregated to the city level). Choosing representative components of each evaluation level is ambiguous, although averaging provides a simple solution. In any case, the clear definition of these spatial scales inside an SCC project is still pending, and thus we plan to redefine, if necessary, the aggregation techniques inside the USE framework to comply with SCC standards.
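As referenced in the normalization bullet above, the following sketch maps a raw KPI value onto the 1-5 evaluation scale using ascending reference points. The cut-off values shown are hypothetical placeholders standing in for the commonly accepted targets the text calls for, and the function assumes a "higher is better" KPI.

```python
def score_kpi(value: float, thresholds: list) -> int:
    """Map a raw KPI value onto the 1-5 scale using four ascending cut-offs.

    Values below the first cut-off score 1; values at or above the last score 5.
    For 'lower is better' KPIs the thresholds (or the value) would need inverting.
    """
    score = 1
    for cut in thresholds:
        if value >= cut:
            score += 1
    return score

# Hypothetical cut-offs for an "energy savings" KPI expressed in percent.
print(score_kpi(19.0, [10.0, 20.0, 32.5, 45.0]))  # -> 2: above 10% but below the 20% cut-off
```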
Conclusions
The outcomes of this study serve as a major first step towards the deployment of an inclusive and uniform evaluation framework that is able to assess in parallel the impact, performance, and sustainability potential of smart city projects. The proposed USE framework can support the needs of various stakeholders involved in the development and implementation of smart city projects and initiatives, such as project managers, technical experts, public authorities, decision makers, and urban planners, who wish to apply an all-inclusive evaluation and monitoring procedure. The utilization of widely accepted reference points, upon which evaluation scales are defined, supports strong sustainability assessments, since the distance to a sustainable target is integrated and reflected in the final evaluation. The paper also summarizes several insights on key characteristics and limitations of currently available urban sustainability and smartness evaluation frameworks and indices, as well as recommendations on normalization, weighting, and aggregation procedures. This information can be valuable for those who wish to develop their own, or revise an existing, index-based evaluation methodology. The preliminary application of the USE framework to an on-going SCC project confirmed its potential applicability for assessing and comparing the success of a project and the extent to which it contributes to the city's sustainability progress against EU goals. The authors plan to implement USE in several case studies in the future to fine-tune the proposed steps and validate its applicability.
2021-07-14T13:23:31.517Z
2021-07-01T00:00:00.000
{ "year": 2021, "sha1": "26e4e24d0bd3347bb43ac94813d415331923e63c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/13/13/7395/pdf?version=1625460072", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "68547ffb48f8dfec6501cb8742eea094444180cb", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
30015255
pes2o/s2orc
v3-fos-license
Demographic features of subjects with congenital glaucoma Context: Congenital glaucoma is a potentially blinding ocular disease of childhood. Identification of the possible associated risk factors may be helpful for prevention or early detection of this public health problem. Aims: To demonstrate the demographic features of congenital glaucoma subjects. Setting and Design: The charts of congenital glaucoma patients referred to Tamcelik Glaucoma Center were retrospectively reviewed through the dates of 2000 and 2013. Materials and Methods: Analyzed data included diagnosis, age at first presentation, symptoms at first presentation, laterality of the disease, sex, presence of consanguinity, family history of congenital glaucoma, maturity of the fetus at delivery, and maternal age at conception. Statistical Analysis Used: Statistical Package for Social Sciences (SPSS) version 19.0 by IBM (SPSS Inc, Chicago, Illinois, USA) was used; the means of continuous variables were compared with Student's t-test and analysis of variance (ANOVA), and the χ² test was used to test differences in proportions of categorical variables. Results: The data of 600 eyes of 311 patients were analyzed. The distribution of primary and secondary congenital glaucoma among the patients was 63.3% (n = 197) and 36.7% (n = 114), respectively. Of the 311 patients, 57.2% (n = 178) were male and 42.8% (n = 133) were female. The overall frequency of bilateral disease was 92.3% (n = 287). The overall rates of consanguinity and positive family history were 45.3% (n = 141) and 21.2% (n = 66), respectively. Conclusions: Bilateral disease in this study was more common than in previously reported studies. Positive family history was more frequent in primary congenital glaucoma, although the difference was not statistically significant. Congenital glaucoma is a potentially blinding ocular disease of childhood which is more often observed in the developing world due to higher frequencies of consanguinity. [1,2] By definition, congenital glaucoma is a developmental disorder of childhood that is associated with elevated intraocular pressure, globe enlargement (buphthalmos), corneal edema, Haab striae, optic nerve cupping, and atrophy. [3] Various classification methods of congenital glaucomas have been proposed with respect to ocular and systemic associations or the primary ocular anatomic site involved in the disease. [1] In the former classification method, congenital glaucoma is classified into primary or secondary congenital glaucoma. Primary congenital glaucoma, also known as isolated trabeculodysgenesis, is typically an isolated idiopathic developmental abnormality of the trabecular meshwork in the absence of other ocular and systemic conditions. In secondary congenital glaucoma, a contributory ocular or systemic pathology is present as a cause. Primary congenital glaucoma has previously been reported as the most common type; [4][5][6] however, studies demonstrating the frequency of specific entities are limited. Congenital glaucoma is also one of the very few conditions that require strict follow-up, with the need for more than one specialist monitoring the patient throughout his/her life. Identification of the possible associated risk factors, thus, may be helpful for prevention or early detection of this public health problem. The purpose of this study is to demonstrate the distribution of primary and secondary congenital glaucoma and to identify possible associated risk factors.
Materials and Methods
The registry of Tamcelik Glaucoma Center was browsed for patients diagnosed with congenital glaucoma through the dates of 2000 and 2013. The charts of the patients were retrospectively reviewed. Demographic and clinical information of the subjects was obtained. Congenital glaucoma was defined as the presence of optic neuropathy caused by elevated intraocular pressure (>20 mmHg) with associated clinical signs of disc cupping (>0.3), disc asymmetry (>0.2), enlarged corneal diameter (>11 mm in the newborn, >12 mm in a child of any age), corneal edema, Haab striae, and progressive myopia. All of the patients in this study underwent examination under general anesthesia in the preoperative evaluation and in the follow-up period after surgical intervention at necessary intervals. The examination under general anesthesia included intraocular pressure measurement, biomicroscopic examination, gonioscopic examination, fundoscopic examination, central corneal thickness measurement, and corneal diameter measurement. Patients were classified as having primary or secondary congenital glaucoma according to their findings in the examination under general anesthesia (in the preoperative evaluation, confirmed later in the follow-up) and the systemic work-up done by the pediatrics consultant. Primary congenital glaucoma was defined as a glaucoma that presents from birth to 3 years of age, caused by maldevelopment of the trabecular meshwork in the absence of any other ocular or systemic pathology. Meanwhile, secondary congenital glaucoma was defined as glaucoma with associated systemic and/or ocular pathologies. Children with acquired glaucomas, including aphakic glaucoma, uveitic glaucoma, and traumatic glaucoma, were excluded. Iridotrabeculodysgenesis was defined as the coexistence of trabecular meshwork and iris abnormalities, which included iris hypoplasia/hyperplasia, anomalous iris vessels, iris atrophy, iris coloboma, and iris holes. Patients with aniridia were incorporated into the iridotrabeculodysgenesis/aniridia group. The phakomatoses group included patients with Sturge-Weber and Klippel-Trenaunay syndrome. Patients with severe and multiple anterior segment abnormalities that are clinically difficult to distinguish from some of the known anterior segment dysgenesis syndromes, such as Peters anomaly, were grouped as "anterior segment dysgenesis". The demographic and clinical data obtained from the records included age at first presentation, symptoms at first presentation, laterality of the disease, sex, presence of consanguinity, family history of congenital glaucoma, maturity of the fetus at delivery, and maternal age at conception. Consanguinity was defined as a history of marriage with a first- or second-degree cousin. Patients were recorded as preterm deliveries if childbirth occurred before 37 completed weeks of pregnancy. Patients with retinopathy of prematurity (ROP) were separately analyzed as a subgroup of secondary congenital glaucoma. All data were recorded in a Microsoft Excel datasheet and transferred to the Statistical Package for Social Sciences (SPSS) version 19.0 by IBM (SPSS Inc, Chicago, Illinois, USA) for statistical analysis. Student's t-test and ANOVA were used to compare the means of continuous variables between the groups. The χ² test was used to test differences in proportions of categorical variables (Fisher's exact test or Pearson chi-square when indicated). P < 0.05 was considered statistically significant. Institutional review board approval was obtained for this study.
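As a worked illustration of the categorical comparisons described above, the sketch below re-runs the consanguinity comparison between primary (105 of 197) and secondary (37 of 114) congenital glaucoma patients, using the counts reported later in the Results, as a chi-square test of proportions. The original analysis was performed in SPSS; this SciPy version is only an assumed equivalent, so the resulting P value is not guaranteed to match the one reported in the paper.

```python
from scipy.stats import chi2_contingency

# 2x2 contingency table: rows = primary / secondary congenital glaucoma,
# columns = consanguinity present / absent (counts taken from the Results section).
table = [
    [105, 197 - 105],  # primary congenital glaucoma
    [37, 114 - 37],    # secondary congenital glaucoma
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, P = {p:.4g}")
```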
Results
The data of 600 eyes of 311 patients were extracted from the registry. The distribution of primary and secondary congenital glaucoma among the patients was 63.3% (n = 197) and 36.7% (n = 114), respectively. Table 1 shows the frequencies of primary congenital glaucoma, secondary congenital glaucoma, and its subgroups. The three most common pathologies in the secondary congenital glaucoma group were iridotrabeculodysgenesis, Axenfeld-Rieger syndrome, and phakomatoses. The phakomatoses group consisted of 10 patients with Sturge-Weber syndrome and two patients with Klippel-Trenaunay syndrome. Table 2 shows the distribution of sex and laterality among the groups. Of the 311 patients, 57.2% (n = 178) were male and 42.8% (n = 133) were female. In primary congenital glaucoma, 58.9% (n = 116) of the patients were male and 41.1% (n = 81) were female. Compared to primary congenital glaucoma, the proportion of female subjects was slightly higher in secondary congenital glaucoma (45.6%, n = 52), but this difference was not statistically significant (P = 0.476). Of the 311 patients, 45.3% (n = 141) had a history of consanguinity. Parents of primary congenital glaucoma patients reported higher rates of consanguinity (53.2%, n = 105) than parents of secondary congenital glaucoma patients (32.4%, n = 37), and this difference was found to be statistically significant (P = 0.005). There was no significant difference between the subgroups of secondary congenital glaucoma in terms of consanguinity (P = 0.321). Table 3 shows the demographic features of patients with primary and secondary congenital glaucoma. Positive family history was noted in 21.2% (n = 66) of all patients. Although the proportion of positive family history in primary congenital glaucoma (23.8%, n = 47) was higher than in secondary congenital glaucoma (16.6%, n = 19), this difference was not statistically significant (P = 0.283). Contrary to these findings, positive family history was significantly higher in patients with iridotrabeculodysgenesis-aniridia (54.4%, n = 6), Axenfeld-Rieger syndrome (47.8%, n = 11), and Walker-Warburg syndrome (100%, n = 2) in the secondary congenital glaucoma group (P = 0.004). Furthermore, family history was negative in all of the patients with phakomatoses. The majority of the patients (94.2%, n = 293) were term at delivery. In primary congenital glaucoma, 95.9% of patients (n = 189) were term at delivery, as opposed to 91.2% (n = 104) in secondary congenital glaucoma, but this difference was not statistically significant. There was no statistically significant difference in the distribution of term/preterm deliveries among the secondary congenital glaucoma subgroups when ROP-associated cases were excluded from the statistical analysis. Table 4 presents the mean age of the patients at first presentation and the mean maternal age at conception. The mean age at first presentation of all patients was 110.22 ± 160.35 days (range: day of delivery to 3 years). Although secondary congenital glaucoma patients (96.92 ± 179.63 days) presented slightly earlier than primary congenital glaucoma patients (119.47 ± 145.55 days), this difference was not statistically significant (P = 0.335). The mean maternal age at conception was 27.13 ± 5.61 years for all patients. The mean maternal ages at conception of primary and secondary congenital glaucoma patients were 27.36 ± 5.76 and 26.81 ± 5.42 years, respectively (P = 0.533).
There was no statistically significant difference in the mean age at first presentation or the mean maternal age at conception between the subgroups of secondary congenital glaucoma; however, statistical power was low due to the small sample size of some subgroups. Table 5 shows the frequency of symptoms at first presentation.
Discussion
Congenital glaucoma is a rare yet preventable cause of blindness which is more frequent in populations where consanguinity is common. Its incidence ranges from one in 1,250 in Slovakian Roms (Gypsies) and one in 2,500 in Saudi Arabia to one in 10,000-12,500 in the western world. [6,7] It is also more common in certain regions of Turkey, but epidemiologic data regarding its incidence are lacking. This study demonstrates the demographic features of congenital glaucoma patients in a referral center in Istanbul, Turkey. The sex distribution of the subjects in this study (57.2% male, 42.8% female) was consistent with the previous literature, which suggests a 3:2 male-to-female ratio. [4,6,7] The only conflicting study regarding sex distribution was from Japan, which reported a predominance of female subjects over male subjects in patients with primary infantile glaucoma. [7] Many classification methods have been employed for congenital glaucoma, some of which are more useful for prognostic implications. In this study, an isolated abnormality of the trabecular meshwork in the absence of any other ocular and/or systemic association was defined as primary congenital glaucoma, and patients with associated conditions were grouped into secondary congenital glaucoma. The reason for using this classification method was to reveal any differences in demographic and clinical features between these two groups of disease, which possibly have different underlying mechanisms and genetic backgrounds. Primary congenital glaucoma accounted for the majority (63.3%) of all congenital glaucoma patients in this study, which is a finding consistent with previous literature. [5,8] Different frequencies of the causes of secondary congenital glaucoma have been reported in several studies. The BIG eye study reported lens-related congenital glaucomas, phakomatoses, uveitic glaucomas, and anterior segment dysgenesis as the more frequent causes of secondary pediatric glaucomas. [4] Another study reported Peters anomaly, anterior segment dysgenesis, and aniridia/Rieger syndrome as the more common causes of secondary congenital glaucoma. [8] An epidemiologic study of 306 patients revealed aphakic glaucoma, Sturge-Weber syndrome, anterior segment dysgenesis, trauma, and aniridia as the major causes of secondary glaucoma. [5] The anterior segment dysgenesis group in the mentioned study also included patients with Axenfeld-Rieger syndrome and Peters anomaly, along with unclassified developmental disorders of the anterior segment. Having excluded acquired glaucomas, iridotrabeculodysgenesis, Axenfeld-Rieger syndrome, and phakomatoses were the more frequent causes of secondary congenital glaucoma according to this study. These results may more accurately resemble the actual distribution in this group of disease, as the total sample size in this study is larger than previously reported. A striking feature of this study was the distribution of laterality. Virtually all previously published studies report bilateral disease in 70-80% of congenital glaucoma patients. [1,4,5,9] Bilateral disease in this study was, however, observed in 92.3% of the patients.
The frequency of unilateral disease in secondary congenital glaucoma was higher than in primary congenital glaucoma, probably owing to the more frequent observation of unilaterality in phakomatoses and Peters anomaly. As aforementioned, Peters anomaly and phakomatoses patients showed higher rates of unilateral disease, and this difference was statistically significant compared to other causes of secondary congenital glaucoma. A recently published review of the literature reported unilateral disease in 36.8% of 58 cases with Peters anomaly. [10] Unilateral disease, however, was observed in 71.4% of the seven patients in this study, but this difference may be attributed to the relatively smaller sample size. Comparably, half of the phakomatoses patients had unilateral disease in this study, and this is consistent with the published literature, which also demonstrates a relatively more even distribution of laterality. [5,11] Strong efforts have been made to identify gene mutations which might be associated with congenital glaucoma. Examples of these gene mutations include the recessively inherited CYP1B1 and LTBP2 mutations and the dominantly inherited heterozygous MYOC mutation. [12][13][14][15] It is suggested that siblings of children with these mutations should be tested and provided with genetic counseling even if they are clinically unaffected by the disease. [16] Most of these genes are inherited in an autosomal recessive pattern with variable penetrance, which explains the familial pattern in 10-40% of the cases. [17] Consanguinity is responsible for the clustering of these certain gene profiles that are found to be linked with primary congenital glaucoma. [18,19] In this study, consanguinity was present in 45.3% of all patients. The rate of consanguinity was significantly lower in the secondary congenital glaucoma group, probably due to the sporadic nature of some of the diseases in this group. In addition, positive family history was also more frequent in primary congenital glaucoma (23.8%) than in secondary congenital glaucoma (16.6%); however, this difference was not statistically significant. What needs to be mentioned regarding positive family history in this study is the presence of significantly higher rates in iridotrabeculodysgenesis-aniridia, Axenfeld-Rieger syndrome, and Walker-Warburg syndrome, and the total absence in phakomatoses. Aniridia is an uncommon bilateral ocular disease which affects not only the iris but also the cornea, anterior chamber angle, lens, retina, and the optic nerve. Most of the cases are inherited by autosomal dominant transmission with high penetrance and variable expression. About two-thirds of the cases have an affected parent, but sporadic cases have also been reported. [20] In this study, 54.4% of patients with iridotrabeculodysgenesis-aniridia had a positive family history. Axenfeld-Rieger syndrome is a spectrum of disease associated with several ocular and/or systemic findings which is inherited in an autosomal dominant pattern. [21,22] Very few studies exist in the literature that describe the rate of positive family history. A study with a relatively large number of patients from Germany reported a positive family history in 46.2% of 26 Axenfeld-Rieger patients who also showed signs of glaucoma or elevated intraocular pressure. [9] This finding is consistent with the positive family history rate of 47.1% of the 23 patients in this study.
Meanwhile, Walker-Warburg syndrome is a very rare autosomal recessive congenital muscular dystrophy associated with cerebellar and ocular abnormalities. Ocular abnormalities include anterior segment anomalies (cataracts, shallow anterior chamber, microcornea and microphthalmia, and lens defects) and a spectrum of posterior segment anomalies (retinal detachment or dysplasia, hypoplasia or atrophy of the optic nerve and macula, and coloboma). Glaucoma or buphthalmos may also be present. [23] Positive family history is also a hallmark of this genetic disease. [24] Conversely, Sturge-Weber and Klippel-Trenaunay syndromes are neurocutaneous disorders in the phakomatoses group of diseases which are typically sporadic. [25,26] Consistent with this knowledge, family history was not reported in any patient with phakomatoses in this study. The mean age at first presentation for all patients in this study was 110.22 ± 160.35 days. Overall, 34.4% of the patients presented to the referring ophthalmologist or congenital glaucoma specialist within the 1st week, 45.1% within the 1st month, and 96.4% within the 1st year of life. Although secondary glaucoma patients presented slightly earlier than primary congenital glaucoma patients, this difference was not statistically significant. Patients with Peters anomaly, iridotrabeculodysgenesis-aniridia, and anterior segment dysgenesis presented earlier than those with iridotrabeculodysgenesis, Axenfeld-Rieger syndrome, and phakomatoses in the secondary congenital glaucoma group; however, statistical analysis could not be performed due to the small sample sizes of more than one subgroup of secondary congenital glaucoma. These results are comparable to previously reported studies. [4,5,8] The mean maternal age was 27.13 ± 5.61 years for all patients, and it did not differ significantly between primary and secondary congenital glaucoma patients. The overall rate of preterm delivery was 5.8%; a higher rate was observed in the secondary congenital glaucoma group (8.8%) due to the presence of subjects with associated ROP. The difference between primary and secondary congenital glaucoma, however, was not statistically significant when ROP subjects were excluded. A population-based, case-control study of isolated primary congenital glaucoma reported a preterm birth rate of 28.9% in 52 subjects, and this result was significantly higher than in the matched controls and population controls. However, this difference was explained by the Gypsy demographics of several mothers included in the study. The authors reported that certain anthropological characteristics of these mothers, including low body weight and small stature, together with their low socioeconomic status, may result in a higher rate of low birth weight and associated prematurity. [27] Finally, as expected, cloudy cornea and buphthalmos were the most frequent signs that had alerted the parents to seek a specialist according to this study. Pediatrician concern should not be overlooked, as pediatricians should be able to recognize this disease and refer promptly to a specialist, since early intervention is usually curative. It is also important to note that more thorough investigations regarding the identification of gene mutations associated with congenital glaucoma are needed. These investigations will improve our knowledge about the genetics of congenital glaucoma and will provide us with new developments in genetic counseling and antenatal detection of this preventable disease.
2018-04-03T01:34:10.672Z
2014-05-01T00:00:00.000
{ "year": 2014, "sha1": "b9575ff49a2942b570014c9e5b25b703c82d205c", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/0301-4738.126988", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "dacbfcd3b9d493851b5ffa9447644633ab32da66", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
239468800
pes2o/s2orc
v3-fos-license
Evaluation of the Presence and Viability of Mycobacterium bovis in Wild Boar Meat and Meat-Based Preparations The aim of the present study is to provide information about the ability of Mycobacterium bovis to survive within wild boar (Sus scrofa) meat and meat-based preparations and the duration of this survival, and to consider the preservation of its infectious potential toward humans and animals. Meat samples were artificially contaminated with an M. bovis field strain and then stored at −20 °C, while two sausage batches were contaminated with the same field strain at two different concentrations, 10⁵ CFU/g and 10³ CFU/g, before storing them in proper conditions to allow for their ripening. A third sausage batch was contaminated by adding 2 g of wild boar lymph nodal tissue with active tuberculous lesions to the meat mixture. Bacteriological and biomolecular (PCR) methods were used to test the meat and sausage samples every 60 days and every 7–10 days, respectively. M. bovis was detected as still alive and viable on the frozen meat at the last test on the 342nd day, while from the sausage samples, M. bovis was isolated until 23 days after contamination. Our results indicate that M. bovis can stay alive and viable for 23 days within sausages prepared with contaminated meat from infected wild boars. These products are usually eaten as fresh food after grilling, often at a cooking temperature that does not ensure complete inactivation of the pathogenic microorganisms present, which can pose a risk for humans to develop zoonotic tuberculosis.
Introduction
Currently, bovine tuberculosis is still an important global issue for human health and causes severe economic impact due to the difficult eradication process of the disease in cattle farms and due to the spread of this infection to wildlife. Indeed, as occurs in multihost epidemics, tuberculosis control and eradication in farmed hosts cannot be reached if it is not carried out together with disease control in wild reservoirs. Actually, few options for tuberculosis control in wildlife are available [1]. An important preventive action must be carried out by consumers and operators working in slaughterhouses to highlight the risk associated with the consumption of raw or undercooked meat [1]. A recent survey, conducted among Italian wild mammal hunters about the use of hunted animal meat, reports that 32% of them consume all meat within their family, 28% give some meat to friends and relatives, 3% give all meat away, and 17% give away about 1/3-1/2 of the meat, while 20% of the interviewed hunters did not answer the question [2]. We can therefore deduce that game meat consumption is no longer limited to hunters' families but involves a wider range of consumers too, up to distributing game products within group catering. In recent years, the game meat sector has shown a steady increase in demand and supply thanks to the significant increase in the number of some wild species, especially some ungulates such as wild boar (Sus scrofa). The growing commercial interest toward the possibility of wild boar meat as food has nevertheless also highlighted health hazards linked to many viral, bacterial, and parasitic pathogens that this animal species can spread to other animals, both wild and domestic ones, and to humans, as a result of direct and/or indirect contact with animals or by ingesting their meat [3,4]. Tuberculosis caused by Mycobacterium bovis (M.
bovis) is a widespread infection in the wild boar population of many European countries [5], including Italy, where bovine tuberculosis is still present within southern regions despite sanitation plans started about 40 years ago. Wild boars infected by M. bovis can become direct or indirect sources of contagion for humans and other wild and domestic animals [6,7]. In this wild ungulate, infection affects many lymph nodes and organs, mainly those belonging to the respiratory and digestive tracts, producing anatomopathological pathognomonic lesions on the retro-pharyngeal, sub-mandibular, bronchial, mediastinal, hepatic, mammary, sub-iliac, and popliteal lymph nodes and on the lungs and liver, even though we cannot exclude the involvement of other areas and organs. With regard to food safety, at present, few studies concerning the risk of transmission of M. bovis to humans due to the consumption of wild boar meat exist, despite the worrying prevalence of infection found in this species within many Italian areas [8][9][10]. This hazard is often overlooked because of the cooking process that meat usually undergoes [11][12][13]. However, recent studies confirmed M. bovis contamination in carcasses of regularly slaughtered cattle [14,15] and frozen carcasses of regularly slaughtered buffalo [16] ready for commercial distribution. In light of the above-reported observations for cattle and buffalo meat, acknowledging a potential risk of infection by M. bovis for humans due to the consumption of wild boar meat seems reasonable [17]. This likely hypothesis should not be overlooked, considering that the infectious load needed to cause human or animal disease through food-borne routes is still unknown. In Italy, sausages made from wild boar meat are food products in high demand and are often consumed within public catering and during gastronomic festivals. The risk of zoonotic tuberculosis linked to the consumption of these foodstuffs can be due to inadequate cooking if the sausages were produced using meat from wild boars infected by M. bovis or meat contaminated during slaughtering activities involving groups of animals. At least three cases of human tuberculosis caused by M. bovis in poachers and/or their relatives who had eaten wild boar meat were observed in the geographic area involved in this study. These data have been derived from local findings of some human health structures, but they have not been published as a scientific paper/report. The need for a risk assessment of human tuberculosis infection with M. bovis as a result of eating fresh and processed wild boar meat (e.g., sausages) is well justified by the worrying prevalence of tuberculosis infection observed in wild boar populations, as well as by the increase in human cases of the disease noticed within our territory, mainly observed among butchers, hunters, and common consumers (unpublished personal data). Our study intends to produce data about human infection hazards related to consuming meat and cured meats produced using wild boar meat contaminated with M. bovis. For this purpose, two main evaluations were carried out: • A risk evaluation for M. bovis contamination of carcasses of wild boar; • A risk evaluation for M. bovis contamination of meat and sausages produced using wild boar meat. The mycobacterium involved was detected alive and viable until 23 days after the preparation of artificially contaminated sausages and until 342 days on frozen meat samples.
M. bovis Contamination of Carcasses of Wild Boar
We considered a group of 18 animals killed during a single hunting trip. During veterinary post-mortem inspection, 3 out of the 18 killed wild boars showed disseminated tuberculous lesions in different organs and were to be destroyed. The remaining 15 carcasses were declared suitable for human consumption and cold stored until further meat processing. Sterile swabs were used for microbiological sampling from the surfaces of the muscle tissue of these 15 carcasses; all samples were tested for M. bovis presence using bacteriological examination (BE) and polymerase chain reaction (PCR).
M. bovis Contamination of Meat and Sausages Produced Using Wild Boar Meat
In order to assess the risk linked to consuming wild boar meat and sausages, we needed to simulate the production and experimental contamination of these foodstuffs. For our purposes, we planned and performed different activities during four experimental sessions: • During the first session, we prepared fresh sausages from minced wild boar meat, and experimental contamination of the mixture was achieved by adding infected material consisting of small portions of lymph nodal tissue with active tuberculous lesions (the first experimental batch). The mixture, weighing about 1200 g, was made up of wild boar meat that was minced and seasoned with salt and ground pepper. We contaminated it by adding about 2 g of pharyngeal lymph nodes from a wild boar with clear active tuberculous lesions, which had previously tested positive for the presence of M. bovis through bacteriological examination. Fresh sausages, weighing about 100 g, were produced using the contaminated mixture, and microbiological evaluations were performed on six of these portions at 7-10 day intervals. The sausage samples were stored at room temperature (10-18 °C) in a controlled environment for 37-43 days. During this storage period, the fresh product underwent a ripening process, which allowed the sausages to dry out and to experience all of the biochemical modifications that characterize aged, cured meat; • During both the second and the third sessions, we produced fresh sausages using the same procedure previously described: six of them were used for experimental tests and were stored under the same conditions already described. For experimental contamination of these batches, we used a field strain of M. bovis, which was cultured and added to the matrices at different concentration levels: 10⁵ CFU/g for the second experimental batch and 10³ CFU/g for the third experimental batch; • During the last session (the fourth experimental batch), 700 g of wild boar loin meat was cut into slices of about 100 g in weight, and the surfaces of each slice were contaminated using a sterile swab soaked with a suspension of 10³ CFU/mL of the same field strain of M. bovis used for the previous two sessions. Contaminated slices of meat were stored at −20 °C. Single portions of this meat were tested with BE and PCR at intervals of about 60 days. The strain of M. bovis used to contaminate the experimental samples was the SB120/4,5,5,3,3,10,4,4,4,3,6,5 genotype that was isolated from the bronchial lymph nodes of a wild boar killed during a hunting trip and that, during veterinary post-mortem inspection, presented a complete primary bronchopulmonary tubercular complex.
This strain was chosen from a collection of 14 different strains, all of them isolated from wild boars, because it represents the most common genotype in Italy, and it was often detected within the wild boar population in Calabria starting in 2008. Over the years, epidemiological data have highlighted that this strain infects both cattle and wild boars in the same area. The genetic profile of this strain was identified using the spoligotyping method together with an ETR (exact tandem repeat) loci analysis in order to reveal any possible homology with other circulating strains within the same territory [18][19][20][21][22][23]. The order of the employed markers is reported in Table 1.
Analytical Procedures
All experiments were performed in a Biosafety Level 3 laboratory according to standard procedures intended for handling tubercular mycobacteria. Sausages produced as described above for the first, second, and third experimental batches, and later kept in a controlled environment at 10-18 °C, underwent BE and PCR tests throughout the survey period according to the following procedures. • Bacteriological examination: we used a method employing a combined system of liquid and solid media, but the latter medium was used only for samples that tested positive on the liquid medium. An amount of 10 g of meat/sausage was first minced and then diluted 1:2 using an 8.5% saline solution, before being homogenized and decontaminated using a 4% sodium hydroxide solution; the sample thus processed was incubated at 37 °C ± 1 °C for 30 min. After this phase, a 10% sulfuric acid solution was added to the homogenized sample in order to neutralize its alkalinity, and phenol red was used as a pH indicator. The sample was then centrifuged at 3000 rpm at 20 °C for 15 min, the supernatant was discarded, and the pellet was re-suspended in 2 mL of a phosphate buffer at pH 6.8. For the first culturing step on the liquid medium, a Mycobacteria Growth Indicator Tube (MGIT™) was used, and an aliquot of 0.5 mL of the phosphate buffer suspension was inoculated on MGIT™ 960, previously supplemented with 0.8 mL of PANTA MGIT™, made up of a growth supplement and a mix of antibacterial drugs. The vials were then incubated at 37 °C in the specific incubator, the Bactec™ MGIT™ 960 System (Becton Dickinson); the instrument automatically identifies and marks positive vials if mycobacterial growth occurs. From each positive vial, 0.2 mL of the sediment was transferred to solid egg-based media, Lowenstein-Jensen (LJ) and Stonebrink (ST); it was further incubated at 37 °C ± 1 °C for a maximum period of 8 weeks and was monitored weekly to verify the growth of typical colonies. If no typical colony was observed at the end of this last incubation period, the sample was reported as negative. All bacterial isolates belonging to Mycobacterium spp. were sent to the National Reference Centre for Bovine Tuberculosis at the Experimental Zooprophylactic Institute of Lombardia and Emilia Romagna in order to perform further evaluations of their genetic profiles; • Polymerase chain reaction: DNA extraction was performed by applying mechanical lysis to the sample tissue, and the supernatant was then collected and processed using a commercial extraction kit (QIAamp DNA mini kit, Qiagen, Germany).
Extracted DNA underwent a real-time PCR targeting the specific genetic insertion region IS6110, a highly repeated sequence in strains belonging to the genus Mycobacterium, using primers complementary to the target sequence that amplify a 209 bp amplicon. Any possible presence within the sample of substances able to inhibit the reaction was monitored by adding an internal control to the PCR mix; the primers and probe used are presented in Table 2. For the amplification procedure, the commercial kit Quantifast pathogen +IC KIT (Qiagen) was used, and the following PCR protocol was adopted: initial denaturation at 95 °C for 5 min, then 45 cycles of denaturation at 95 °C for 15 min and annealing/extension at 60 °C for 30 min. Samples were reported as positive if they showed the following cycle threshold (CT) value for FAM: 5 ≤ CT ≤ 38. The whole process was previously validated as an internal method and showed values ≥95% for both sensitivity and specificity. Each sausage portion used as an analytical sample was checked starting from day 0 (T0, the preparation date) and about every 10 days thereafter, for a total of six examinations performed with each method, aiming to detect the presence and viability of the inoculated M. bovis by means of PCR tests, which revealed the presence of deoxyribonucleic acid (DNA), and of isolation through culture techniques, which made it possible to confirm its viability. Meat portions from samples of the fourth batch were similarly tested following the different time intervals reported in the previous sub-paragraph. Neither the BE nor the PCR methodologies were used here to quantify the level of contamination but, rather, simply to reveal the presence of the mycobacteria, so no colony counting was performed after the cultures, and no calibration curve was created for the molecular analysis.
Statistical Analysis
The experimental results underwent statistical analysis using SPSS® (Statistical Package for Social Sciences), software from IBM® (International Business Machines Corporation, Armonk, NY, USA). The tests employed for this purpose were chosen by taking into account the limited number of sample units and the nature of the binary outcome produced by the laboratory analyses, which has to be processed as a categorical/nominal variable. In the beginning, curves following the Kaplan-Meier method were generated, and we evaluated them in a "time to failure" model, referring to the failure to detect/isolate the pathogen as the event of interest for both analytical methods; indeed, the data reflect analytical detection rather than the presence of live mycobacteria. Then, we used a log-rank test (the Cox-Mantel log-rank test) in order to obtain a first evaluation of the differences between groups of data. To confirm the significance of the results and to identify the effect of each considered variable within the system, we also performed a regression according to the Cox model after verifying that it was adequately applicable. We considered p values equal to or below 0.05 as statistically significant, and hazard ratios (HRs) with a 95% confidence interval (CI) were calculated. We examined the ability of the two analytical methods used to detect M. bovis contamination and the effects of other parameters, such as different contamination levels, types of matrices, and the contamination procedure, considering all of the analytical results from the first three batches (whole data set) and the separate results obtained from the two analytical methods. For each considered variable parameter, we calculated the HR by comparing the outcomes produced by data groups characterized by differences related to that specific parameter, and for this purpose we needed to choose a reference/control group. Within each data set analysis, the chosen category was the one with the lowest median value for the period of time during which detection of the pathogen in question was possible. Due to the different time intervals adopted for the tests on samples from the fourth experimental batch compared with those adopted for the tests on the other batches, the data originating from these analyses were excluded from the above-mentioned comparisons. Moreover, with regard to contamination levels, on the basis of the limited available bibliography and of the results from our previous studies, we hypothesized that contamination realized by adding naturally infected lymph nodes produces contamination at an intermediate level compared with the levels achieved in the second and third experimental batches.
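For readers who want to reproduce the "time to detection failure" analysis outside SPSS, a hedged sketch using the open-source lifelines package is shown below. The detection-failure times, censoring flags, and group labels are invented placeholders rather than the study data, and the Cox regression step is omitted for brevity (lifelines provides CoxPHFitter for that purpose).

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical example: days until the pathogen could no longer be detected;
# failed = 1 means detection failure occurred within the observation period.
df = pd.DataFrame({
    "days":   [23, 30, 37, 43, 12, 22, 30, 37],
    "failed": [1,  1,  0,  0,  1,  1,  1,  0],
    "method": ["PCR"] * 4 + ["BE"] * 4,
})

kmf = KaplanMeierFitter()
for method, grp in df.groupby("method"):
    kmf.fit(grp["days"], event_observed=grp["failed"], label=method)
    print(method, "median time to detection failure:", kmf.median_survival_time_)

pcr, be = df[df["method"] == "PCR"], df[df["method"] == "BE"]
result = logrank_test(pcr["days"], be["days"],
                      event_observed_A=pcr["failed"], event_observed_B=be["failed"])
print("Cox-Mantel log-rank P value:", result.p_value)
```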
For each considered variable parameter, we calculated the HR by comparing the outcomes produced by data groups characterized by differences related to that specific parameter, and for this purpose we needed to choose a reference/control group. Within each data set analysis, the chosen reference category was the one with the lowest median value for the period of time during which detection of the pathogen was possible, compared with the other groups. Due to the different time intervals adopted for the tests on samples from the fourth experimental batch compared with those adopted for the tests on the other batches, the data originating from these analyses were excluded from the above-mentioned comparisons. Moreover, with regard to contamination levels, on the basis of the limited available bibliography and of the results from our previous studies, we assumed that contamination achieved by adding naturally infected lymph nodes can be considered to produce an intermediate contamination level compared with the levels achieved in the second and the third experimental batches.
Evaluation of M. bovis Contamination of Carcasses of Wild Boar
For wild boar carcasses that were cold stored and sampled by swabbing the muscle surfaces, PCR tested positive for the presence of M. bovis in 2 out of the 15 animals. The BE results, instead, were always negative.
Evaluation of M. bovis Contamination of Sausages Produced Using Wild Boar Meat (First Experimental Batch)
Analyses were performed at intervals of about 7-10 days starting from T0. As presented in Table 3, for the first three checks (T0, T1, and T2), M. bovis was always detected in the samples by PCR, while BE always tested negative. At the fourth check, carried out 23 days after the preparation/contamination date (T3), M. bovis was detected by both PCR and BE. After further analysis, the isolated strain proved to be genetically correlated with the strain obtained from the organs of the infected animal used as the source of the retro-pharyngeal lymph nodes added to the experimental samples. At the fifth and sixth checks (T4 and T5), PCR still tested positive, but no bacterial growth was found using BE. For the second batch, samples were checked at intervals of about 7-10 days starting from T0, and the results are shown in Table 4. For the first four checks, M. bovis was always detected in the matrices by PCR, and BE tested positive until 22 days after preparation/contamination (T3). At T4 and T5, instead, M. bovis was detected by PCR while BE tested negative. The isolated strain was proven to be genetically correlated with the strain used for the experimental contamination. The results of the third batch are shown in Table 5. For all checks, BE tested negative for the presence of M. bovis, while PCR detected its presence at T0, T1, and T4. The results for the fourth batch are presented in Table 6. The check scheduled at T2 was not performed due to a sanitary emergency involving SARS-CoV-2.
Statistical Analysis
Data from the fourth experimental batch were only analyzed through the production of Kaplan-Meier curves related to the two analytical methods; we could not perform further evaluations because the two groups of data fully overlapped. Indeed, both techniques enabled the detection of contamination during the whole observation period (except for the tests at T2, which were not carried out).
On the other hand, Kaplan-Meier curves related to data from the other batches indicate that PCR can detect the pathogen for a longer period than BE (mean values of 37 days and 30 days, respectively, for the whole data set) (Figure 1a). With regard to this parameter, we observed differences in the cumulative probability of detecting the pathogen at 31 days ranging from 28% for contamination by bacterial cultures (more specifically, 17% and 50% for the third and second batches, respectively) to 50% when samples were contaminated using naturally infected tissue (first batch). Figure 1b,c present the curves obtained from data separated according to sample contamination level and to the contamination source/procedure, respectively. Concerning the first parameter, the median time value during which detection was achieved was lower for samples from the third batch (12 days), and for both methods the values observed were higher for samples from the second batch. Separately examining the data produced by the cultural and molecular analyses, dissimilar pictures emerged from a comparison of the curves related to the different contamination levels. Namely, for BE the first and the third batches generated similar median values, while the results for the second batch clearly differ, proving to be higher than the previous ones. On the other hand, the PCR data produced a curve for the first batch with a median value quite similar to that of the second batch, and a lower value can be pointed out only for the batch characterized by a low contamination level. Concerning the second parameter, the median time values were broadly similar, despite a slightly lower value for samples contaminated by adding infected lymph nodes, but by evaluating the data separately according to the analytical technique used, we noticed some differences. All curves produced with the BE results have a similar trend, and detection was possible for a slightly longer period in samples of the group made up of the second and third batches. Instead, for the PCR curves the trend diverged and, although the periods enabling pathogen detection were similar to those observed for BE with regard to the two batches contaminated with bacterial cultures, this period was extended for samples from the first batch and reached the end of the observation period.
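The cumulative probabilities of detection at 31 days quoted above are read directly off the fitted Kaplan-Meier curves: because the modelled event is loss of detectability, the survival function at day 31 gives the probability that the pathogen is still detectable at that point. A minimal sketch with made-up follow-up times, continuing the lifelines example above:

```python
from lifelines import KaplanMeierFitter

# Hypothetical follow-up days and detection-failure flags for one batch/method.
days   = [12, 22, 23, 32, 41, 41]
failed = [1,  1,  1,  1,  0,  0]     # 0 = still detectable at the last check

kmf = KaplanMeierFitter().fit(days, failed, label="first batch, PCR")
print(f"cumulative probability of detection at day 31: {kmf.predict(31):.0%}")
```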
Table 7 presents the main results of further statistical data processing, together with the reference categories used to calculate HR values. The choice to adopt a multivariable regression model allowed us to significantly improve the likelihood values when compared with the simple regression models. The difference between the results of the two methods used to reveal contamination throughout the observation period was proven to be statistically significant in both the log-rank test and the Cox regression (Table 7), but when each single batch was analyzed separately, the above-mentioned difference maintained significance only within the first experimental batch (p = 0.025). For this parameter, we calculated a general HR value of 0.308, and values lower than 1 were observed in the single-batch analyses too (the maximum HR value was related to the third batch: HR = 0.667). Additionally, the different contamination degrees were proven to affect the analytical results significantly, and an HR value of 0.459 for the whole data set was observed. Hazard ratio values < 1 were calculated in relation to each method too, even if a significant difference between data groups was confirmed only for the PCR analyses; however, for these latter data, we observed a loss of significance when the multivariable model was adopted (see Table 7). With regard to the same variable, a paired comparison performed within the entire data set enabled us to point out statistically significant differences only between data from the second and third batches (p = 0.015), and this was confirmed by the multivariable analysis too (p = 0.045). A comparison between the results from the first and third batches produced p values a little above the set significance threshold (p = 0.066), and for this comparison the only significant result emerged when the analytical results were separated by the method employed (p = 0.046 in the log-rank test for the PCR results), even though significance was not confirmed when adopting the Cox model. Finally, concerning the effects of the different contamination sources or food matrices, as presented in Table 7, the statistical tests showed the absence of a significant difference between the two groups. The related HRs differed depending on the data set from which they originated, but all three data processing procedures produced higher HR values as a result of the multivariable analysis compared with the results obtained from a simple regression model (see Table 7). From the overall comparison and with regard to the molecular tests, the multivariable regression produced HR values < 1; conversely, for the bacteriological exams we observed an HR value that was slightly lower than 1 from the simple regression analysis but slightly higher than 1 from the multivariable one.
Discussion and Conclusions
The present study produced data and useful information about the ability of M. bovis to survive in a viable state within matrices such as wild boar meat and sausages. The presence of tuberculosis caused by M. bovis in wild boar is well documented, and it allows the wild reservoir to infect cattle farmed within the same area, affecting the outcome of the eradication programs.
This wild species can often move across territories, which can result in a re-introduction of the disease into cattle farms that had previously become tuberculosis-free. The risk of zoonotic tubercular infection in humans is then linked both to the persistence of tubercular infection in cattle and to its presence in the wild boar population. The possibility of human infection from the meat of slaughtered animals is one of the epidemiological issues related to zoonotic tuberculosis caused by M. bovis that still needs to be definitively verified. Many international authorities operating in the field of food safety, including the European Food Safety Authority (EFSA) and the Advisory Committee on the Microbiological Safety of Food (ACMSF), have developed and provided opinions based on scientific findings. They report that, despite the possibility of bovine meat having a role in spreading M. bovis to humans, the related risk is substantially low [25] or even absent [26], and that it can be ignored with a medium uncertainty level [27,28] due to the cooking process that meat undergoes before consumption and thanks to the control measures enforced. Nevertheless, we need to highlight that the same documents do not exclude the possibility that meat, in specific situations, can be the origin of cross-contamination of other foodstuffs, utensils, work surfaces, refrigerators, and kitchens. Bovine meat itself, if it comes from facilities that also slaughter cattle testing positive in the Mantoux test, can turn out to be contaminated by M. bovis when it enters the commercial market [14,15]. Nor can we exclude the consumption of undercooked or uncooked meat, considering recently reported dietary habits. The first experimental batch was arranged in order to reproduce a situation similar to one that might occur under natural conditions during traditional practices for wild boar sausage preparation by hunters' families. Indeed, during these procedures, fragments of organs or lymph nodes affected by tuberculous lesions can enter the mixture used to produce processed meat. In wild boars with active tuberculosis, the sub-iliac, popliteal, and mammary lymph nodes, which are situated near muscles usually used in cured meat production, are often affected. The results of our experimental tests indicate that M. bovis can stay alive and viable within sausages up to 23 days after the preparation date (T3). The BE-negative results observed at T1 and T2 may be due to an uneven distribution of the contaminating material in the meat mixture. Indeed, as with the other members of the MTBC (Mycobacterium tuberculosis complex), M. bovis is an endocellular pathogen that survives and replicates mainly within macrophages, whereas it does not replicate in the external environment or under conditions similar to those adopted for our experimental trials [29]. Similar results were also obtained using 10^5 CFU/g to contaminate samples of the second experimental batch: the re-isolated strain of M. bovis SB0120 was alive and viable up to 22 days after batch preparation. This strain of M. bovis is the most widespread genetic profile in Italy and has also been isolated in four different cattle farms within the area evaluated in our study. Analyses of the third experimental batch, in which 10^3 CFU/g was used to contaminate the samples, did not enable the re-isolation of M. bovis from any of the tested samples.
The low concentration of infecting material dispersed in the matrix was probably below the recovery limit of the cultural method employed, as also suggested by the PCR test remaining positive up to 32 days (T4). However, we cannot exclude the possibility that low bacterial levels, insufficient to induce growth on the cultural media, were nevertheless able to cause disease in humans and animals. With regard to the latter, experimental studies have proved that 1 CFU of M. bovis, containing six to ten viable bacteria, administered via endo-tracheal inoculation is a sufficient amount to cause disease in cattle and badgers (Meles meles) [30,31]. Within the fourth batch, we always detected M. bovis as being alive and viable in experimentally contaminated meat stored at −20 °C, even 342 days after contamination. Although freezing at −80 °C ensures an improvement in obtaining live and viable mycobacteria [32], we chose to freeze samples at −20 °C because this is the procedure adopted by hunters' families and within public catering to store wild boar meat supplies. Operations involving skinning, evisceration, cutting, and storing the meat of animals killed by the hunters themselves, often in poor hygienic conditions, produce a concrete risk of the meat being contaminated with pathogenic bacteria, including M. bovis [33]. With regard to the statistical analysis, the data obtained provide interesting insights and lead to consistent conclusions, despite the lack of statistical significance shown by some comparisons and the limitations due to the 95% CI ranges. Indeed, the confidence interval ranges related to data from the molecular analyses are wide, and this condition limits the reliability of the calculated HR values; an improvement in the strength of this index could be achieved by increasing the sample size in further studies. In general, the results produced by our experiments showed a reduction of 69% in the probability of not detecting contamination in the tested samples if a molecular technique was adopted rather than a microbiological one, with a statistically significant impact on the analytical results; the extent of this reduction was obviously more limited when low contamination levels were involved. A temporal analysis of the data likewise indicated that BE provides a performance reasonably similar to that shown by PCR (cumulative detection rate ≥ 75%) until the fourth week only if the matrix was contaminated with high bacterial counts, a condition that could affect the number of mycobacteria surviving during the storage period of foodstuffs. Instead, when low contamination occurred, a loss of efficiency was already observed from the third week for PCR too. Of course, with the increase in the degree of contamination, both methods showed an improvement in their ability to detect contamination, and, in general, the risk of not detecting the mycobacterium gradually and significantly decreased (p < 0.05 for both the paired comparison between the second and third batches and the overall Cox model generated from the whole data set). However, we have to highlight that the cultural analyses underwent a more limited reduction in the risk of testing negative, while for PCR this reduction was greater and occurred earlier.
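The 69% figure quoted above follows directly from the overall hazard ratio of 0.308 reported earlier for the method comparison: the proportional reduction in the instantaneous risk of failing to detect contamination is one minus the hazard ratio. A trivial check (the second value is the contamination-level HR from the same table, interpreted relative to its reference group):

```python
# Converting the reported hazard ratios into the percentage risk reductions cited in the text.
hr_pcr_vs_be = 0.308                       # whole data set, PCR versus BE
print(f"reduction with PCR: {1 - hr_pcr_vs_be:.0%}")            # ~69%

hr_contamination = 0.459                   # effect of contamination degree, whole data set
print(f"reduction vs the reference contamination group: {1 - hr_contamination:.0%}")   # ~54%
```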
With regard to the last considered variable, we generally noticed that detection of the pathogen seemed to be slightly easier, or more probable, throughout the observation period in the batch contaminated with naturally infected material, and that no significant difference emerged between data groups separated on the basis of the contamination source. These observations are probably the result of the different performance of PCR with respect to BE and of the different requirements underlying the two methods. In fact, the addition of cultured bacterial colonies to the food matrix ensured, at least in the early stages of the trial, the presence of live and viable mycobacteria: this is the basic condition that makes bacterial detection possible in BE, but it does not affect the performance of the molecular analyses. This would also explain the similar median values for the detection period for both methods when assessing this group of samples. On the other hand, within naturally infected tissue, part of the bacterial population may no longer have been viable, making bacteriological isolation less feasible but leaving the detection capability of PCR almost unchanged, which is the reason PCR was able to perform well for a longer period. Finally, as already mentioned, evaluations involving the fourth experimental batch confirmed the effect of freezing on keeping the infectious load active. The choice to create both simple and multivariable regression models enabled us to make some other interesting observations: although, among the parameters considered, only variations in the "method" and "contamination level" parameters seemed to significantly affect the probability of properly detecting the pathogen in meat or processed meat, our results suggest it is important not to overlook other variables potentially responsible for confounding effects. For instance, we noticed that, despite the lack of statistical significance of the results produced by comparing the two different procedures/sources of contamination, at first glance an improvement seemed to occur in the performance of the analyses of the samples contaminated with infected lymph nodes compared with the other group. The extent of this improvement, however, was scaled down when we also considered other covariates, specifically when we took into account the effects of the different bacterial counts contaminating the matrix. In particular, for the microbiological examinations, the risk of not detecting the mycobacterium underwent a quite limited reduction (only 10%) when comparing the results from testing the matrix to which naturally infected tissue was added with the results from the samples of the second and third batches. Nevertheless, this apparent "advantage" tends to be reversed if we analyze the data using a regression model that also includes the contamination level. Indeed, this latter model showed that the culturing method seemed to present a slightly higher risk of not detecting contamination in samples from the first batch compared with the other two batches (HR > 1). The worsening in analytical performance is probably due to the fact that the samples from the first group had a contamination level that negatively affected the BE output. Further proof of the reciprocal influence of the two variables considered was obtained by evaluating the HR calculated for the comparison of the groups with different contamination levels using microbiological analysis.
We can hypothesize that the more limited reduction in risk observed when comparing a low-contamination batch with a high-contamination one, relative to the reduction observed when comparing a low-contamination batch with a moderately contaminated one, may be due to a confounding effect and to the co-occurring variation in both of the considered parameters in the latter paired comparison. Indeed, when we added the last covariate to the Cox model generated from the whole data set, the HR values for the effects of contamination level also showed a slight increase, pointing to a reduction in the differences between the groups. The same change was observed for the PCR-only data but was not present for the BE data, which instead presented an inversion of the hazard trends. This disturbance effect could underlie the above-mentioned unexpected result, considering that, unlike the second and third batches, the first and third batches differed in both contamination level and source, and were thus further influenced by this latter parameter in different ways for the two analytical techniques. For the sake of completeness, we report that the "method" parameter seemed neither to suffer from nor to produce any interference effect when introduced into the multivariable regression model. From the results of this study, the role of meat in spreading M. bovis infections to humans, despite being less relevant than the inhalation route of infection, the ingestion of unpasteurized milk and milk products, and transmission by a dermal route, must be further and adequately evaluated in order to comprehensively quantify the real risk. To do this, we need to consider the many ways in which the meat is consumed. The chance of achieving heat inactivation of M. bovis in foodstuffs is correlated with the temperature/time ratio [34]. Moreover, it has been demonstrated that, under certain conditions, M. bovis can resist high temperatures and maintain its infectiveness and pathogenicity [35]. We estimate that the reduced temperature/time ratio to which fresh wild boar sausages are generally subjected during cooking could represent one of these conditions. In addition, we stress that the analytical detection of contamination, essential in prevention and monitoring activities, can be affected by many factors, which have to be taken into account. In more "natural" conditions, such as those simulated in the first experimental session, biomolecular analysis may represent a better choice than BE to identify a contaminated matrix, but even so, for the reasons reported above, it is not possible to express an unquestionable response about the potential of the food product as a means of transmitting infection. Data suggesting that zoonotic tuberculosis is correlated with meat from wild species are still insufficient and, in any case, underestimated. The present study demonstrated the survival and viability of M. bovis in artificially contaminated wild boar sausage up to 23 days after preparation. Wild boar sausages are a type of foodstuff traditionally consumed as a fresh product within 2-3 days of manufacturing, after grilling or cooking on a griddle, a process that, due to the reduced temperature/time ratio, is not always able to completely inactivate pathogenic microorganisms present within the food product. The remarkable increase in the number of animals killed during hunting activities, the handling of meat from wild animals, from evisceration to storage procedures, as well as the proven presence of M.
bovis in wild boars are all issues that need adequate hygienic safety assurance, specifically for the use of meat from wild fauna, in order to protect consumer health.
2021-10-15T15:19:21.497Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "fcebfa089cf591ddb8c65357fd8230f58bc99399", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2304-8158/10/10/2410/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "df9dda0fe4dfaba0f53c384aa36459b230c9d18d", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
30375865
pes2o/s2orc
v3-fos-license
Serum Procalcitonin Measurement and Viral Testing to Guide Antibiotic Use for Respiratory Infections in Hospitalized Adults: A Randomized Controlled Trial Abstract Background. Viral lower respiratory tract illness (LRTI) frequently causes adult hospitalization and is linked to antibiotic overuse. European studies suggest that the serum procalcitonin (PCT) level may be used to guide antibiotic therapy. We conducted a trial assessing the feasibility of using PCT algorithms with viral testing to guide antibiotic use in a US hospital. Methods. Three hundred patients hospitalized with nonpneumonic LRTI during October 2013–April 2014 were randomly assigned at a ratio of 1:1 to receive standard care or PCT-guided care and viral PCR testing. The primary outcome was antibiotic exposure, and safety was assessed at 1 and 3 months. Results. Among the 151 patients in the intervention group, viruses were identified in 42% (63), and 83% (126) had PCT values of <0.25 µg/mL. There were no significant differences in antibiotic use or adverse events between intervention patients and those in the nonintervention group. Subgroup analyses revealed fewer subjects with positive results of viral testing and low PCT values who were discharged receiving antibiotics (20% vs 45%; P = .002) and shorter antibiotic durations among algorithm-adherent intervention patients versus nonintervention patients (2.0 vs 4.0 days; P = .004). Compared with historical controls (from 2008–2011), antibiotic duration in nonintervention patients decreased by 2 days (6.0 vs 4.0 days; P < .001), suggesting a study effect. Conclusions. Although antibiotic use was similar in the 2 arms, subgroup analyses of intervention patients suggest that physicians responded to viral and biomarker data. These data can inform the design of future US studies. Clinical Trials Registration. NCT01907659. Lower respiratory tract infection (LRTI) commonly causes adult hospitalization, and viruses account for many of these illnesses [1][2][3][4]. Increased availability of multiplex polymerase chain reaction (PCR) assays allows clinical laboratories to rapidly detect a wide variety of respiratory viruses [5,6]. Despite viral detection, most patients in US hospitals receive broad-spectrum antibiotics, partly because of concerns about bacterial coinfection [7][8][9]. Reports from Europe suggest that elevated serum procalcitonin (PCT) levels predict bacterial infection and that PCT algorithms can be used to safely guide antibiotic use in LRTI, resulting in significant reductions in antibiotic duration [10,11]. However, PCT-based treatment algorithms for respiratory infections have not been widely adopted in the United States [12][13][14]. In a prior study of adults hospitalized with respiratory illnesses, we showed that viral infection was common and that, of those with viruses, 60% had no evidence of bacterial infection, although most received antibiotics [15]. Results of a post-hoc physician survey indicated a perception that serum biomarkers coupled with viral testing would be most helpful to guide antibiotic decisions. Nevertheless, confidence in PCT-guided algorithms among US physicians will likely be an iterative process. 
Moreover, because early diagnosis and treatment with antibiotics is recommended by professional societies for patients hospitalized with pneumonia and mandated in clinical practice, the potential to prevent inappropriate antibiotic use in the setting of viral LRTI is likely greatest in the population of patients without clinical and diagnostic evidence of a definite pneumonic process. We therefore believe that initial US clinical trials for PCT-guided care of LRTI should focus on persons at lower risk for invasive bacterial disease and without definitive pneumonia. Thus, we investigated the feasibility of conducting PCT-based algorithm trials in a US hospital and the value of concurrent viral testing to reduce unnecessary antibiotic use in adults hospitalized with nonpneumonic LRTI.
Study Design
The trial was designed as a 1-year, open-label randomized clinical trial in which 300 patients hospitalized with nonpneumonic LRTI were randomized 1:1 to standard care or PCT-guided care in combination with multiplex viral PCR testing. The trial was registered on ClinicalTrials.gov (NCT01907659).
Site
Rochester General Hospital (RGH), a 528-bed community hospital in Rochester, New York, was the study site. RGH uses an electronic medical record (EMR), and most inpatients are cared for by staff of the Department of Internal Medicine. A duplex PCR for influenza virus and respiratory syncytial virus (RSV; hereafter, "hospital PCR") is routinely available, and PCT testing is only available for patients in intensive care units (ICUs). RGH was also the site of a previous respiratory illness surveillance study (conducted during 2008-2011), for which the study population and inclusion criteria were identical to those for the present study [15].
Information Sessions
Prior to the study, physicians and midlevel providers (nurse practitioners and physician assistants) were formally educated regarding the results of the previous surveillance study, causes of respiratory infections, antibiotic guidelines and antibiotic complications, and the use of PCT algorithms to guide antibiotic therapy.
Subject Recruitment
Adults ≥21 years of age with symptoms compatible with LRTI (ie, admission diagnosis of pneumonia, acute exacerbation of chronic obstructive pulmonary disease [COPD], bronchitis, asthma, influenza, viral syndrome, respiratory failure, or congestive heart failure [CHF]) were identified by reviewing the daily admission census. Patients with characteristics indicative of a high risk for bacterial infection (ie, ICU requirement, active chemotherapy or radiation, immunosuppression, definitive infiltrate on chest radiograph, enrollment systolic blood pressure of <90 mm Hg, or ≥15% band forms in peripheral blood) were excluded. Infiltrates were considered definitive if they were characterized as "unequivocal" on a radiology report. Persons with a clinical diagnosis of pneumonia on admission but ambiguous chest radiograph findings, such as a "possible infiltrate" or "infiltrate versus atelectasis," were not excluded. Patients who had conditions known to increase PCT levels (ie, trauma, renal failure, and pancreatitis) or who received antibiotics prior to admission were excluded. Subjects or their healthcare representative provided written informed consent, and the study was approved by the RGH and University of Rochester institutional review boards.
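The recruitment criteria above amount to a simple screening checklist. The sketch below is purely illustrative and was not part of the study protocol; the field names are hypothetical and simply mirror the inclusion and exclusion rules described in the preceding paragraph.

```python
# Illustrative eligibility screen based on the criteria described above (hypothetical fields).
LRTI_DIAGNOSES = {"pneumonia", "COPD exacerbation", "bronchitis", "asthma",
                  "influenza", "viral syndrome", "respiratory failure", "CHF"}

def eligible(pt: dict) -> bool:
    """Return True if a hospitalized adult meets the screening criteria sketched here."""
    if pt["age"] < 21 or pt["admission_diagnosis"] not in LRTI_DIAGNOSES:
        return False
    high_risk = (pt["icu_required"] or pt["on_chemo_or_radiation"] or pt["immunosuppressed"]
                 or pt["definite_infiltrate_on_cxr"] or pt["systolic_bp"] < 90
                 or pt["band_forms_pct"] >= 15)
    confounds_pct = pt["trauma"] or pt["renal_failure"] or pt["pancreatitis"]
    return not (high_risk or confounds_pct or pt["antibiotics_before_admission"])

# Example: a 68-year-old admitted with a COPD exacerbation and no exclusion criteria.
patient = {"age": 68, "admission_diagnosis": "COPD exacerbation", "icu_required": False,
           "on_chemo_or_radiation": False, "immunosuppressed": False,
           "definite_infiltrate_on_cxr": False, "systolic_bp": 118, "band_forms_pct": 2,
           "trauma": False, "renal_failure": False, "pancreatitis": False,
           "antibiotics_before_admission": False}
print(eligible(patient))  # True
```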
Enrollment Procedures
Enrollment was conducted in the morning within 24 hours after admission. At enrollment, demographic, clinical, and laboratory information and nose and throat swab specimens for PCR were collected. Serum samples were collected at admission and at least 12 hours later for PCT testing. Subjects were stratified by the presence of COPD and were randomly assigned at a ratio of 1:1, using blocks of 4, to receive standard care or the intervention. A notation was placed in the EMR for all subjects to indicate their study participation. Subjects randomly assigned to the intervention group had PCT and viral testing performed immediately. Subjects in the standard care group had samples frozen and tested at study termination.
Standard Care
Standard of care testing (bacterial and viral cultures of respiratory samples, hospital influenza/RSV duplex PCR [hospital PCR], and urine legionella antigen analysis) was obtained at the discretion of the providing team. The turnaround time for the hospital PCR was generally 1-2 hours after the sample was received. Urine pneumococcal antigen testing was not available in the hospital during the study period. Antibiotic decisions were made by the attending physician without intervention by the investigators.
Intervention
All standard of care diagnostic tests (bacterial and viral cultures of respiratory samples, hospital PCR, and urine legionella antigen analyses) were ordered at the discretion of the care-providing team. Serum PCT and viral/atypical pathogen PCR testing were performed as soon as possible. Two serum PCT levels were obtained, and the higher of the 2 was used for algorithm interpretation and data analysis. Results were reported in the EMR 2-3 hours after enrollment, and the treating team was notified by a text sent via page and by a simultaneous email providing the PCT algorithm. The algorithm was also available on the hospital website and on previously distributed pocket cards. The following information was communicated: "Serum PCT is a biomarker associated with bacterial infection, and can be used to guide therapy according to the highest value on admission OR at 12-24 hours. However, PCT is not a substitute for clinical judgment. For PCT values of ≤0.1 ng/mL, initiation of antibiotic treatment is strongly discouraged; for values of 0.11-0.24 ng/mL, initiation is discouraged; for values of 0.25-0.49 ng/mL, initiation is encouraged; and for values of ≥0.5 ng/mL, initiation is strongly encouraged."
Hospital Course Evaluation and Illness Follow-up
Study personnel reviewed the EMR daily until discharge, with attention to antibiotic use and safety outcomes (progression or new pneumonia, lung abscess, empyema, ICU care, respiratory failure, and death). A serious adverse event (SAE) was defined as ICU transfer or death within 30 days of enrollment. All SAEs were reviewed by an independent data safety monitoring board. Subjects were contacted by phone at 30 days and 3 months by personnel blinded to randomization, who collected information about healthcare utilization, antibiotic use and complications, and return to baseline health.
Serum PCT Level
Serum PCT levels were measured by VIDAS BRAHMS (bioMérieux), using the enzyme-linked fluorescent assay technique. The assay range is 0.05-200 ng/mL.
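The guidance quoted under Intervention above is a simple threshold rule applied to the higher of the two serum PCT values. The sketch below is illustrative only; as the communicated message stressed, the algorithm explicitly defers to clinical judgment.

```python
# Threshold rule from the communicated PCT algorithm (values in ng/mL).
def pct_recommendation(admission_pct: float, repeat_pct: float) -> str:
    """Map the higher of the two serum PCT values to the algorithm's advice."""
    pct = max(admission_pct, repeat_pct)   # highest value on admission OR at 12-24 h
    if pct <= 0.10:
        return "antibiotic initiation strongly discouraged"
    if pct <= 0.24:
        return "antibiotic initiation discouraged"
    if pct <= 0.49:
        return "antibiotic initiation encouraged"
    return "antibiotic initiation strongly encouraged"

print(pct_recommendation(0.08, 0.19))  # discouraged (higher value 0.19 falls in 0.11-0.24)
print(pct_recommendation(0.30, 0.62))  # strongly encouraged
```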
End Points
The feasibility end point for the trial was the ability to recruit and randomly assign 300 subjects in a 12-month period. The primary impact end point was duration of antibiotic therapy. A day of antibiotic therapy was defined as any day on which any dose of antibiotics was administered. Other measures of antibiotic exposure included discontinuation of antibiotics within 48 hours and discharge receiving antibiotics. Seven subjects discharged receiving long-term antibiotic therapy for anti-inflammatory properties were excluded from the analyses comparing the number of subjects discharged receiving antibiotics. Safety was assessed during the hospital stay and at 1 and 3 months. Safety end points included length of hospital stay and, at 1 and 3 months after discharge, respiratory complications, ICU care, death, and healthcare utilization.
Statistical Analysis
Sample size was chosen to ensure the feasibility of a 1-year pilot study, and therefore formal sample size and power calculations were not performed. Categorical variables were summarized by counts and proportions and compared using the Fisher exact test. Medians and interquartile ranges (IQRs) were used to describe continuous variables, with comparisons performed using the nonparametric Wilcoxon test. For intervention patients with a PCT level of ≤0.24 ng/mL, logistic regression was used to model algorithm compliance as a function of clinical covariates, including age, length of symptoms prior to admission, sputum culture results, signs and symptoms of illness, admission diagnosis, chest radiograph results, PCT level, and viral testing. SAS 9.4 was used for all analyses, with tests performed at the 2-sided 0.05 level.
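The two main comparison types described under Statistical Analysis can be sketched as follows. The counts and durations below are made up purely for illustration, and the sketch uses Python/SciPy, whereas the study's analyses were run in SAS 9.4.

```python
from scipy.stats import fisher_exact, mannwhitneyu

# Fisher exact test for a categorical outcome, e.g. discharged on antibiotics (yes/no) by arm.
table = [[52, 96],    # intervention:    yes, no  (hypothetical counts)
         [64, 81]]    # nonintervention: yes, no
odds_ratio, p_categorical = fisher_exact(table)
print(f"Fisher exact: OR = {odds_ratio:.2f}, p = {p_categorical:.3f}")

# Wilcoxon rank-sum (Mann-Whitney U) comparison of antibiotic duration in days.
intervention_days    = [2, 1, 6, 0, 3, 2, 4, 1]
nonintervention_days = [4, 0, 8, 5, 4, 7, 2, 6]
statistic, p_duration = mannwhitneyu(intervention_days, nonintervention_days)
print(f"rank-sum test: p = {p_duration:.3f}")
```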
Subject and Illness Characteristics
The planned sample size of 300 subjects was achieved in 7 months. From October 2013 through April 2014, 685 hospitalized patients were assessed; 151 eligible patients were randomly assigned to the intervention group, and 149 were randomly assigned to the nonintervention group (Figure 1). The 2 groups were well matched for demographic characteristics, underlying medical conditions, admission diagnoses, and severity of illness, with the exception that a significantly greater percentage of patients with CHF was randomly assigned to the nonintervention group (Table 1). The most common admission diagnoses in both groups were acute exacerbations of COPD (38%-39%), asthma exacerbation (18%-21%), pneumonia (19%), CHF (6%-11%), and influenza (6%-7%).
Figure 1. Flow of patients through the study. Reasons for exclusion from the study included intensive care unit (ICU) stay, antibiotic use for >24 hours prior to enrollment, active chemotherapy, conditions known to increase procalcitonin level (eg, renal failure, pancreatitis, and trauma), definite infiltrate on a chest radiograph (according to radiology report), >15% bands on a peripheral blood smear, and a systolic blood pressure (SBP) of <90 mm Hg at enrollment. Abbreviation: PCT, procalcitonin.
Laboratory Data
Overall, 132 viral diagnoses (64 in the intervention group and 64 in the nonintervention group) were made by any means in 128 subjects (4 subjects had 2 distinct viruses). FilmArray testing was performed for all intervention subjects during the study period and for all nonintervention subjects after study completion. In the intervention group, hospital PCR detected influenza virus or RSV in 13%, whereas the FilmArray test detected viral RNA in 42%. Notably, 19% of nonintervention subjects had positive results of hospital PCR, which was available to treating physicians; the most common viruses detected were influenza A virus (in 33% of cases), rhinovirus (19%), RSV (17%), human coronavirus (14%), and human metapneumovirus (8%). The majority of patients in both groups had low admission PCT values (≤0.24 ng/mL; Table 1). All 151 intervention patients had data on their serum PCT level at admission, and 139 had data from a subsequent measurement performed on day 2 of hospitalization. Of the 126 intervention subjects with an initial low PCT value, 5 had a higher second level, resulting in 121 intervention subjects with PCT values of ≤0.24 ng/mL at both time points. Bacterial diagnoses were made among 9%-10% of patients, with most made on the basis of sputum culture results and 25% associated with high PCT levels (>0.24 ng/mL). Three patients had positive results of a FilmArray assay for atypical bacteria, and the illnesses in all 3 were associated with low serum PCT levels. There were 216 bacterial blood cultures performed for subjects in both arms, and all but 1 had negative results. The exception involved 1 nonintervention patient, who had Streptococcus pneumoniae bacteremia. This illness was associated with a high serum PCT level on admission.
Primary Outcomes
Antibiotic exposure was measured on the basis of discontinuation of antibiotic treatment within 48 hours, discharge receiving oral antibiotics, and total duration of therapy (Table 2). There were no significant differences in duration of antibiotic therapy between intervention and nonintervention patients, although there was a trend toward a decreased number of intervention patients discharged receiving antibiotics (35% vs 44%; P = .09; Table 2A). When antibiotic exposure in the intervention subgroup with presumably the lowest risk for bacterial infection (ie, patients who tested positive for virus and had a low PCT level) was compared to that in the nonintervention group, we noted a trend toward fewer days of antibiotics prescribed (median, 2 days [IQR, 1-6 days] vs 4 days [IQR, 0-8 days]; P = .11). Consistent with these findings, intervention subjects with low PCT values were less likely to receive antibiotics for ≥48 hours or at discharge than those with high PCT levels (Table 3A). The total duration of antibiotic therapy was also shorter for subjects with low PCT levels (median, 2 days [IQR, 0-6 days] vs 7.5 days [IQR, 5-10 days]; P < .001; Table 3A). However, patients with high PCT values also had higher CURB-65 scores (median, 2 [IQR, 1-3] vs 1 [IQR, 0-2]; P = .001), and a greater percentage showed possible infiltrates on chest radiograph (40% vs 25%; P = .12). Additional subgroup analyses assessing the added value of viral testing to PCT algorithms were performed. Patients with a known viral diagnosis during hospitalization (64 in the intervention group and 28 in the nonintervention group) were less frequently discharged receiving antibiotics (28% vs 45%; P = .01), with a trend toward a shorter duration of therapy (median, 2 days [IQR, 1-6 days] vs 4 days [IQR, 1-8 days]; P = .07; Table 3B), compared with those without a viral diagnosis. We also investigated the possibility of a study effect on the nonintervention arm resulting from prestudy educational sessions and from subjects in both arms receiving care from the same providers. Primary outcomes for the nonintervention group were compared to those for matched historical control subjects who were hospitalized at RGH with LRTI during the 2008-2011 surveillance study and were deemed, using identical criteria, to be at low risk for bacterial infection [15]. There were no significant differences in demographic characteristics or admission diagnoses between these groups, but the duration of antibiotic therapy was 2 days shorter for the current nonintervention patients than for the historical controls (4.0 vs 6.0 days; P < .001; Table 2D).
Secondary Outcomes
Safety was assessed during the hospital stay and at 1 and 3 months.
No deaths occurred during hospitalization, and 4 SAEs occurred in each arm of the study, with none judged to be related to the intervention. One intervention patient and 4 nonintervention patients developed a new case of pneumonia within 30 days. The median length of hospital stay was 4 days in both groups, and the number of posthospitalization healthcare visits was similar. Of note, the length of illness (assessed from the onset of symptoms prior to hospitalization to the subject's report of a return to baseline) was shorter in intervention patients (median, 16 days [IQR, 12-24 days] vs 20 days [IQR, 13-28 days]; P = .03). There were 27 potential antibiotic adverse events (rash, gastrointestinal symptoms, fungal overgrowth, or resistant flora) in the intervention arm and 28 in the nonintervention arm. Three cases of Clostridium difficile colitis occurred in nonintervention patients, compared with none in the intervention group. At 3 months, there were no significant differences in any of the clinical safety outcomes for the intervention arm or the subgroup of algorithm-adherent intervention subjects, compared with nonintervention patients.
Healthcare Provider Adherence to the Algorithm
Overall, algorithm adherence was 64%, although providers were more likely to prescribe antibiotics for patients with high PCT values than to withhold antibiotics for patients with low PCT values (Figure 3). The majority of providers (77%) followed algorithm recommendations to continue antibiotic treatment when PCT values were high, although it was notable that antibiotic therapy was discontinued in 5 subjects with high PCT values. In contrast, providers followed algorithm recommendations to discontinue antibiotics in 61% of subjects for whom a low PCT value was detected. The only variable associated with algorithm nonadherence in subjects with low PCT values was an admission diagnosis of pneumonia (26% vs 7%; P = .01).
Figure 3. Provider response to the procalcitonin (PCT)-guided treatment algorithm. A circle represents an individual PCT value for each intervention study subject. The horizontal bar represents the threshold for PCT values (0.24 ng/mL), which defines levels as either low or high. The algorithm discourages antibiotic use below this threshold and recommends antibiotics for values above this level. Results are segregated by provider response to the algorithm and are designated as "algorithm followed" and "algorithm rejected."
DISCUSSION
This is the first study combining PCT measurement with molecular viral diagnostic testing, as well as the first randomized clinical trial performed solely in the United States that evaluated PCT-guided care for patients hospitalized with respiratory illnesses. Although overall antibiotic exposure was similar in the intervention and nonintervention arms, subgroup analyses and comparison with historical controls were encouraging because they suggested that US physicians will respond to viral and biomarker data to inform antibiotic use. The primary goal of our study was to evaluate the feasibility of conducting randomized clinical trials of PCT-guided antibiotic recommendations in the United States. Our data clearly indicate that such trials are likely to be well received, since complete enrollment was achieved 5 months early. Evidence is mounting that PCT-guided treatment of respiratory infections can safely reduce antibiotic use [10,[16][17][18][19][20]. European trials have been performed in a variety of settings, including emergency departments, primary care offices, hospitals, and ICUs [10,16,17,[19][20][21][22][23][24][25]. In all prior studies, antibiotic use was decreased without harm, as measured by composite adverse event outcomes. The largest trial to date was a Swiss multicenter randomized trial of PCT-guided therapy in 1359 hospitalized patients with LRTI [16]. A 35% decrease in overall antibiotic exposure was reported without notable differences in short-term or long-term outcomes.
Currently, use of PCT levels in the United States is only approved for management of sepsis [11,26]. The only US data on PCT-guided care for patients with LRTI come from the ProREAL study, an observational surveillance of antibiotic prescribing practices when PCT testing was made available in one US hospital, 10 Swiss centers, and 3 French centers [22,23]. For the 295 patients evaluated in the United States, adherence to the algorithm was 35%, which was significantly lower than the percentage among algorithm-experienced European centers, suggesting that experience increases confidence in and compliance with this approach. In contrast, studies examining the benefits of viral testing to reduce unnecessary antibiotic use have largely been observational [27][28][29][30][31][32]. The only randomized clinical trial to date was performed in children and demonstrated significantly less antibiotic use and diagnostic testing for children whose rapid influenza test results were made available to providers, compared with those whose information was withheld [33]. Although limited by the small sample size, our study suggests that testing for viral pathogens in addition to influenza virus may reduce antibiotic use. Because confidence in PCT-guided algorithms among US physicians will likely be an iterative process, we focused on patients with nonpneumonic LRTI who were at low risk for bacterial infection, using standard clinical parameters. Adherence to the PCT algorithm was 64%, which was encouraging, given the low US compliance rate in the study by Albrich et al [22,23]. However, despite relatively good compliance and a population enriched for subjects with a low risk for bacterial complications, we did not see significant differences in antibiotic use between the intervention and nonintervention arms. We suspect that this result was in part due to a significant study effect resulting in decreased antibiotic exposure in nonintervention patients. Supporting this conclusion was the overall decrease in the duration of antibiotic therapy for nonintervention subjects, compared with the duration for similar historical controls. The study effects were likely multifactorial, including the prestudy educational sessions, spillover care resulting from providers caring for patients in both arms, and increased awareness of viral activity in the community. Last, providers were also aware that their behavior was being observed, which may have resulted in a Hawthorne effect [34]. Our study had a number of limitations. Because the trial was designed as a feasibility pilot with a small sample size, definitive conclusions cannot be drawn from the subgroup analyses, particularly regarding the additive value of viral testing to PCT-guided therapy. Moreover, the study was not powered to determine noninferiority of PCT-guided care compared with standard treatment algorithms. However, like the European trials, we did not detect any major safety signals prohibitive to proceeding with larger clinical trials.
Second, repeat PCT measurements over several days during hospitalization were not performed and may be of added value to decrease antibiotic use. Additionally, routinely available hospital PCR may have diluted the effects of multiplex viral testing. Last, the use of historical controls to evaluate a study effect cannot address other changes in practice that may have occurred unrelated to the study. Nevertheless, the lessons learned from this study should facilitate the design of larger, definitive US trials. Although our study clearly demonstrates the feasibility of performing PCT trials in the United States, a number of factors need consideration. First, hospital stays for nonpneumonic LRTI tend to be brief, and thus the opportunity to intervene is limited. To maximize impact on antibiotic exposure, the intervention should begin early, ideally in the emergency department. Second, algorithm adherence needs to improve, although clinical judgment remains the cornerstone of good medical care, and 100% compliance should not be the goal. Future trials should therefore incorporate strategies to influence physician behavior, such as the use of antibiotic stewardship teams to reinforce PCT algorithm recommendations. Because a study effect appeared to be an important impediment to demonstrating a significant difference in antibiotic exposure or the independent effect of viral testing, future trials may need to use separate but comparable study sites for the intervention and nonintervention groups. Finally, once the safety of PCT-guided care in patients with LRTI at lower risk for bacterial infections has been established, its use should also be explored for subjects with pneumonic disease. In conclusion, despite the modest changes in antibiotic exposure observed in the primary analysis, the results of the subgroup analyses and comparison with historical controls are encouraging because they suggest that US physicians will respond to viral and biomarker data. Ultimately, further studies are needed to assess the value of viral testing and to determine the role of PCT-guided care as a practical and effective treatment protocol for patients admitted with nonpneumonic respiratory illnesses.
2016-05-12T22:15:10.714Z
2015-04-24T00:00:00.000
{ "year": 2015, "sha1": "0b004803f1a4688bfce9f28d13226f69403724f7", "oa_license": null, "oa_url": "https://academic.oup.com/jid/article-pdf/212/11/1692/9540732/jiv252.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "c8260c55e3605f2474cd9adb5fb722675f64bf6e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247792488
pes2o/s2orc
v3-fos-license
Medical students' views on the value of trigger warnings in education: A qualitative study
Abstract
Background Trigger warnings—advance notification of content so recipients may prepare for ensuing distress—feature in discussions in higher education. Students' expectations for warnings in some circumstances are recognised, and some educators and institutions have adopted their use. Medical education necessitates engagement with potentially distressing topics. Little is known about medical students' expectations regarding warnings in education.
Methods All students from a 4‐year graduate‐entry UK medical degree programme were contacted via digital message outlining study details and were openly sampled. Qualitative methodology was chosen to explore participant expectations, experiences and meanings derived from experiences. Students participated in semi‐structured interviews exploring perspectives on functions, benefits and drawbacks of trigger warnings in classroom‐based medical education. We analysed interview transcripts using thematic analysis.
Results Thirteen semi‐structured, qualitative interviews were undertaken. Themes in the following areas were identified: (1) students' experiences influence understanding of trauma and trigger warnings, (2) warnings as mediators of learning experiences, (3) professional responsibilities in learning, (4) exposure to content, (5) professional ethos in medical education and (6) how to issue trigger warnings. Students recognised the term 'trigger warning', and that warnings are an accommodation for those affected by trauma. Students' conceptualisation of warnings was influenced by personal experiences and peer interactions both within and outside education. Students expressed both support and concerns about use of warnings and their ability to influence learning, assumption of responsibility and professional development.
Discussion Diverse student opinions regarding warnings were identified. Most students suggested that warnings be used prior to topics concerning recognised traumas. Incremental exposure to distressing content was recommended. Students should be supported in managing their own vulnerabilities and needs, while also experiencing sufficient formative exposure to develop resilience. Greater understanding of trauma prevalence and impacts, and of the underpinnings of warnings, amongst students and educators is recommended to optimise education environments and professional development.
INTRODUCTION
Trigger warnings—prior notification allowing recipients to prepare for or avoid sensitive content and ensuing distress—are widely encountered in communications. 1,2 These advisories are considered to have originated online as an accommodation for survivors of sexual violence or other trauma who may experience symptoms of posttraumatic stress disorder (PTSD). 3,4 Although associated terminology may have changed, the practice and construct predate their online use. 3 Their use has been widely adopted in relation to diverse settings and topics. 2,5 Discussion continues about their role in education, with evidence of some students expecting warnings in relation to distressing topics. 6,7 In some cases, educators and institutions have shared these sentiments, voicing support and citing rationale for adoption of warnings in practice or policy, 8,9 including desire to curate inclusive learning environments.
5,10,11 Support has not been unanimous, with opposition to the construct, its underpinning principles and the use of warnings in education noted. 10 Concerns expressed include promotion of avoidance, 12 hypersensitisation of recipients 1,13,14 and censorship effects. 12,15 Despite routine use and relevance to classroom settings, the current literature regarding trigger warnings is largely derived from opinion pieces based on individual or few author perspectives, and rigorous academic evaluations or empirical evidence regarding trigger warnings remain lacking. 16 As an accommodation for affected individuals, trigger warnings and associated discussions have relevance to clinical education contexts, where discussions of trauma, suffering and inequalities are integral and commonplace. 17,18 Graduating professionals need to regularly encounter and manage these subjects, while maintaining their own well-being. Medical students' perspectives in this area remain relatively unexplored. Inquiry may provide insights into students' experiences of distressing content, how best to prepare graduates for managing distressing experiences and whether warnings may have a role. Without student consultation, educators risk maintaining inconsistent approaches, outwith a framework for practice, culminating in suboptimal learning environments. Sensitive or trauma-related subjects in medical education may hold personal relevance for students. [17][18][19] Experiences in medical education themselves have the potential to traumatise, irrespective of personal histories. 20,21 Although currently there is limited literature regarding secondary traumatisation in medical students, 20 there is substantial evidence regarding depression and burnout amongst medical students, [22][23][24] entities that are more prevalent amongst medical students than in general and other student populations. 22,23,25 These issues are further compounded by reported stigmatisation of medical students experiencing mental illness. 26,27 Incorporation of trigger warnings could promote accessibility by enabling reasonable accommodations for students with trauma histories or mental health difficulties 10,11 and signal that well-being is valued in the organisation and profession, as previously described by medical educators. 28 The diversity of medical student populations is increasing internationally, in response to measures to ensure representation of the served patient populations. 29,30 Significant increases are noted in the numbers of students admitted from educationally and socially disadvantaged backgrounds, as well as from ethnic minority groups and students experiencing disability, 31 all groups noted to experience a higher incidence of adversity. 17 Consideration of trauma-informed approaches to distressing content, including the use of warnings, appears increasingly justified and necessary in this context. 17,32 Given the established impacts of emotion 33 and PTSD 3 on learning, efforts to identify evidence for the efficacy of trigger warnings in the general education literature have explored impacts on arousal and distress. One experimental study identified that trigger warnings had 'trivial' effects on participants' ratings of negative material and distress symptoms, 14 while acknowledging that warnings may have other effects not assessed by their study. Bellett et al.'s larger replication study overturned their original finding that trigger warnings affect some domains of resilience, leading them to conclude that warnings are 'inert'.
13 They acknowledged that educators may use warnings for other reasons not explored in their study. Recruitment from non-traumatised populations limits generalisability of these findings. Results of each of these studies 13,14 were also limited by reliance on participant self-reporting of symptoms. Evidence regarding trigger warnings in medical education is limited. Our previous semi-structured interview study exploring the views and practice of medical educators identified that educators regularly employed warnings in classroom settings. 28 They cited various rationales beyond mitigating hyperarousal, as well as a number of concerns relating to use of warnings. A single study of medical student perspectives, a subsection of a larger survey, suggested warnings may have a role in teaching about trauma but did not establish clear consensus regarding support for warnings. 34 Survey methodology, however, precluded in-depth discussion. Despite the clear relevance of trauma-related content to medical student populations and experiences, and the impetus to consider the appropriateness of warnings in managing related impacts in education settings, medical student perspectives remain underexplored. This interview study aimed to explore medical students' experiences and perspectives regarding the role of trigger warnings in classroom-based medical education. Noting that experiences of traumatising content are personal to the individual, 17 we wished to broadly explore students' perspectives and constructs of trigger warnings, including experiences both within and outwith education. As trigger warnings may have pedagogical functions beyond preventing hyperarousal for individuals identifying as affected by trauma, we wished to explore perspectives of students identifying with varying personal experiences of adversity. We formulated the following research questions: Do medical students perceive value in the use of trigger warnings? What are medical students' perspectives on the function, benefits and drawbacks of trigger warnings in classroom-based medical education? | METHODS In this study of medical students' perspectives and expectations regarding trigger warnings, we wished to explore social behaviours and experiences, meanings derived from experiences and factors underlying expectations of an educational phenomenon; thus, a qualitative methodology was used. 35 Individual semi-structured interviews facilitated deeper discussion of more complex questions, allowing participants to share detailed accounts of experiences, interpretations and perspectives. 36 The study was approved by the University of Warwick Biomedical Research Ethics Committee. | Participants Students on a 4-year medical degree programme (MBChB) were recruited. Eligible participants needed to have completed at least 3 months of the programme, ensuring adequate experience of classroom-based teaching, including lectures, case-based learning and small group sessions. At the time of recruitment, the most junior students were nearing completion of the first year and so were eligible to participate. Additionally, senior students could offer reflections on the appropriateness of early teaching as preparation for clinical practice. Unlike previous studies in this area, we did not limit participation to individuals who identified as not having experienced past trauma. 13 Personal trauma history may influence perspectives and the value participants assign to warnings. 
However, we also recognised that trigger warnings may be viewed as serving a wider pedagogical function, including development of understanding of and empathy towards trauma amongst non-affected individuals, as suggested by previous studies 10,28,34 or, conversely, as impeding learning experiences; thus, all students on the 4-year programme were openly sampled, capturing diverse perspectives and their evolution through programme progression. Students were contacted via a digital message and provided with information outlining study details. The participant and recruitment information highlighted that responses would be anonymised and that decisions regarding participation had no bearing on academic progression. We stated that we did not intend to directly explore distressing personal experiences, but that participants might refer to previous experiences in responses. Participants were informed of steps that would be taken if at any stage they needed support, including discontinuing interviews and signposting to appropriate services. After addressing any questions, participants provided written consent to participate. | Context Our programme is atypical in UK medical education as a graduate-entry programme, compared with standard school-leaver entry. Degree holders from any academic background are admitted. These criteria widen participation in medical education by traditionally underrepresented groups and in relation to student sociodemographic profile. Students are older, come from more varied backgrounds and have greater life experience. As an accelerated programme, ours is intensive and academically demanding; thus, student well-being is emphasised in curricular development and delivery. Guidelines for practice in relation to warnings have been developed based on student feedback and disseminated amongst educators, and discipline teams have in some cases determined best approaches in their subject area. However, there is not currently an overarching institutional or departmental mandated policy on use of trigger warnings. | Data collection: Interviews We developed interview questions aligned with our overarching research questions. As this study builds on a previous study of educators' views in this programme setting, 28 those findings, in addition to existing literature and researcher discussion, influenced its focus. Nonetheless, we wished to identify novel, unanticipated areas of priority for student participants. A semi-structured interview approach with open-ended questions was adopted to enable discussion of participants' experiences, with clarification and probing to qualify responses. The interview guide (Appendix S1) provides details of the questions. The study was conducted during the COVID-19 pandemic. In the previous study of this subject with medical educators, the term trigger warning was not used initially, in order to accurately explore educators' use of warnings. As students may have experienced personally impactful content, and due to the perceived power differential between students and faculty, it was ethically imperative to highlight the intended discussion of trigger warnings and related circumstances during recruitment. Further, it was anticipated that students would readily recognise the term as it had been noted in student feedback. Students had an opportunity to clarify views on what constituted a trigger warning. The interview guide was structured to initially explore participants' experiences of trigger warnings in day-to-day life and then in classroom-based education. 
This initial, general approach sought to establish general familiarity with the subject in a less personalised way, before moving to the possibly more contentious subject of warnings in education. Semi-structured interviews allowed an approach that was often more fluid, enabling participants to freely discuss experiences and areas of greater priority to them. After initial interviews, a further question exploring previously unanticipated areas was added. | Data analysis Thematic analysis was used to identify, analyse, organise, describe and report themes within the dataset. Here, a theme is a notable feature of the data that relates to the research questions and adds meaning. 37 Thematic analysis was chosen to assess participants' perspectives and identify similarities and differences in responses and unanticipated perspectives, creating a rich account of the data. 38 HN transcribed audio recordings verbatim. We both read and reread transcripts for immersion and familiarisation. Analysis of initial interviews commenced as further interviews progressed. Any striking features and patterns noted in the data during collection, transcription and analysis were recorded and discussed. These were then incorporated in identifying preliminary codes. Further codes were identified, developing a full coding framework. HN iteratively coded all transcripts using NVivo 12. LR read all interview transcripts to triangulate and establish agreement. As new codes were identified, these were applied to previously coded transcripts. We reached consensus on suggested codes and how these were assigned to data. All data assigned a particular code were collated. The complete codebook was reviewed, searching for relationships and patterns, thereby identifying themes inductively. Diagrams were used to organise themes and subthemes. Proposed themes were compared with the dataset, confirming key findings had been reported. Titles of themes were then revised, ensuring appropriateness. Detailed notes were maintained throughout analysis, beginning in the transcription phase. Features and patterns in the dataset and codes, and the development and hierarchies of themes, were documented in a reflexive diary, providing an audit trail. | Reflexivity, positionality We both have leadership roles with oversight of student feedback and programme enhancement, including issues of accessibility and duty of care. LR has a senior educational leadership position and is known to students. HN has a quality leadership role and is directly known to fewer students. While involved in curriculum development, we are not substantively involved in programme delivery. Noting that roles may impact students' willingness to share experiences, HN conducted all interviews. Regular researcher meetings occurred throughout the study, discussing reflexive notes and observations. Discrepancies in analysis, interpretation and reporting were considered, including regarding coding and theme titles. We maintained awareness of potential bias during analysis and actively sought evidence of contrary views, ensuring that our individual perspectives did not disproportionately influence interpretation or reporting. | RESULTS We conducted 13 semi-structured qualitative interviews with students (six males and seven females). Data collection occurred between June and October 2021. Average interview duration was 65 min (range 47-101 min). 
Participants from various backgrounds and from each of the four programme years participated (Table 1), sharing diverse perspectives and experiences and providing data relevant to the research questions and unanticipated areas. Noting the quality of dialogue and the variety of participant perspectives and insights shared, we identified that we had appropriate data to address the research questions after completion of 13 interviews. 39 In addressing the research questions exploring student perceptions of the value, function, benefits and drawbacks of trigger warnings, six thematic areas were identified, shown in Table 2. Quotes are presented by participant number and year of study. Support for warnings varied; individuals who identified a personal need for warnings were exclusively supportive of use, whereas individuals who did not identify their own need fell into two groups of supporters or opponents (fully or in part). There was also recognition that beyond specific topic areas, trauma was not uncommon and that this may be associated with increasing self-awareness and emotion recognition. "My generation are known as the snowflake generation." | Responsibilities in learning Participants frequently discussed educators' and students' professional responsibilities in the educational process, which included respective responsibilities to provide and engage with content that could be distressing. There was a range of views in relation to expectations of individuals and where the balance of responsibility lay within that relationship. A small number of participants appeared to view the duty of care and accountability for student well-being as predominantly the remit of the medical school, with students being recipients of education. There was value in students learning to tolerate negative emotional reactions that would be relevant to experiences such as professional failure or poor patient outcomes. Others similarly discussed the need for students' self-awareness and self-management of vulnerabilities. However, they did not view warnings as being antagonistic to students assuming responsibility and developing self-awareness. Where warnings were provided as overview statements of content, these fulfilled the function of informing recipients, prompting them to reflect and determine their readiness or need for support. A gradual approach to withdrawal of warnings could scaffold students' self-awareness and self-reliance, allowing development and assumption of responsibility over time and enabling self-identification and action planning regarding sensitivities. "There's a certain element of autonomy and self- …" The classroom was primarily associated with learning, and students identified their personal learning as being of greatest priority. It was therefore expected to be a safe, supportive environment. Where exposure to distressing content featured, warnings could allow students to prepare themselves. "Within an education setting … you are in an environment which is supposed to be nurturing and supporting you as a student. When you are in placement … it's a professional capacity … When you are sitting in a classroom, you are expecting to … be able to think … reflect." | How to provide warnings Participants discussed the principle of providing advance notification and how warnings were given. Some students who expressed limited support for warnings and their underlying principles conceded that use may be appropriate in limited circumstances related to recognised traumas. 
Wide variation was noted regarding the most appropriate way to do this and the primary rationale. Several participants expressed disapproval of the term due to connotations associated with trigger warnings: the expanded use to encompass content of widely varying severity and its use as a tool that could permit avoidance or stifle debate. Others, noting that warnings presupposed or could precipitate harmful responses, proposed alternative terminology such as content statements, providing objective information. This measure, coupled with signposts to supports, delegated responsibility to students as partners in the education process to consider their own needs. Where someone just says, "we are going to be discussing this today" … so it does not really come as much of a surprise. "If you find this content upsetting then please feel free to step out of the room and rejoin when you feel comfortable". I think that's a bit more sensitive to people … giving them information and ground rules, as opposed to just announcing, "Trigger warning!", because it's … a thing that needs to be done. (P1, year 2) Others expressed an expectation for directive warnings in some circumstances, demonstrating educators' compassionate acknowledgement of the impact of content on students. Some participants did not feel that regular forewarnings were a justifiable expectation, preferring to be informed during induction regarding the breadth and nature of content that would be encountered and how to access support, if required. | DISCUSSION This study sought to explore medical students' experiences, perspectives and priorities regarding the use and value of trigger warnings in classroom-based education. In contrast to our previous study with educators, all participants readily recognised the term from contexts including media, therapeutic and educational settings. In keeping with previous findings, arguments both in favour of and against use of warnings were presented. 3,28,34,40 Opponents expressed concerns that use of warnings made inferences regarding students' limited coping abilities. Others cited drawbacks similar to those identified by educators, 28 including disrupting the flow of discussion and hypersensitising recipients. Warnings could be used primarily as a defence against anticipated complaints. Although warnings were a concession required by a minority, they were imposed on all students. Some suggested that enrolment in medical education implied readiness to encounter routine content and that the need for warnings was overstated, implying trauma was not a concern for this population. More participants recognised a role for warnings, due to personal need or as reasonable concessions for others. Trauma-related content was recognised as causing adverse emotional reactions that impede learning, which could be ameliorated by advance warnings. The role of warnings appeared to have been debated amongst some students, leading to occasional discordance and polarisation in views. A key aspect of contention surrounding warnings has been whether they promote avoidance of distressing content or facilitate engagement. 5 Participants discussed responsibility for learning and managing individual sensitivities in education settings. Students, as developing professionals, recognised the realities of medical education and the requirement for self-awareness in managing their own needs. 42 A duty to acknowledge and take responsibility for this was noted. 
Elsewhere, evidence was noted of students identifying these responsibilities as lying with educators, suggesting an external locus of control that may be at odds with readiness for self-directed learning. 43,44 Warnings could thereby contribute to disempowerment. Others highlighted that educators typically controlled session content, further contributing to the power differential between educator and student and placing a responsibility on educators to consider recipients' needs. 5 In undertaking a journey to professionalisation, students commenced in a novice role. Models of self-directed learning highlight that learners progress through different stages of self-directedness and that self-directedness traits can be acquired, nurtured and developed. 45 Professional preparedness could be acquired through incremental, scaffolded approaches, where warnings accommodated individual needs. It has previously been positioned that future professional responsibilities should not be prioritised over students' own current needs 28 and that self-care should be enshrined in professionalisation. 46,47 Participants who identified a personal need for warnings desired compassion and acknowledgement from educators and that classrooms be preserved as nurturing environments, as indicated by differential expectations in classrooms and clinical environments. Where practice conveyed educators' consideration towards students, this demonstrated acceptance of students' needs and circumstances, a suggestion also proposed by educators. 28 Professional identities are influenced by the culture of learning environments and processes of socialisation. [48][49][50] Caring attributes, expected in future professionals, should be upheld and role-modelled in these contexts. 49 Participants explained how educators' treatment of trauma-related content was indicative of organisational attitudes and professional norms. These experiences impacted sense of belonging and identity and relationships with peers and the organisation, factors noted as inherent to professional identity formation. 49 These hidden curriculum experiences echoed those of students from previously underrepresented backgrounds. 51 Intersections of the experiences of traditionally underrepresented students, including minority group experiences, power hierarchies and social inequalities, with trauma provide further impetus for consideration of trauma-informed approaches, 17 including content warnings, for increasingly diverse student cohorts. Participants shared perspectives regarding personal growth arising from exposure to challenge and stress. An optimal degree of stress ('eustress') can be motivational and performance enhancing. 52,53 Trauma, a related but distinct experience, characterised as more severe and with persisting adverse effects, 54 and positioned further along this stress spectrum, was also discussed. Stress and trauma are opportune areas to explore in medical education, an experience widely acknowledged as intensive and having the potential to harm practitioner well-being. 20,21 Trauma-informed medical education advocates for integration of trauma-informed approaches in curricular development, delivery and learning environments. 
17,19 This includes teaching about the science underpinning trauma and its effects, acknowledgement of the potential impacts of trauma-related content on students, and accommodation for this through use of content overviews and advance warnings, thereby promoting understanding amongst students and educators, 17 a recommendation shared by some participants. Greater evidence-based understanding of stress and trauma, as both affordances and hazards, may be achieved through use of warnings, promoting empathy and avoiding marginalisation of either supporters or opponents. Considerable variation in views was noted regarding the primary intended purpose of warnings and the best way to provide these. 55 The choice of emotion regulation strategy impacts academic outcomes when employed by students. 56 Regulation by cognitive or arousal reappraisal, which aims to change the type of stress response, encourages individuals to reconceptualise stress as a coping tool. 57 This technique has been explored in both therapeutic contexts, 58 resulting in decreased PTSD symptom severity, 59 and academic contexts, 41 showing effectiveness in improving student outcomes. 60 Compared with other regulation strategies, specifically suppression, cognitive reappraisal was associated with lesser symptom severity in PTSD 58 and lower levels of academic burnout. 56 | LIMITATIONS The study was conducted at a single UK graduate-entry medical school, which may limit generalisability of the findings to school-leaver entry populations. However, typical, younger school-leaver entrants may not have had sufficient experience to develop professional maturity and resilience, meaning issues of vicarious traumatisation are pertinent. Furthermore, diversity in graduate-entry populations, with students from traditionally underrepresented backgrounds, enhances representativeness, meaning that a diverse population was sampled. Experiences of adversity and trauma may be more likely in this population, allowing these participants to provide richer, more nuanced insights. Unlike previous studies regarding trigger warnings, we did not limit participation to individuals identifying as having no past history of trauma. 13 Arising from ethical considerations, we did not explicitly inquire about individual trauma histories but facilitated discussion of such experiences when volunteered by participants. Absence of trauma history categorisation of participants may be considered a limitation. However, noting suggested broader pedagogical functions or drawbacks, 10,28 and the pervasiveness of trauma in medical education and practice, 17,21 we recognised the relevance of trigger warnings to all students. Students with more severe traumatic histories may have been reluctant to participate, despite assurances in the recruitment information regarding confidentiality and that past traumatic experiences would not be explicitly explored by the researcher. Further, both researchers have education leadership roles at the study setting and were aware these roles could lead to student reticence to participate or discuss experiences. However, we captured a variety of participant perspectives in relation to the research questions, including reflections on both trauma and the resilience required of clinicians. Participants also shared experiences of trigger warnings in both therapeutic and educational contexts. These two points provided assurance of an adequate sample. Expanding the study to additional settings and increasing the sample size would capture further perspectives and enhance generalisability of findings. 
ACKNOWLEDGMENTS We wish to thank the students at WMS MB ChB who shared their time and ideas in participating in this study. We also wish to thank the reviewers for their constructive comments and suggestions. CONFLICTS OF INTEREST We declare that we have no competing interests. AUTHOR CONTRIBUTIONS HN conceived and developed the idea for the project, interviewed participants, analysed data and identified themes. HN drafted the early versions of the manuscript and made subsequent critical revisions for important intellectual content. LR developed the idea for the project, analysed data and identified themes. LR reviewed the early versions of the manuscript and made substantial contributions to the content and direction of the manuscript. Both authors approve the final version and agree to be accountable for all aspects of the work, including questions related to the accuracy or integrity of the work. ETHICS STATEMENT This study was reviewed and approved by the University of Warwick Biomedical Research Ethics Committee.
2022-03-31T06:22:57.178Z
2022-03-30T00:00:00.000
{ "year": 2022, "sha1": "b99e5df7896fe8799fea00d2020528c95ce14349", "oa_license": "CCBY", "oa_url": "http://wrap.warwick.ac.uk/164448/7/WRAP-Medical-students-views-value-trigger-warnings-education-study-2022.pdf", "oa_status": "GREEN", "pdf_src": "Wiley", "pdf_hash": "88deb551aa8e9d8072b5a2e21f552f677bcc7fd8", "s2fieldsofstudy": [ "Medicine", "Education" ], "extfieldsofstudy": [ "Medicine" ] }
225403443
pes2o/s2orc
v3-fos-license
Elision of the Lateral Sound Sun Laam in Definite Article in Arabic (AL) Abstract—This study investigates what kind of sound change occurs to the lateral sound (sun laam) before the coronal sounds of Arabic (/∫/, /ð/, /ðˤ/, /ṣ/, /s/, /ḍ/, /d/, /n/, /ẓ/, /z/, /θ/, /ṭ/, /t/, and /r/); the extent to which the coronal and the vowel sound cause the elision of the lateral sound; and whether the elision of the sun laam is the main indicator of gemination of the coronal sound. The sample of the study is a list of Arabic words containing a coronal sound of Arabic initially and preceded by a definite article. The significance of this study lies in showing the benefit of describing and analyzing the distinctive features of adjacent sounds within connected speech in order to find out what exactly causes changes to a phoneme in such speech. A descriptive analytic approach is used to describe the distinctive features of the sun laam and the coronal sounds, as well as to analyze the linguistic environment (the sound pattern including the definite article /ال/ /al/ before the coronal sound). The most important results are that the sun laam is completely elided before the coronal sounds, and that the elision of the sun laam and the intensity of the vowel sound shape the gemination of the coronal sound. I. INTRODUCTION In Arabic, pronunciation often conforms to the spelling, but in connected speech, one sound may influence or be influenced by the preceding or following sounds. Some oral sounds are changed to nasal sounds because of /n/ or /m/ occurring before or after the oral sounds. A voiced sound is changed to voiceless, or a tense sound is changed to a lax sound. These changes occur to these sounds because they lose some of their phonological distinctive features. This phenomenon is also common in English and is called by different terms depending on the nature of the sound change, such as assimilation and elision (Ofulue et al., 2010). Assimilation is a phonological process in which one of two neighbouring sounds is changed to the other because of some degree of similarity between their distinctive features. Elision is viewed by Roach (2009) as a phonological process that leads to producing zero vowel or consonant sounds. Rishidi and Shokrollahi (2010) report that the occurrence of elision is conditioned by the intervocalic position, coda position, and final position. Elision involves blocking or fortition environments of word-initial position or the onset of stressed syllables, where a consonantal change increases the degree of stricture. A. Research Problem Many Arabic studies have indicated that the lateral sound sun laam in the definite article in Arabic is completely assimilated by the coronal sounds. After thoroughly reviewing these studies, many questions can be raised about this phenomenon. The answers to these questions may show what kind of sound changes occur to the sun laam in the definite article and what motivates speakers of Arabic to change these sounds. B. Research Questions In the light of the above results from different related studies, many questions must be addressed to find the reasons why the sun laam is changed before the coronal sounds. (1) Why is the sun laam in the Arabic definite article al /ال/ not assimilated by a coronal sound? (2) Why is the sun laam in the article al /ال/ elided before a coronal sound? (3) Do the stressed coronal consonants of Arabic after the sun laam provide any essential evidence that the sun laam is elided before them? II. 
REVIEW OF RELATED LITERATURE The sun laam sound /l/ in the definite article /ال/ in Arabic has two different pronunciations: one prominent and one assimilated by the neighbouring coronal sounds. According to ALmusawee (2007), the sound /l/ will be assimilated if the sun laam sound /l/ in the definite article /al/ precedes one of the coronal sounds (/∫/, /ð/, /ðˤ/, /ṣ/, /s/, /ḍ/, /d/, /n/, /ẓ/, /z/, /θ/, /ṭ/, /t/, and /r/). This occurs because both sounds are very close in articulation. Here, the laam is called the sun laam. Hall (1997) claims that such assimilation happens completely when the sun laam is before one of the coronal consonants. On the other hand, Heselwood and Watson (2013) reject this kind of assimilation of the sun laam before the coronal consonants of Arabic. To support their claim, they use illustrative acoustic and electropalatographic data. The results show that there is no evidence that Arabic speakers assimilate the sun laam to /z/, as in al-zaffa. They conclude that the articulation of the definite article before a coronal sound does not contain the sun laam /l/, claiming that the existence of the sun laam /l/ is orthographically based and that it is risky to base any phonological analysis on orthographic evidence. Interestingly, they argue that the sun laam is not assimilated by the following coronal consonant. They conclude that the sun laam sound is elided and is not pronounced when followed by coronal consonants. They claim that the stress that occurs in a definite article plus a coronal consonant is not the result of simultaneous assimilation and that such consonants should be considered 'true' geminates, not assimilatory geminates. Many questions can be addressed after reviewing the different conclusions of ALmusawee (2007) and Hall (1997). They do not base their claims on phonological analysis but on Sibawayh's assumption, which states that in the production of coronal sounds, one or both of the two rims of the tongue glide and touch the point of articulation of the sun laam /l/ and mix with it. This phonological description neglects the different distinctive features of both the coronal and the sun laam sounds of Arabic. The two rims of the tongue take different shapes while articulating either the sun laam or the different coronal sounds of Arabic. Heselwood and Watson (2013) reject the assimilation of the sun laam because their study shows that there is no sun laam before the articulation of a coronal sound. They confirm that the sun laam in the definite article /ال/ is assimilated by the coronal sound in the word alzam /alzam/ ('most necessary'), as well as the optional assimilation of word-final /l/ to word-initial /r/ in ḥabil rafī ('a thin rope'). III. METHODOLOGY The sample examined in this study is a list of words that start with coronal sounds and are preceded by the definite Arabic article al (/ال/). The study procedures are mainly based on the description of the distinctive features of the sound pattern /al/ + coronal sound, both individually and in the articulation of connected speech. All conclusions are obtained by means of description, comparison and analysis (a descriptive analytical method). 
To answer the research questions, a descriptive analytic approach must be used to: • Describe the distinctive features of the lateral sound, the sun laam /l/, and the coronal sounds, to determine whether the lateral sound shares some of the distinctive features of the coronal sounds • Compare the distinctive features of the coronal sounds of Arabic to the distinctive features of the lateral sound sun laam /l/, so as to find out what makes speakers change the sound before the coronal sounds of Arabic • Describe the components of the linguistic environment where the sound /l/ is changed from the sun laam /l/ to another sound • Describe the articulation of each sound pattern including the definite article al /ال/ + a coronal sound in connected speech • Analyse the connected articulation of the whole sound pattern including the definite article al /ال/ + the coronal sound, to describe how the sun laam changes the articulation of the whole pattern • Introduce the precise causes that lead the sun laam to lose its quality in its connected articulation in the sound pattern of definite article + coronal sound IV. RESULTS AND DISCUSSION Assimilation is defined as the replacement of a sound by an adjacent sound because of the degree of similarity between them. To arrive at a concrete answer, we should review the different distinctive features of the sounds of al and of the coronal sounds individually. Furthermore, we must examine the different distinctive features of the sun laam in al /ال/ to know whether there is similarity between them or not and to verify or reject the claim that /l/ is completely assimilated by the succeeding coronal sound. A. Description, Analysis, and Discussion of the Result of Question 1 Table 1 presents the different distinctive features of the coronal sounds and the lateral sound in Arabic (the sun laam). The table shows that the coronal sounds /t/ and /ṭ/ are alveolar stops, as are /d/ and /ḍ/. /s/ and /ṣ/ are alveolar fricatives. /ʃ/ is palato-alveolar. /z/ and /ẓ/ are alveolar fricatives. /θ/ and /ð/ are dental fricatives, and /n/ is an alveolar nasal. The first group of sounds shows no similarity with the sun laam in manner of articulation, place of articulation, and voicing features. The second group comprises the sun laam /l/ and /r/, which are alveolar and lateral. The sun laam /l/ and /r/ show similarity in their place, manner, and voicing features, so the first kind of assimilation of the sun laam is caused by /r/, because this coronal sound has the same manner of articulation as the lateral sound /l/. The comparison shows big differences between the distinctive features of the first group of coronal sounds and the second group of coronal sounds (/l/ and /r/). Therefore, if the sun laam sound /l/ in the definite article /al/ /ال/ is followed by a coronal sound, it will not be assimilated by it in connected speech, because assimilation occurs when two sounds have the same manner of articulation. Examples include the Arabic words altamr ('date'), alteir ('birds'), aldeek ('rooster'), aldhb ('lizard'), alsama ('sky'), alsaber ('patience'), alshahd ('honey'), althaman ('price'), alzaman ('time'), althib ('wolf'), and aldharf ('circumstance'). 
These results are in agreement with those of Heselwood and Watson (2013) in that no assimilation occurs for the sun laam /l/ in the definite article before a coronal sound. However, the sun laam /l/ is assimilated by the coronal sound /r/ only, because the two sounds have the same place and manner of articulation, as in the example used by Heselwood and Watson (2013), ḥabil rafī ('a thin rope'). Heselwood and Watson (2013) claimed that the sun laam sound /l/ disappears when followed by a coronal sound in Arabic. It seems that a complete elision occurs to the sun laam before the coronal sounds. The question arises of what causes the sun laam to disappear before the coronal sounds. The cause does not concern the orthographic system of Arabic, because Arabic spelling largely conforms to Arabic pronunciation. To arrive at concrete causes, the sound patterns of the sun laam + the coronal sounds in the definite article in Arabic must be analysed. The phonological analysis focuses on the distinctive features of the sun laam and the coronal sound. The following table shows the environment and circumstances where the sun laam is elided before the coronal alveolar stop sounds. These results are not confirmed by those of ALmusawee (2007), who states that the sun laam is assimilated by the coronal sound because it is the sound adjacent to /l/. B. Description and Analysis of the Result of Question 2 The following table presents a description of the connected articulation of the sound pattern /al/ + coronal sound. It also shows how the sun laam /l/ loses its distinctive features when preceding a coronal sound and how the coronal sounds influence the preceding /l/ regressively. Referring to Table 2, in the patterns /al+t/, /al+ṭ/, /al+d/, and /al+ḍ/, the sounds /t/, /ṭ/, /d/, and /ḍ/ are described as alveolar stops, and the sun laam /l/ is a lateral, alveolar, voiced sound. In the production of these patterns, the tip of the tongue touches the alveolar ridge firmly; here, the air stream either escapes through one or both of the openings made between the side rims of the tongue and the upper molars, producing the sun laam, or it is completely blocked by the closure at the alveolar ridge before the expected sudden release that produces the stop sounds /t/, /ṭ/, /d/, and /ḍ/. If the air stream escapes through the opening between the upper molars and the side rims of the tongue, the vocal tract will produce /l/, and there must then be a vowel sound before producing the stop sound, because the production of the stop sound requires a great amount of air to be blocked behind the closure. To articulate the alveolar lateral and the alveolar stop in sequence, /i/ would have to be inserted. This kind of insertion may lead to a change in the meaning of the word or to a strange word. To explain the avoidance of such a vowel insertion between the sun laam /l/ and the stop sounds, we need to compare the resistance feature (the force of articulation) of the sun laam /l/ and the stop sounds. Stop sounds need a great amount of air and muscular tension to block the air behind the closure for the sudden release (explosion), while the lateral sound does not need such energy to release air through the opening made by the upper molars and the side rims of the tongue. Thus, in terms of the force of articulation, priority in sound production is given to the stops, and the lateral sound is elided to avoid inserting any kind of vowel sound. 
The following table shows the linguistic environment where the sun laam is changed because of the succeeding sound in an Arabic word that starts with the definite article followed by /s/, /z/, /ð/, or /θ/. For the production of the sun laam before /s/, /z/, /ð/, or /θ/, the tip of the tongue touches the alveolar ridge firmly, making openings between the upper molars and the two side rims of the tongue, and the air stream escapes through them. The coronal sounds /s/, /z/, /ð/, and /θ/ are described as fricatives, and during their production the tip of the tongue comes into contact either with the alveolar ridge, making a narrow passage for the production of /s/, /z/, and /ð/, or with the upper teeth to produce /θ/. If the air stream escapes through these narrow passages, the sound /l/ must be elided, because the sounds /s/, /z/, /ð/, and /θ/ are more fortis in articulation and are parts of words, while al /ال/ is considered a prefix. The tip of the tongue plays an essential role in the production of the sun laam and the coronal sounds /s/, /z/, /ð/, and /θ/. The problem is that if the air stream escapes through the opening for producing the sun laam /l/, the speaker is not immediately able to produce the sun laam + one of the coronal sounds unless /i/ is inserted between the consonants (the sun laam /l/ + coronal sound). This insertion creates another phonological problem. The sun laam in this combination of sounds must be elided to ease the pronunciation of the succeeding sound. As reported by Rishidi and Shokrollahi (2010), the entire linguistic environment where the sun laam is elided is conditioned by the intervocalic position, 'Al + coronal = (a) V + elided /l/ + coronal consonant' (a blocking or fortition environment). Consider the word 'altamar' ('date') in the presence and absence of the sun laam, to determine the extent to which the elision of the sun laam influences gemination of the coronal sounds. If a speaker of Arabic produces the sun laam, the sound /i/ is inserted to ease the pronunciation of the two consonant sounds (/l/ + coronal sound). The pronunciation of the pattern a+l+t in altamar will be ALtimer. This occurs because in Arabic phonology there is a rule that governs this sequence of two consonants by inserting the front half-close vowel /i/, as mentioned before, and such an insertion leads to a change in the meaning of the word. Is there evidence that the sun laam is elided before them? The condition of the coronal sound /t/ after the elision of the sun laam is that the coronal sound /t/ is under the influence of the vowel sound. The sound pattern a+Ø (elided sound)+t was analysed using the following table. In the production of a vowel sound, the air stream passes freely in the oral cavity without any obstruction, but this amount of air passes without escaping through the opening made by the rims of the tongue and the upper molars, because the lateral sound is elided. The immediate blockage made at the alveolar ridge already blocks an amount of air. The extension of the escaping air used in producing the vowel /a/ before the elided sun laam /l/ adds force of articulation to the stop sound /t/ and makes it more intense than the articulation of a normal stop. The result of this analysis is confirmed by O'Leary (1963), who states that the elided sound loses its diacritical mark and becomes 'silent', and the second sound (the coronal sound) becomes geminated.
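To make the rule environment explicit, the small sketch below applies the surface pattern described above: the article /al/ before a coronal ("sun") onset surfaces with the /l/ elided and the coronal geminated; before /r/ the outcome is the same geminate, although the paper treats that case as assimilation; before non-coronal ("moon") onsets the /l/ is pronounced. This is an illustrative toy, and the transliteration symbols used for the coronal set are our assumptions rather than the paper's transcription system.

```python
# Minimal sketch (not from the paper) of the surface forms discussed above.
# Transliteration choices for the emphatic and interdental coronals are assumptions.

CORONALS = {"sh", "dh", "DH", "S", "s", "D", "d", "n", "Z", "z", "th", "T", "t", "r"}

def definite_form(stem: str, onset: str) -> str:
    """Return the surface form of /al/ + stem, where `onset` is the stem-initial consonant."""
    if onset in CORONALS:
        # /l/ is elided (assimilated in the case of /r/); the coronal onset is geminated.
        return "a" + onset + stem          # e.g. a + t + tamr -> "attamr"
    return "al" + stem                      # moon letters keep the /l/, e.g. "alqamar"

if __name__ == "__main__":
    print(definite_form("tamr", "t"))      # attamr  ('the dates')
    print(definite_form("rajul", "r"))     # arrajul ('the man')
    print(definite_form("qamar", "q"))     # alqamar ('the moon')
```

Running it on tamr, rajul and qamar yields attamr, arrajul and alqamar, matching the geminate and non-geminate outcomes discussed in the analysis. V. 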
CONCLUSION The main findings of the study are as follows: (1) The sun laam is assimilated by the coronal sound /r/ because the two sounds have the same manner and place of articulation, as well as being located as adjacent sounds in the pattern /alrajul/ ('man'). (2) The disappearance of the sun laam in the definite article of Arabic before an Arabic word that starts with one of the coronal sounds /∫/, /ð/, /ðˤ/, /ṣ/, /s/, /ḍ/, /d/, /n/, /ẓ/, /z/, /θ/, /ṭ/, or /t/ is not a case of assimilation. (3) The loss of the sun laam in the definite article of Arabic before words that start with one of the coronal sounds is a process of sound elision. This occurs because of the influence of the distinctive features of the coronal sound following the sun laam, which are stronger than those of the sun laam; an Arabic speaker needs to use more muscular tension to pronounce them. (4) The geminated coronal sound appears after the elision of the sun laam in the definite article because the extension of the vowel sound's intensity adds more force to the coronal sound. Furthermore, it makes the sound more intense (the geminated sound) than the normal coronal sound.
2020-07-30T02:05:25.883Z
2020-08-01T00:00:00.000
{ "year": 2020, "sha1": "ac368e02a15685bf80d46b70781fb612c2f83d13", "oa_license": null, "oa_url": "https://doi.org/10.17507/tpls.1008.04", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "354cecf2a3b9c33f1803e6ea2ce1513c941c7e8a", "s2fieldsofstudy": [ "Linguistics" ], "extfieldsofstudy": [ "Mathematics" ] }
15784220
pes2o/s2orc
v3-fos-license
Comparisons of Modeling and State of Charge Estimation for Lithium-ion Battery Based on Fractional Order and Integral Order Methods In order to properly manage the lithium-ion batteries of electric vehicles (EVs), it is essential to build a battery model and estimate the state of charge (SOC). In this paper, the fractional order forms of the Thevenin and partnership for a new generation of vehicles (PNGV) models are built, and their model parameters, including the fractional orders and the corresponding resistance and capacitance values, are simultaneously identified based on a genetic algorithm (GA). The relationships between the different model parameters and SOC are established and analyzed. The calculation precisions of the fractional order model (FOM) and integral order model (IOM) are validated and compared under hybrid test cycles. Finally, an extended Kalman filter (EKF) is employed to estimate the SOC based on the different models. The results prove that the FOMs can simulate the output voltage more accurately and that the fractional order EKF (FOEKF) can estimate the SOC more precisely under dynamic conditions. Introduction Currently, lithium-ion batteries have attracted substantial attention due to their high safety, long life, and high energy density [1,2]. They have become the main energy storage medium in electric vehicles (EVs), power grids and consumer electronic devices. In order to safely and effectively utilize lithium-ion batteries, it is necessary to build a precise model to capture the battery's inner dynamic and static performance, thereby precisely estimating battery statuses including the state of charge (SOC) and state of health (SOH). The lithium-ion battery can be regarded as a highly nonlinear, time-variant system, which makes it difficult to model and simulate the battery performance. The commonly accepted modeling methods include the electrochemical model method [3][4][5][6], the black box model method [7,8], and the equivalent circuit model (ECM) method [9][10][11]. Among all the candidates, the electrochemical model can precisely describe the inner chemical reaction process based on the measured values of battery current, voltage and surface temperature. The black box model usually adopts an artificial neural network (ANN) or support vector machine (SVM) to generate the nonlinear relationship with respect to the measurements. This kind of method relies on a large amount of experimental data to train the model sufficiently and thus ensure its precision and adaptability. It therefore requires considerable experimental and computational effort, which is not practical in real implementations. Compared with these two model methods, an ECM is usually composed of an open circuit voltage (OCV) source, one or two resistor-capacitor networks, and a resistor connected in a series topology. This kind of model can be easily adapted to different driving cycles, and it has been widely adopted in the battery management systems (BMSs) of EVs. 
The order of the ECM may vary due to the different chemical characteristics of battery materials. In [9,12], different structures of ECM are compared; these studies state that the first order Thevenin model is relatively simple and has high precision, and that second and higher order models are more precise at the cost of more parameters and higher dimensional matrix calculations. By contrast, the partnership for a new generation of vehicles (PNGV) model [13], which is based on the Thevenin model, adds a capacitor in series to express the variation of OCV induced by the accumulation of load current, thereby improving the model precision. Based on the above merits, these two kinds of models, i.e., the Thevenin and PNGV models, have been widely employed in the BMS. Traditionally, the widely adopted modeling method for the lithium-ion battery is based on centralized (lumped) integral order calculus (IOC). This method is relatively simple; however, the battery's inner parameters, including capacitance and resistance, have diffused and decentralized characteristics [6,14,15] and may vary during battery operation. In addition, the variation of current and voltage not only relates to the current status but can also be influenced by past statuses. This is the so-called memory effect [16]. Therefore, it is quite difficult for the traditional IOC model to accurately depict the battery's inner distributed performance and memory effect. The fractional order model (FOM) extends the integral order model (IOM) and can more precisely describe gradually varying quantities and distributed parameters. Hence, FOMs have been more and more widely applied in modeling [15,16] and controlling nonlinear systems [17][18][19], including the lithium-ion battery system. According to the electrochemical impedance spectroscopy (EIS) of the lithium-ion battery, the low frequency section can be described by a constant phase element (CPE), and a parallel connection of a CPE and a resistor [20] can be employed to describe the medium frequency section of the spectrum, which is usually a compressed semicircle. In [21], the CPE has been accurately represented by the fractional order calculus (FOC) method. Based on the FOC method, many researchers have made efforts to build electrochemical models and ECMs, and have obtained notable achievements. In [5,6], the electrochemical model of the lithium-ion battery has been built based on FOC. In [20,22,23], fractional order ECMs of different structures for the lithium-ion battery are built considering the CPE, and the experiments show acceptable precision. Moreover, in [5], a fractional differential equation is employed to partially describe the electrochemical characteristics of the lithium-ion battery, and thus reduces the number of model parameters. Furthermore, only three parameters are adopted to build the FOM for the lithium-ion battery in [6], of which the error is declared to be within 0.5%. In this way, the model greatly simplifies the electrochemical model complexity. 
In order to determine the structure and composition of the FOM for the lithium-ion battery, it is imperative to accurately identify the model parameters. In [6], the voltage response curve when the battery is discharged with a step current is classified into three sections, and the voltage drop, curvature and gradient are respectively introduced to fit the three parameters of the fractional order electrochemical model. This method relies on an appropriate partitioning of the voltage response curve and high fitting precision. For an ECM based on the FOC method, the model orders as well as the resistor and capacitor parameters need to be identified step by step [20,22]. First, the phase angle of the CPE is analyzed based on the impedance spectroscopy to determine the order. A drawback is that measuring the impedance spectrum requires dedicated equipment at appreciable cost [22,23]. In [20], a least squares identification method is applied to identify the ohmic resistance and resistance-capacitance parameters, but this method is only feasible under the premise of commensurate orders [24]. In [22], a Levenberg-Marquardt optimization algorithm is introduced to solve for the gradient. In [23], a FOM of the lithium-ion battery is proposed; however, the parameter identification of the FOC model is not completed. The main purpose of modeling the battery is to estimate the battery's inner statuses, including SOC and SOH. SOC estimation is an essential task of the BMS; the SOC represents the ratio of the remaining available capacity to the current rated capacity. The widely accepted estimation method is the coulomb counting method, which is also called the ampere-hour counting method [8,25]. This method is easy to implement; however, it can easily be affected by current measurement and system noise, and it also needs to know the initial SOC value. The accumulation of these issues can deteriorate the SOC estimation. A simple method based on the relationship between OCV and SOC is direct and easy to implement; however, it needs a highly precise OCV estimation or measurement [26,27]. Essentially, the coulomb counting method and the interpolation method based on OCV are both open-loop methods [28], and they cannot be regulated with the help of the output measurement. The extended Kalman filter (EKF) is an optimal state estimator for nonlinear systems and has been widely applied to estimate the battery SOC [29][30][31][32][33]. In [29,30], an EKF together with an optimization algorithm for identifying the model parameters is applied to estimate the SOC on board. In [31][32][33], a double-EKF-based algorithm is proposed to realize parameter identification and state estimation simultaneously. The underlying principle is that the EKF filters the system state noise and measurement noise, and simultaneously utilizes the feedback of the output voltage to correct the SOC estimate obtained by the coulomb counting method [28,[34][35][36]. Thus, the SOC estimation precision can be improved to some extent. In addition, this estimation method is independent of the initial SOC value, whose determination is an essential and difficult task for the coulomb counting method. Another merit is that the recursive formulation can easily be applied in embedded computer systems, and it has therefore been widely employed in real applications [35,36]. 
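As a concrete point of reference for the open-loop behaviour described above, the short sketch below implements plain coulomb counting. It is an illustrative example only, not the paper's implementation; the function name, the sign convention (discharge current taken as positive here) and the 20 Ah default capacity are our assumptions.

```python
# Minimal coulomb-counting sketch. SOC is propagated open-loop from an assumed
# initial value; any current-sensor bias or initial-SOC error accumulates over
# time, which is the drift that closed-loop observers such as the EKF correct.

def coulomb_count(soc0, currents_A, dt_s=1.0, capacity_Ah=20.0, eta=1.0):
    """Return the SOC trajectory; discharge current is taken as positive."""
    soc, traj = soc0, []
    for i in currents_A:
        soc -= eta * i * dt_s / (capacity_Ah * 3600.0)
        traj.append(soc)
    return traj

# Example: a constant 1C (20 A) discharge for one hour drains the cell from 100% to ~0%.
trace = coulomb_count(1.0, [20.0] * 3600)
print(round(trace[-1], 3))
```

Because the update never consults the measured terminal voltage, any bias in the current measurement or error in the initial SOC persists indefinitely; this is exactly the weakness that the voltage-feedback correction of the (FO)EKF is intended to remove.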
In this paper, based on the widely adopted Thevenin and PNGV models, the fractional order Thevenin (FOThevenin) and fractional order PNGV (FOPNGV) models are built and compared. A genetic algorithm (GA) is applied to simultaneously identify the orders of the FOMs and the resistor and capacitor parameters of the lithium-ion battery. The precisions of these two models are compared and analyzed. Finally, an EKF is applied to estimate the SOC, and the results based on the different FOMs and IOMs are compared and summarized. FOC Definition There are a variety of definitions for FOC [37]. Among them, Grünwald-Letnikov defined it in a discrete form, which is widely applied in the numerical solution of FOC. Here, the α-order FOC of the state x at time step k can be defined as ∆^α x_k = (1/T_s^α) Σ_{j=0}^{k} (−1)^j C(α, j) x_{k−j} (1), with the binomial coefficient C(α, j) = 1 for j = 0 and C(α, j) = α(α − 1)⋯(α − j + 1)/j! for j > 0 (2), where ∆ is the differential operator, N − 1 < α < N, T_s is the sample time, and k ∈ N+. According to this definition, the α-order FOC of x at sample k is the weighted sum of the states from the initial state to the current state, and the weighting coefficient is related to the sample time T_s, the calculus order and the distance j. When the distance is smaller, the absolute value of the weighting coefficient is larger, and vice versa. In real applications, when the distance from the current state x is larger, the absolute value of the weighting coefficient is smaller. To decrease the memory storage size and computation burden, the number of summed items can be reduced under the premise of meeting the calculation precision. Therefore, a recursive sum window of length L can be considered, and thus ∆^α x_k ≈ (1/T_s^α) Σ_{j=0}^{L} (−1)^j C(α, j) x_{k−j} (3). Based on Equation (3), it can be observed that by using a recursive window of length L, the weighted sum of the states near the current state is selected to determine the FOC. In order to determine the recursive length L, the weighting coefficient in Equation (1) can be defined as w_j = (−1)^j C(α, j) (4), where the recursive calculation can be expressed as w_0 = 1, w_j = (1 − (1 + α)/j) w_{j−1} for j ≥ 1 (5). As shown in Figure 1, it can be observed that when α is selected as 0.5 and 0.99, respectively, which are near the mean and maximum values of the fractional orders of the FOMs, the magnitude of the coefficient decreases rapidly; when the fractional order becomes larger, the coefficient declines faster. Moreover, when j is larger than 70, the coefficient is less than 0.001, and the corresponding voltage contribution is below 1 mV, which is enough to ensure the precision. Considering the voltage sampling precision and the amount of calculation, L is set to 70, which improves the calculation speed and decreases the demand on memory capacity without compromising the precision. 
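The following sketch, written as a minimal illustration rather than the paper's code, computes the recursive weights of Equations (4)-(5) and the truncated fractional difference of Equation (3); the variable names and the default window length L = 70 follow the text above.

```python
import numpy as np

def gl_weights(alpha: float, L: int) -> np.ndarray:
    """Grunwald-Letnikov weights: w_0 = 1, w_j = (1 - (1 + alpha)/j) * w_{j-1}."""
    w = np.ones(L + 1)
    for j in range(1, L + 1):
        w[j] = (1.0 - (1.0 + alpha) / j) * w[j - 1]
    return w

def fractional_difference(x_hist: np.ndarray, alpha: float, Ts: float, L: int = 70) -> float:
    """Approximate the alpha-order difference of the newest sample x_hist[-1]
    using only the last L+1 samples (the short-memory window)."""
    w = gl_weights(alpha, L)
    window = np.asarray(x_hist)[-(L + 1):][::-1]       # newest sample first
    return float(np.dot(w[:len(window)], window)) / Ts ** alpha

# The weights decay quickly: for alpha = 0.5 the 70th weight is already below 0.001,
# which is what motivates truncating the memory without a noticeable loss of precision.
print(abs(gl_weights(0.5, 70)[-1]))
```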
Fractional Order Models of Thevenin and PNGV During the SOC calculation of the lithium-ion battery, the Thevenin and PNGV models have been widely adopted due to their limited number of parameters and high precision. Their ECM schematics are shown in Figure 2. For the Thevenin model, the voltage equations can be formulated as C_p ∆^α U_p = I_L − U_p/R_p and U_t = V_oc + U_p + R_0 I_L (6), with the load current I_L taken as positive for charging, where α is the fractional derivation order of the medium frequency section of the EIS of the battery, 0 < α ≤ 1. When α is equal to 1, Equation (6) turns into the integral order Thevenin (IOThevenin) model. Here, we suppose x = [U_p] and y = V_oc − U_t, so the FOThevenin model can be expressed as ∆^α x = A x + B I_L, y = C x + D I_L (7), where A = −1/(R_p C_p), B = 1/C_p, C = −1, and D = −R_0. According to Equation (1), its discrete form can be written as ∆^α x_{k+1} = (1/T_s^α) Σ_{j=0}^{k+1} (−1)^j C(α, j) x_{k+1−j} (8). Thus, Equation (7) can be further formulated as (1/T_s^α) Σ_{j=0}^{k+1} (−1)^j C(α, j) x_{k+1−j} = A x_k + B I_{L,k} (9). Now, x_{k+1} can be solved as x_{k+1} = (T_s^α A + α I) x_k + T_s^α B I_{L,k} − Σ_{j=2}^{k+1} (−1)^j C(α, j) x_{k+1−j} (10), where I is the identity matrix of size 1 × 1, and correspondingly, the output equation can also be discretized as y_k = C x_k + D I_{L,k} (11). Hence, Equations (10) and (11) together determine the FOC discrete state equation and output equation of the Thevenin model. Similarly, the voltage equations of the PNGV model can be written as C_b ∆^α U_b = I_L, C_p ∆^β U_p = I_L − U_p/R_p, and U_t = V_oc + U_p + U_b + R_0 I_L (12), where α and β are the fractional derivation orders of the low frequency section and the medium frequency section of the EIS of the battery, respectively. When α and β both equal 1, the model is transformed into the integral order PNGV (IOPNGV) model. Like the above process, the PNGV FOC discrete state-space function can be formulated with U_b and U_p as the state variables and with the output variable y = V_oc − U_t, by applying the discretization of Equations (8)-(10) element-wise with the orders α and β; in this case the identity matrices become 2 × 2, A = diag(0, −1/(R_p C_p)) and B = [1/C_b, 1/C_p]^T. The output equation of the FOPNGV model is mostly the same as that of the IOThevenin model, and the only difference is that C = [−1, −1]. After building these two kinds of FOMs, parameter identification is carried out to estimate the model parameters and validate the model precision. Parameter Identification GA is an intelligent optimization algorithm that simulates the process of evolution. It can be applied to identify the orders and the values of resistance and capacitance simultaneously. GA has been successfully applied in parameter identification [38,39] and optimal control [40,41] by means of a series of actions including crossover, elitism selection and mutation. In this paper, GA is employed to identify the model parameters offline, providing a globally optimal solution over the whole SOC range. 
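A minimal sketch of how the identification step can be organised is given below: it simulates the FOThevenin terminal voltage following the discrete form of Equations (10)-(11) as reconstructed above, and scores a candidate parameter set with the root-mean-square voltage error that a GA (or any other optimiser) would minimise. The current is taken as positive for charging, matching the sign convention of Equation (6); the helper name `ocv_of_soc`, the parameter packing and the data arrays are illustrative assumptions, not the paper's code.

```python
import numpy as np

def simulate_fothevenin(params, current, soc, ocv_of_soc, Ts=1.0, L=70):
    """params = (R0, Rp, Cp, alpha); current > 0 on charge; returns terminal voltage."""
    R0, Rp, Cp, alpha = params
    w = np.ones(L + 1)                               # Grunwald-Letnikov weights
    for j in range(1, L + 1):
        w[j] = (1.0 - (1.0 + alpha) / j) * w[j - 1]
    A, B = -1.0 / (Rp * Cp), 1.0 / Cp
    up_hist = [0.0]                                  # polarisation voltage history
    Ut = np.zeros(len(current))
    for k, i_k in enumerate(current):
        past = up_hist[::-1][:L]                     # newest state first
        memory = sum(w[j + 1] * past[j] for j in range(1, len(past)))
        up_next = (Ts**alpha * A + alpha) * past[0] + Ts**alpha * B * i_k - memory
        up_hist.append(up_next)
        Ut[k] = ocv_of_soc(soc[k]) + up_next + R0 * i_k
    return Ut

def rmse_fitness(params, current, soc, measured_Ut, ocv_of_soc):
    """Root mean square error between measured and simulated terminal voltage."""
    model_Ut = simulate_fothevenin(params, current, soc, ocv_of_soc)
    return float(np.sqrt(np.mean((np.asarray(measured_Ut) - model_Ut) ** 2)))
```

In the identification step, a GA (or, as a stand-in, an evolutionary optimiser such as scipy.optimize.differential_evolution) would search over bounds on R0, Rp, Cp and alpha to minimise this fitness for each SOC segment of the HPPC data.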
In order to identify the model parameters, the hybrid pulse power characterization (HPPC) experiment is usually carried out to characterize the dynamic behavior of the battery. The research object in this paper is a lithium-ion polymer battery, consisting of a Li(NiCoMn)O2-based cathode and a graphite-based anode. The energy density is 174 Wh/kg, and the nominal voltage and maximum charging voltage are 3.65 V and 4.15 V, respectively. Meanwhile, the OCV curve can be determined from the voltage measured after the recommended rest period. In this paper, the calibrated battery capacity is 20 Ampere-hour (Ah). After being fully charged, the battery is left at rest to measure the OCV, and 10% of the battery capacity is then discharged. The battery is then left at rest again until the inner electrochemical reactions reach equilibrium, at which point the OCV value corresponding to 90% SOC can be quantified. These steps are repeated until the battery is fully discharged. During the experiment, a combined current pulse test, which includes a 5C current charge and a 5C current discharge, follows each OCV measurement, where C denotes the battery rated capacity value with unit Ah. The main purpose of this pulse test is to excite the battery dynamic performance. In this paper, both pulse durations are 10 s, with an interval of 40 s between them. The current and voltage response curves are shown in Figure 3. It can be clearly observed that the voltage ranges from around 4.13 V to 2.98 V when the battery is discharged from 100% to 0% SOC. Finally, the voltage returns to 3.50 V after the battery is fully discharged.
Based on the measured OCV curve, shown in Figure 4, a six-order polynomial equation is employed to simulate the voltage variation:

$$V_{oc}(z) = k_6 z^6 + k_5 z^5 + k_4 z^4 + k_3 z^3 + k_2 z^2 + k_1 z + k_0.$$

Here, z represents the SOC, and k_0, k_1, k_2, k_3, k_4, k_5 and k_6 are the equation coefficients; k_1 through k_6 equal 8.408, 39.03, −22.05, 5.175, 0.05808 and 3.501, respectively.
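A least-squares fit is one straightforward way to obtain such a sixth-order OCV-SOC polynomial from the measured OCV points. The sketch below is illustrative only; the (SOC, OCV) pairs in it are made-up placeholders, not the paper's measurements.

```python
import numpy as np

# hypothetical (SOC, OCV) pairs from an incremental-OCV test; placeholders only
soc = np.linspace(0.0, 1.0, 11)
ocv = np.array([2.98, 3.45, 3.55, 3.60, 3.63, 3.66, 3.70, 3.78, 3.88, 4.00, 4.13])

# fit V_oc(z) = k6*z^6 + ... + k1*z + k0 (numpy returns the highest degree first)
coeffs = np.polyfit(soc, ocv, deg=6)
v_oc = np.poly1d(coeffs)

print("k6..k0:", np.round(coeffs, 4))
print("OCV at 50% SOC:", round(float(v_oc(0.5)), 3), "V")
```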
During the identification process, a fitness value of GA is introduced to evaluate the model precision based on the root mean square error of the output voltage,

$$\text{fitness} = \sqrt{\frac{1}{M}\sum_{k=1}^{M} e_k^2},$$

where M is the number of samples, e_k = y_k − ŷ_k, and y_k = V_oc − U_t(k) is the voltage difference between the OCV and the terminal voltage; ŷ_k is the estimated value of y_k. The parameter identification process is illustrated in Figure 5. This parameter identification method based on GA can be applied to both the lithium-ion battery FOM and IOM.

The parameter identification results of these four types of lithium-ion battery models, i.e., the FOPNGV, FOThevenin, IOPNGV and IOThevenin models, are shown in Figure 6. It is necessary to mention that the parameters listed in the figure are interpolated with SOC. From Figure 6a, it can be seen that R_0 varies from 2 milliohm to 8 milliohm when the SOC ranges from 10% to 90%. It can also be observed that R_0 of the FOMs is larger than that of the IOMs when the SOC is more than 20%. For the FOPNGV model, R_0 varies from 8 milliohm to 3 milliohm. For the IOPNGV model, R_0 decreases from 6.5 milliohm to 3 milliohm during 10% to 20% SOC, and varies from 2 milliohm to 3 milliohm when the SOC ranges from 20% to 90%. The variation of R_p is shown in Figure 6b, and it can be summarized that R_p is always less than 1 ohm; it is less than 0.2 ohm during 20% to 50% SOC, while it varies obviously during 50% to 70% SOC and can reach 0.4 ohm. The changing characteristics of C_p and C_b are shown in Figure 6c,d, respectively. When the SOC is more than 20%, C_p maintains at around 80 kF, and is almost the same for these four models. C_b ranges from 20 kF to 100 kF within the SOC range of 10% to 90%, and shows varying consistency between the FOPNGV and IOPNGV models.
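A minimal sketch of the RMSE fitness that such a GA would minimize for one candidate parameter set is given below. It is illustrative only: the simulate_output helper is a hypothetical stand-in for the FOM/IOM voltage simulation, and no particular GA library is assumed.

```python
import numpy as np

def rmse_fitness(params, simulate_output, current, measured_y):
    """Fitness = sqrt(mean(e_k^2)) with e_k = y_k - y_hat_k and y = V_oc - U_t.
    `params` might hold, e.g., (R0, Rp, Cp, alpha) for the FOThevenin model."""
    predicted_y = simulate_output(params, current)   # hypothetical model simulation
    e = np.asarray(measured_y) - np.asarray(predicted_y)
    return float(np.sqrt(np.mean(e ** 2)))

# A GA would evolve `params` through crossover, elitism selection and mutation,
# keeping the candidate with the smallest rmse_fitness over each HPPC segment.
```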
It is relatively steady with a higher capacitance value when the SOC is above 20%, and there is an obvious decline when the SOC is below 20%. The fractional order α is shown in Figure 6e. For the FOPNGV model, α is steady and its mean value is around 0.56, while for the FOThevenin model, α fluctuates largely and its mean value is 0.61. For the FOPNGV model, β is shown in Figure 6f. Its steady value is 0.56 when the SOC is above 20%, and there is a considerable increment when the SOC is below 20%. Therefore, α and β can be seen as commensurate orders when the SOC is above 20% for the FOPNGV model; however, β of the FOPNGV model varies obviously when the SOC is below 20%, and α of the FOThevenin model fluctuates from 0.95 to 0 during 0% to 100% SOC, so in general α and β cannot be treated as commensurate orders.

The current profile of a hybrid cycle is shown in Figure 7, which is used to verify the identified model parameters. The hybrid cycle simulates step-pulse current charge, constant current discharge, standstill tests and the Urban Dynamometer Driving Schedule (UDDS) dynamic condition. Firstly, a multi-step pulse current excitation is used to charge the battery from 60% SOC until full, so that the SOC calibration can be completed. Then, the battery is discharged to 83.5% SOC with a 1C current, followed by a standstill interval until the battery reaches inner balance. Finally, two UDDS cycle experiments are conducted to verify the model dynamic performance. From Figure 7, it can be observed that the test cycles during 3200 s to 5000 s, 6000 s to 7600 s, and 9000 s to 10,600 s are static test conditions, and the test cycles during 5200 s to 5700 s, 7600 s to 9000 s, and 10,800 s to 12,200 s are dynamic test conditions.
Figure 8 shows the voltage estimation errors of these four types of models. It can be observed that, under static test conditions, the output voltage errors of the IOMs, including the mean absolute error (MAE) and standard deviation (SD), are 0.063, 0.064, 0.397, and 0.610, respectively, which are obviously less than those of the FOMs, as listed in Table 1. Under dynamic conditions, as shown in Table 2, the output voltage MAE and SD of the FOMs are 0.019, 0.091, 2.311 and 3.758, respectively, which are less than those of the IOMs. Under the same driving conditions, the voltage output errors of the PNGV models, whether FOM or IOM, are less than those of the Thevenin models, as shown in Tables 1 and 2. The reason why the PNGV models show smaller voltage errors is that the capacitor C_b of the PNGV model can describe the OCV variation induced by the accumulation of the load current and characterize the low-frequency behavior of the battery, thereby bringing higher precision. The experiments show that, under the dynamic driving condition test, the variation of the orders of the FOM reflects the tracking performance for the historical voltage of the capacitor. Therefore, it can describe the memory effect of the capacitor voltage and improve the precision of the capacitor voltage variation, thereby improving the tracking of the battery terminal voltage. To sum up, the FOMs can better capture the dynamic performance compared with the IOMs. It is necessary to note from Tables 1 and 2 that the MAEs of the terminal voltage for the FOPNGV and FOThevenin models are less than 1.5 mV, and even less than 0.1 mV under dynamic conditions, proving that the accuracies of the FOMs calculated with the recursive length L = 70 can satisfy the modeling demand.
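The MAE and SD figures quoted above can be reproduced from a voltage-error trace with a few lines of code; the sketch below is illustrative, and the array names are placeholders.

```python
import numpy as np

def voltage_error_metrics(measured_mv, estimated_mv):
    """Return (MAE, SD) of the terminal-voltage estimation error in millivolts."""
    err = np.asarray(measured_mv) - np.asarray(estimated_mv)
    mae = float(np.mean(np.abs(err)))
    sd = float(np.std(err, ddof=1))          # sample standard deviation
    return mae, sd
```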
FOC EKF Application

The calculation process of the FOC EKF can be formulated based on the following seven equations: the FOC state-space equation, the output equation, the time-variant update of the state, the time-domain estimation of the estimation error covariance, the calculation of the Kalman gain, the state update from the measurement, and the measurement update of the estimated error covariance [30-33].
These equations are, in order: the fractional-order state equation; the output equation, which gives the predicted output ŷ_k; the time-domain update of the state; the time-domain update of the estimation error variance,

$$P_k^- = (A_{k-1} + \gamma_1)\, P_{k-1}\, (A_{k-1} + \gamma_1)^T + Q_{k-1} + \sum_{j=2}^{k} \gamma_j P_{k-j} \gamma_j^T;$$

the Kalman gain matrix calculation; the measurement update of the state,

$$\hat{x}_k = \hat{x}_k^- + K_k (y_k - \hat{y}_k); \tag{21}$$

and the measurement update of the estimation error variance. Here, ( )⁻ and ( )ˆ denote, respectively, the a priori and a posteriori estimations of the state x, ( )^T indicates the matrix transpose, ( )⁻¹ denotes the inverse matrix, w_k represents the noise of the system state, v_k expresses the measurement noise, w_k and v_k are mutually independent zero-mean white noises, Q_k and R_k are the variances of w_k and v_k, and γ_j collects the GL binomial terms of the fractional orders of the state variables.

In particular, the discrete equation of SOC estimation based on the coulomb counting method can be written as

$$z_k = z_{k-1} - \frac{T_s\, I_{k-1}}{C_n},$$

where C_n is the battery rated capacity. According to Equations (10) and (13), the SOC estimation based on the FOThevenin and FOPNGV models can be determined. In the FOThevenin model, the state is set as x = [U_p z]^T, with the corresponding order vector [α 1], 0 < α ≤ 1. In the FOPNGV model, the state is x = [U_p U_b z]^T, with the corresponding order vector [α β 1]. In the next step, these algorithms are implemented to verify their performance by experiments.
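To show how these steps fit together for the FOThevenin case, the following Python sketch implements one predict/correct iteration of a simplified fractional-order EKF with state x = [U_p, z] and order vector [α, 1]. This is my own rough sketch under stated assumptions, not the authors' implementation: the measurement is taken directly as the terminal voltage U_t = OCV(z) − U_p − R_0·I, the ocv and docv_dz callables are hypothetical OCV-curve helpers, discharge current is treated as positive, and the fractional history term of the covariance update is omitted for brevity.

```python
import numpy as np

def gl_weights(alpha: float, n: int) -> np.ndarray:
    """GL weights w_j = (-1)^j * C(alpha, j) for j = 0..n."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = (1.0 - (1.0 + alpha) / j) * w[j - 1]
    return w

def fo_ekf_step(x_hist, P, i_prev, ut_meas, R0, Rp, Cp, Cn, alpha,
                ocv, docv_dz, Q, R, Ts=1.0, L=70):
    """One simplified predict/correct step for the FOThevenin model.
    x_hist: list of past state vectors [U_p, z], newest last."""
    orders = np.array([alpha, 1.0])
    A = np.array([[-1.0 / (Rp * Cp), 0.0],
                  [0.0, 0.0]])
    B = np.array([1.0 / Cp, -1.0 / Cn])        # the z row is coulomb counting
    # ---- time update with GL history correction ----
    x_prev = x_hist[-1]
    Ts_a = np.diag(Ts ** orders)
    gamma1 = np.diag(orders)                   # j = 1 GL term for each order
    x_pred = Ts_a @ (A @ x_prev + B * i_prev) + gamma1 @ x_prev
    n = min(L, len(x_hist))
    w_up = gl_weights(alpha, n)                # weights for the U_p state
    for j in range(2, n + 1):                  # integer-order z has zero weights for j >= 2
        x_pred[0] -= w_up[j] * x_hist[-j][0]
    A_d = Ts_a @ A + gamma1
    P_pred = A_d @ P @ A_d.T + Q               # fractional covariance history omitted
    # ---- measurement update ----
    ut_pred = ocv(x_pred[1]) - x_pred[0] - R0 * i_prev
    H = np.array([[-1.0, docv_dz(x_pred[1])]])    # Jacobian of U_t w.r.t. [U_p, z]
    S = float(H @ P_pred @ H.T) + R
    K = (P_pred @ H.T) / S                     # 2x1 Kalman gain
    x_hat = x_pred + K.ravel() * (ut_meas - ut_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    x_hist.append(x_hat)
    return x_hat, P_new
```

In this simplified form, the z component reduces exactly to the coulomb-counting recursion above, while the U_p component follows the discrete FOThevenin state equation.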
Experiment Validation

Under the hybrid cycle tests presented in Figure 7, the SOC estimation based on EKF is shown in Figure 9a. Since the initial SOC has been calibrated in advance, the SOC estimation based on the coulomb counting method can be regarded as the reference value in view of the highly precise current measurement. It can be observed that, compared with the IOPNGV and IOThevenin estimations, the SOC estimation based on the FOPNGV and FOThevenin models shows larger oscillation during the convergence phase due to error accumulation, as shown in Figure 9b. During the beginning period of the pulse current, i.e., when t is less than 4000 s, the estimation error of the FOC Kalman filter is within 6%, with the largest error occurring during the first pulse current excitation, and the estimation error of the IOC Kalman filter is less than 1%. After tracking the reference SOC value (t > 4000 s), the estimation MAEs based on the FOPNGV and FOThevenin models are reduced by 36.9% and 92.0%, compared with those based on the IOPNGV and IOThevenin models. The enlarged SOC estimation curve under the UDDS cycle test (t > 7000 s) is shown in Figure 9c. The estimation error of the FOC extended Kalman filter is less than 0.5%, and the SOC estimation error of the IOC extended Kalman filter is less than 2% (t > 4000 s). It can be concluded that, under the same driving conditions, the SOC estimation error based on the FOPNGV and IOPNGV models is less than that of the corresponding Thevenin models (t > 4000 s).

In order to examine the convergence performance with different initial values, the initial SOC value is set to 30% when the four EKFs take effect. The SOC estimation curves are shown in Figure 9d. It can be found that the FOMs have a faster convergence speed than the IOMs. After 53 and 158 samples, the SOC estimations based on the FOPNGV model and the FOThevenin model converge to the real value, respectively. The numbers of convergence samples for the IOPNGV and IOThevenin models are 200 and 204, respectively. However, there exist obvious oscillations in the IOC model estimation during the initial period due to the accumulated error.

Conclusions

In this paper, the FOThevenin and FOPNGV models of lithium-ion batteries are built based on the conventional IOMs, and GA is employed to identify the model parameters and fractional orders simultaneously. Based on the four FOMs and IOMs, EKF is applied to estimate the SOC, and experiments are performed to verify the model precision through the hybrid cycle test. The results prove that the PNGV model, whether using the FOC or the IOC method, has higher precision than the Thevenin model; correspondingly, its SOC estimation is also more accurate. The order of the FOM varies with the SOC, and the FOMs can therefore simulate the battery terminal voltage variation more precisely. The SOC estimation based on the FOMs converges to the real value faster and has smaller errors under dynamic cycles.

To sum up, the findings in this paper provide a new way of dynamic modeling and SOC estimation for the lithium-ion battery in the BMS. As the next step of this work, the research will focus on the influence of temperature and aging on the FOMs of the battery and the corresponding fractional-order EKF application for SOC estimation. In addition, hardware implementation of the proposed algorithms in EVs will be taken into account.
Figure 5. The parameter identification method based on GA, which can be applied to both the lithium-ion battery FOM and IOM.
Figure 7. The current profile of the hybrid cycle test.
Figure 8. Terminal voltage estimation error under the hybrid cycle.
Figure 9. EKF for SOC estimation of the lithium-ion battery: (a) estimation under hybrid driving cycles; (b) estimation error of (a); (c) estimation under UDDS driving cycles; (d) convergence of EKF.
Table 1. Terminal output errors under static conditions (Unit: mV).
Table 2. Terminal output errors under dynamic conditions (Unit: mV).
2016-03-14T22:51:50.573Z
2016-03-10T00:00:00.000
{ "year": 2016, "sha1": "38230f158a1385a4ed2040d1965d85d62415968f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1073/9/3/184/pdf?version=1457597715", "oa_status": "GOLD", "pdf_src": "Crawler", "pdf_hash": "38230f158a1385a4ed2040d1965d85d62415968f", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
254626337
pes2o/s2orc
v3-fos-license
Fluid Shear Stress Regulates Osteogenic Differentiation via AnnexinA6-Mediated Autophagy in MC3T3-E1 Cells

Fluid shear stress (FSS) facilitates bone remodeling by regulating osteogenic differentiation, and extracellular matrix maturation and mineralization. However, the underlying molecular mechanisms of how mechanical stimuli from FSS are converted into osteogenesis remain largely unexplored. Here, we exposed MC3T3-E1 cells to FSS with different intensities (1 h FSS with 0, 5, 10, and 20 dyn/cm2 intensities) and treatment durations (10 dyn/cm2 FSS with 0, 0.5, 1, 2 and 4 h treatment). The results demonstrate that the 1 h of 10 dyn/cm2 FSS treatment greatly upregulated the expression of osteogenic markers (Runx2, ALP, Col I), accompanied by AnxA6 activation. The genetic ablation of AnxA6 suppressed the autophagic process, demonstrated by lowered autophagy markers (Beclin1, ATG5, ATG7, LC3) and decreased autophagosome formation, and strongly reduced the osteogenic differentiation induced by FSS. Furthermore, the addition of the autophagic activator rapamycin to AnxA6 knockdown cells stimulated the autophagic process and coincided with higher expression of the osteogenic proteins ALP and Col I under both static and FSS conditions. In conclusion, the findings in this study reveal a hitherto unidentified relationship between FSS-induced osteogenic differentiation and autophagy, and point to AnxA6 as a key mediator of autophagy in response to FSS, which may provide a new target for the treatment of osteoporosis and other diseases.

Introduction

Bone remodeling is a dynamic process in which continuous bone resorption and bone formation adapt to mechanical stimuli in the environment of bones and give rise to a mature, intact, and stable bone structure [1]. Mechanical signals have profound impacts on bone mass regulation, bone homeostasis, and skeleton adaptation [2][3][4]. As an example, the loss of mechanical stimulation leads to disuse osteoporosis [3,4]. Generally, mechanical forces are delivered to the bone tissues and sensed by mechanosensitive cells (osteocytes, osteoblasts, osteoclasts, and their progenitors), resulting in osteogenesis via cell proliferation, differentiation, and apoptosis [5][6][7]. In native bone tissues, bone cells reside within a complex microenvironment consisting of different mechanical stimuli, such as fluid shear stress (FSS), matrix stiffness, or mechanical loads, which lead to specific biological functions [1]. The daily movement process can facilitate the generation of fluid flow in the marrow cavity and on the endosteal surface by dynamic intramedullary pressurization and by applying FSS, ranging from 0.5 to 3 Pa, to bone cells [7,8]. Many in vitro studies also verified that osteoblasts could sensitively respond to a wide range of different shear stresses, of which 0.5-2 Pa is the most commonly used range [7,9]. The appropriate stimulation of FSS greatly contributes to the development and reconstruction of bone tissues through the activation of the gene expression of Runx2 and Osterix, and the secretion of collagen type I (Col I), osteocalcin (OC), and alkaline phosphatase (ALP).

2.1. FSS Induces Osteogenic Differentiation

To investigate the effect of FSS on bone formation, we first explored how FSS influences the osteogenic differentiation of the mechanosensitive cell line MC3T3-E1. FSS with different intensities of 0 (static control), 5, 10, and 20 dyn/cm2 was applied to MC3T3-E1 cells for 1 h.
The Western blot results show that FSS significantly promoted the expression of the osteogenic markers Col I and ALP in comparison with the control group (Figure 1A,B). Col I expression was upregulated as the FSS intensity increased, with the highest expression observed in the 20 dyn/cm2 group, while the expression of ALP increased to its peak under the condition of 10 dyn/cm2. In order to optimize the effect of FSS on osteogenic differentiation, different loading times of 0 (static control), 0.5, 1, 2, or 4 h under an FSS intensity of 10 dyn/cm2 were applied to MC3T3-E1 cells. We observed that the 10 dyn/cm2 intensity of FSS provoked the expression of the osteogenic proteins Runx2, ALP, and Col I, which increased to the highest levels after 1 h of FSS treatment (Figure 1C,D). In addition, ALP staining and ALP activity in MC3T3-E1 cells were performed and monitored, respectively, to further explore the effect of FSS on osteogenic differentiation. As shown in Figure 1E,F, compared with the static group, ALP-positive cells increased by 1.8-fold when exposed to 10 dyn/cm2 FSS for 1 h. Consistently, ALP activity also showed a significant enhancement, up to 3.6-fold (Figure 1G). These results suggest that the application of FSS with 10 dyn/cm2 intensity for 1 h is optimal for promoting the osteogenic differentiation of MC3T3-E1 cells.

Figure 1. (A,B) Western blot analysis and quantification of osteogenic protein expression in MC3T3-E1 cells when exposed to 0 (static control), 5, 10, or 20 dyn/cm2 FSS for 1 h. GAPDH served as an internal control (n = 3). (C,D) Western blot analysis and quantification of osteogenic protein expression in MC3T3-E1 cells when exposed to 10 dyn/cm2 FSS for 0 (static control), 0.5, 1, 2, or 4 h. GAPDH served as an internal control (n = 3). (E) ALP staining after 5 additional days of osteogenic induction when exposed to 10 dyn/cm2 FSS for 1 h. (F) Statistical bar graph showing the percentage of ALP-positive cells in (E) (n = 3). (G) Quantification of ALP activity in MC3T3-E1 cells with or without 1 h of 10 dyn/cm2 FSS stimulation (n = 3). All data are presented as mean ± SEM. * p < 0.05 versus the static control group.

FSS Promotes the Expression of AnxA6 in MC3T3-E1 Cells

AnxA6, as an important member of the annexin protein family, which is abundant in bone tissues, plays a key role in ECM mineralization [28]. Previous studies showed a strong correlation between annexins and mechanical factors [14,15]. However, they failed to elucidate whether AnxA6 can respond to FSS during the process of osteogenic differentiation. In order to investigate the response of AnxA6 to FSS, 0, 5, 10, and 20 dyn/cm2 FSS were separately loaded on MC3T3-E1 cells for 1 h. As determined via Western blot, AnxA6 displayed a significant elevation under 5 and 10 dyn/cm2 FSS, but it decreased markedly when exposed to 20 dyn/cm2 FSS (Figure 2A,B).
To further explore the effect of the FSS loading duration on AnxA6 expression, different loading times of 0, 0.5, 1, 2 and 4 h were applied to MC3T3-E1 cells under an intensity of 10 dyn/cm2 FSS. Compared to the static group, AnxA6 increased to the highest level after 1 h of FSS treatment, which is consistent with the previous results of the highest expression of osteogenic proteins, whereas AnxA6 expression decreased with 0.5 and 4 h of treatment (Figure 2C,D). Similar to the results of the Western blot analysis, an elevated immunofluorescence intensity of AnxA6 in MC3T3-E1 cells was visualized after treatment with 1 h of 10 dyn/cm2 FSS (Figure 2E).
AnxA6 accumulated to one side of the cytoplasm and showed a preference for the plasma membrane when exposed to FSS, which was different from the homogeneous distribution of AnxA6 in the cytoplasm observed in the control group (Figure 2E). These results demonstrate that applying 1 h of 10 dyn/cm2 FSS to cells can promote the expression of AnxA6, which is consistent with the expression of Runx2, ALP, and Col I shown in Figure 1, indicating a potential correlation between AnxA6 expression and osteogenic differentiation.

Figure 2. (A,B) Western blot analysis and quantification of AnxA6 expression in MC3T3-E1 cells when exposed to 0 (static control), 5, 10, or 20 dyn/cm2 FSS for 1 h. GAPDH served as an internal control (n = 3). (C,D) Western blot analysis and quantification of AnxA6 expression in MC3T3-E1 cells when exposed to 10 dyn/cm2 FSS for 0 (static control), 0.5, 1, 2, or 4 h. GAPDH served as an internal control (n = 3). (E) Representative immunofluorescence images showing the expression and distribution of AnxA6 (labelled by red fluorescence, indicated by yellow arrows) inside the cell and F-actin organization (labelled by green fluorescence) when exposed to 10 dyn/cm2 FSS for 1 h. Nuclei were stained with DAPI (blue), scale bar = 50 µm. All data are presented as mean ± SEM. * p < 0.05 versus the static control group.

AnxA6 Is Involved in FSS-Induced Osteogenic Differentiation

A stable AnxA6 knockdown MC3T3-E1 cell line was constructed to study the effect of AnxA6 on osteoblastic differentiation. Compared with the shCtrl group, the AnxA6 protein and gene were greatly downregulated in the shAnxA6 group, indicating the successful knockdown of AnxA6 (Figure 3A-C). In addition, the deficiency of AnxA6 led to less mineral deposition (Figure 3D), fewer ALP-positive cells (Figure 3E), and lower ALP activity (Figure 3F) in MC3T3-E1 cells. The suppression of osteogenic differentiation caused by the absence of AnxA6 was further confirmed by the decreased expression of the osteogenesis-associated proteins ALP and Col I (Figure 3G,H). To further explore the role of AnxA6 in FSS-regulated osteogenesis, 1 h of 10 dyn/cm2 FSS was applied to shCtrl and shAnxA6 MC3T3-E1 cells. As shown in Figure 3G,H, the expressions of ALP and Col I in shAnxA6 cells recovered to high levels when exposed to 1 h of 10 dyn/cm2 FSS treatment, but they still could not exceed those in shCtrl cells under the same FSS loading.
Figure 3. GAPDH served as an internal control (n = 3). All data are presented as mean ± SEM. * p < 0.05 versus the static shCtrl group; # p < 0.05 versus the FSS group.

AnxA6 Knockdown Impairs Autophagy under FSS Condition

Autophagy, identified as a necessary part of bone remodeling, can regulate osteoblast differentiation and mineralization [24]. Previous studies have shown that AnxA6 is involved in the process of autophagy, especially in the early stages of autophagosome biogenesis [26]. Primed by these findings, we explored whether FSS-induced AnxA6 could influence autophagy. Compared to the static group, cells exposed to 5, 10 and 20 dyn/cm2 FSS displayed higher expression of the autophagic markers Beclin1, ATG5 and ATG7, accompanied by more LC3B-I to LC3B-II transition, indicating the accelerated occurrence of autophagy (Figure 4A,B). We further studied the influence of the FSS loading time on autophagy by regulating the FSS duration. Compared with the static control, the expression of Beclin1, ATG5, and ATG7 increased in MC3T3-E1 cells treated with 0.5 and 1 h of 10 dyn/cm2 FSS, while p62, which can be absorbed and destroyed in autolysosomes [29], decreased greatly after 1 h of FSS treatment (Figure 4C,D). These results strongly suggest that the application of 1 h of FSS with 10 dyn/cm2 intensity to cells leads to a significant occurrence of autophagy.

Figure 4. (A,B) Western blot analysis and quantification of autophagic protein expression in MC3T3-E1 cells when exposed to 0 (static control), 5, 10, or 20 dyn/cm2 FSS for 1 h. GAPDH served as an internal control (n = 3).
(C,D) Western blot analysis and quantification of autophagic protein expression in MC3T3-E1 cells when exposed to 10 dyn/cm2 FSS for 0 (static control), 0.5, 1, 2, or 4 h. GAPDH served as an internal control (n = 3). All data are presented as mean ± SEM. * p < 0.05 versus the static control group.

In order to reveal the role of AnxA6 in FSS-induced autophagy, we applied 1 h of 10 dyn/cm2 FSS to shCtrl and shAnxA6 MC3T3-E1 cells. We found that Beclin1 and ATG5 were significantly increased by FSS in the shCtrl group, and greatly suppressed in AnxA6 knockdown cells, regardless of FSS loading (Figure 5A,B). Transmission electron microscopy (TEM) is an effective tool to observe the morphology of autophagosomes and identify the occurrence of autophagy. The TEM results in Figure 5C demonstrate trends similar to those of the Western blot analysis above. More autophagosomes were formed via FSS induction in shCtrl cells, whereas AnxA6 knockdown cells showed a decreased number of autophagosomes. In addition, an AdPlus-mCherry-GFP-LC3B infection assay was applied to track autophagosomes in MC3T3-E1 cells. The results in Figure 5D show more LC3B punctate dots in the cytoplasm when exposed to FSS, indicating the occurrence of autophagic flux, although the AnxA6 knockdown impeded this process. All these results suggest that FSS can induce the occurrence of autophagy via AnxA6 induction.

FSS Promotes Osteogenic Differentiation by Activating AnxA6-Mediated Autophagy

To further dissect the role of autophagy in osteogenic differentiation and matrix mineralization, we modulated the status of autophagy using an autophagic inhibitor (chloroquine) and an autophagic activator (rapamycin) during the in vitro mineralization process.
As shown in Figure 6A,B, after culturing MC3T3-E1 cells with an osteogenic medium for 7 or 14 days, the inhibition of autophagy by chloroquine greatly reduced the number of ALP-positive staining cells and inhibited the formation of mineralized nodules, while autophagic activation via rapamycin showed the exact opposite effect, promoting the mineralization process. Moreover, chloroquine addition decreased the expression of Beclin1 and ATG7, and the expression of ALP and Col I, in MC3T3-E1 cells. On the other hand, rapamycin upregulated autophagy- and osteoblastic-differentiation-associated protein expression (Figure 6C,D). These results indicate that alterations in osteoblast differentiation and mineralization were evoked by autophagic regulation. We next investigated whether FSS-induced AnxA6 enhanced osteoblast differentiation via activating autophagy. Since knocking AnxA6 down resulted in the suppression of autophagy and osteogenic differentiation under both static and FSS conditions, rapamycin was used to restore the autophagic flux suppressed by the AnxA6 knockdown, and the expression of osteogenic markers was subsequently detected in order to identify the effect of autophagy in this process. As shown in Figure 6E,F, preconditioning with rapamycin recovered the expression of ALP and Col I, which had been inhibited in the shAnxA6 group, especially under FSS loading conditions. Moreover, the expression of ALP and Col I showed a positive correlation with the expression of the autophagy markers Beclin1, ATG5, and ATG7. Overall, FSS contributes to the differentiation and mineralization of osteoblasts through AnxA6-mediated autophagy activation.
Discussion

Bones undergo a lifelong, mechanical-loading-associated remodeling process balancing bone-forming osteoblasts and bone-absorbing osteoclasts. As one of the key components of the bone multicellular units, osteoblasts are specifically responsible for the mineralization of the bone matrix by responding to mechanical changes from body weight, movement and gravity [2]; for example, appropriate exercise contributes to enhancing bone density and preventing bone loss [30]. Sun et al. revealed that the application of FSS to osteoblasts in vitro increased the expression of osteogenic-differentiation-related proteins, and that mechanical loading from exercise could promote osteogenesis in vivo [4,31]. Microgravity and mechanical unloading strongly influence bone structures, leading to the dysfunction of osteoblasts and decreased bone mineral density. In this study, we found a clear link between the properties of FSS (intensity and duration) and osteogenic differentiation; 1 h of 10 dyn/cm2 FSS loading was optimal for the differentiation and mineralization of osteoblasts (Figure 1). However, FSS-induced osteogenic differentiation is not always FSS magnitude- or duration-dependent. Higher stresses or a longer duration may lead to damaging or negative responses, which was also reported in previous studies [32]. Accordingly, the proper magnitude and duration of FSS are vital for osteogenesis. Since osteoblasts sensitively respond to a certain range of FSS and contribute to bone formation, the underlying molecular mechanisms, which greatly impact the prevention and treatment of bone diseases such as osteoporosis and fractures, need to be explored. An elevated AnxA6 protein level, which can respond to mechanical loading and influence cell proliferation [33], differentiation [28], migration [34], and other activities, was observed in our study under FSS loading (Figure 2). This finding is consistent with those of previous studies regarding the role of the AnxA protein family in mechanoresponses, which demonstrated that a disturbed flow condition facilitated the interaction between AnxA2 and integrin α5 to activate integrin in endothelial cells [14], and that AnxA5 mediated mechanotransduction by detecting the calcium response of osteoblasts to oscillating fluid flow [15].
In our study, we found that FSS with different intensities and durations led to different responses of AnxA6. In detail, AnxA6 increased under 5 and 10 dyn/cm2 FSS loading, while it decreased with 20 dyn/cm2, suggesting the existence of an FSS threshold for AnxA6 induction. In general, AnxA is homogeneously distributed in the cytoplasm; in response to certain stimuli, such as glucocorticoids [35,36], inflammation [36], a high concentration of calcium [37], or FSS [14], annexins translocate from the cytoplasm to the plasma membrane to participate in signal transduction and perform the corresponding functions. Our results in Figure 2E similarly reveal a translocation trend of AnxA6 from the cytoplasm to the plasma membrane under exposure to 1 h of 10 dyn/cm2 FSS. AnxA6, as an essential component of extracellular vesicles, is overexpressed in the zones of hypertrophic and terminally differentiated growth plate chondrocytes [38]. It plays a vital role in extracellular mineralization and is highly related to the development of osteoporosis. AnxA6 contributes to the accumulation of Ca2+ and stabilizes Ca2+ binding to phosphatidylserine (PS), leading to a favorable environment for apatite formation [39]. Given the effect of FSS on AnxA6 expression and osteogenic differentiation, we demonstrated here that the genetic knockdown of AnxA6 significantly inhibited the osteoblast differentiation induced by FSS loading (Figure 3). However, in the shAnxA6 group, the expression of osteogenic differentiation markers under FSS was still higher than under static conditions. A possible reason is that, besides AnxA6, various other mechanoreceptors, including ion channels, integrins, connexins, G-protein coupled receptors, primary cilia, and the cytoskeleton, also exist in osteoblasts [1,40-42] and participate in mechanical-stimulus-regulated osteogenesis. Accordingly, AnxA6 is involved in FSS-induced osteogenic differentiation and may serve as a potential mechanosensitive protein that can directly or synergistically respond to mechanical stimulation. As reported previously, annexins are involved in multiple biological processes, including autophagy [18], epithelial-mesenchymal transition (EMT) [43], and extracellular matrix formation [44]. As an evolutionarily conserved biological mechanism, autophagy is closely associated with the metabolism, survival, and differentiation of osteoblasts [45,46]. It can promote the development of bones by stopping the calcification of endplate chondrocytes [47]. According to Liu et al., the inhibition of autophagy impairs osteoblast development and results in osteopenia in mice [48]. Conversely, activating osteocyte autophagy with rapamycin could reduce the severity of age-related bone changes in the trabecular bones of old male rats [49]. Zhang et al. found that FSS-induced autophagy in bone cells could regulate cell survival via ATP metabolism [50]. Our previous studies also revealed that FSS could promote cell migration and invasion by activating autophagy [23,29,51,52]. In this study, we showed that FSS could stimulate autophagy, as indicated by the enhanced expression of autophagic markers and the formation of autophagosomes, in full agreement with previous studies (Figures 4 and 5). However, it is unknown how AnxA6 participates in FSS-induced autophagy during osteogenic differentiation, even though the contribution of AnxA6 to autophagy during cancer progression has been confirmed [53,54].
According to our results, the genetic knockdown of AnxA6 suppressed the expression of autophagic markers even under FSS loading conditions (Figure 5). Given that autophagy plays a central role in the coordination of bone development, we further confirmed the contribution of autophagy to osteoblast mineralization by using an autophagic inhibitor and activator (Figure 6A-D). Autophagic activation serves as an essential part of the cyclic-mechanical-stretching-promoted osteoblast differentiation of BMSCs [55]. We demonstrated that the osteogenic differentiation inhibited by AnxA6 knockdown recovered substantially upon pretreatment with the autophagic activator RAPA (Figure 6E), which is beneficial to ECM mineralization and bone formation. In conclusion, our findings indicate that AnxA6-regulated autophagy plays an important role in FSS-induced osteogenic differentiation, as shown schematically in Figure 7. We found that 1 h of 10 dyn/cm² FSS provided an adequate amount of AnxA6 to initiate autophagy and increased ALP and Col I expression, leading to osteogenic differentiation. The knockdown of AnxA6 strongly inhibited autophagy and subsequently decreased osteogenic differentiation, while the restoration of the autophagic flux by an autophagic activator recovered the effect of FSS on osteogenic differentiation. All these results provide novel insights into the mechanical mechanisms underlying FSS-induced osteogenesis. However, one limitation of our study is that it is not clear whether AnxA6 responds to FSS directly or is indirectly modulated by other mechanosensors. In addition, although the role of autophagy in AnxA6-mediated osteogenic differentiation was identified in this study, the underlying molecular mechanisms are still obscure. Future studies may help in providing novel strategies for the prevention and treatment of osteoporosis.
Plasmid and Transfection The shRNA plasmid targeting AnxA6 with the sequence 5′-TGTGTGTGCAGCCAATGATTTCTCGAGAAATCATTGGCTGCACACACA-3′ was constructed by Tsingke Biotechnology Co., Ltd. (Beijing, China) (shAnxA6 group). The shRNA plasmid with the sequence 5′-CCGGGGTTCTCCGAACGTGTCACGTCTCGAGACGTGACACGTTCGGAGAACCTTTTTGAATT-3′ served as the negative control (shCtrl group). MC3T3-E1 cells were transfected with the shAnxA6 and shCtrl plasmids using Lipofectamine 8000 (Beyotime, Shanghai, China) according to the manufacturer's protocol. After 72 h of transfection, puromycin (5 µg/mL) was used to select stably transfected shAnxA6 cells, and the final transfection efficiency was evaluated with Western blot and qRT-PCR assays. FSS Loading The FSS loading system was adapted from our previously described procedures [22,51]. Briefly, MC3T3-E1 cells were seeded onto a slide (24 × 75 mm) at a density of 5 × 10⁵ cells/mL and cultured until confluency. The cells were then immediately exposed to laminar FSS of different magnitudes (5, 10, and 20 dyn/cm²) for 1 h, or to 10 dyn/cm² FSS for different durations (0.5, 1, 2, and 4 h), using a parallel flow chamber maintained in a cell incubator with 5% CO₂ at 37 °C. MC3T3-E1 cells on slides without FSS treatment served as the control group. For autophagic induction, MC3T3-E1 cells were preconditioned with 200 nM rapamycin for 12 h before FSS loading. Alizarin Red S Staining The shCtrl or shAnxA6 MC3T3-E1 cells at a density of 5 × 10⁵ cells/mL were seeded onto a 6-well plate, cultured with osteogenic medium for 14 days, and then fixed with 95% alcohol for 15 min. Then, the cells were rinsed with ddH₂O three times and stained with 2% Alizarin Red solution (pH 4.2, Beyotime, Shanghai, China) for 30 min at room temperature. The excess dye was removed by rinsing with ddH₂O at least five times, and the mineralization nodules were observed with an inverted microscope (CK2, Olympus, Shinjuku, Tokyo, Japan). Each group was performed in triplicate. Alkaline Phosphatase (ALP) Staining The shCtrl or shAnxA6 MC3T3-E1 cells at a density of 5 × 10⁵ cells/mL were seeded onto a 6-well plate or a slide, cultured with osteogenic medium for 5 or 7 days, and then fixed with 4% paraformaldehyde for 15 min. After washing three times with ddH₂O, the cells were treated for 60 min with a freshly prepared alkaline phosphatase staining working solution prepared according to the manufacturer's instructions (C3206, Beyotime, Shanghai, China). The staining solution was then removed, and the cells were rinsed with ddH₂O five times to terminate the reaction. ALP-positive cells were observed using an inverted microscope (CK2, Olympus, Shinjuku, Tokyo, Japan). Alkaline Phosphatase Activity Assay The shCtrl or shAnxA6 MC3T3-E1 cells were seeded onto slides or 6-well plates at a density of 5 × 10⁵ cells/mL, cultured with osteogenic medium for 5 days [56,57], and then lysed with a cell lysis solution (free of phosphatase inhibitors) for 30 min. The lysate was collected and centrifuged at 14,000× g for 10 min, and the ALP activity of the supernatant was determined with an Alkaline Phosphatase Assay Kit (P0321S, Beyotime, Shanghai, China) according to the manufacturer's instructions. A microplate reader was then used to detect the absorbance of each group at 405 nm. Each group was performed in triplicate.
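For orientation, the wall shear stress in a parallel-plate flow chamber of this kind is usually related to the flow rate by the standard laminar-flow expression tau = 6*mu*Q/(w*h^2). The sketch below converts the shear stresses used here into flow rates; the chamber width, gap height, and medium viscosity are illustrative assumptions, not the dimensions of the chamber used in this study.

```python
# Minimal sketch: flow rate needed for a target wall shear stress in a
# parallel-plate flow chamber, tau = 6 * mu * Q / (w * h^2) (laminar flow).
# Chamber dimensions and viscosity below are illustrative assumptions only.

def flow_rate_for_shear(tau_dyn_cm2: float,
                        mu_poise: float = 0.0078,    # approx. culture medium viscosity (assumed), P
                        width_cm: float = 2.4,       # chamber width (assumed), cm
                        height_cm: float = 0.02) -> float:  # gap height (assumed), cm
    """Return the volumetric flow rate Q (mL/s) giving the target shear stress."""
    # Rearranged: Q = tau * w * h^2 / (6 * mu); 1 dyn/cm^2 = 1 g/(cm*s^2)
    return tau_dyn_cm2 * width_cm * height_cm ** 2 / (6.0 * mu_poise)

for tau in (5, 10, 20):  # dyn/cm^2, as in the FSS loading protocol
    q = flow_rate_for_shear(tau)
    print(f"{tau:>2} dyn/cm^2 -> Q = {q:.3f} mL/s ({q * 60:.1f} mL/min)")
```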
Western Blot Analysis A kit was used to determine the protein concentration (P0010, Beyotime, Shanghai, China). Equal amounts of protein (20 µg) from each sample were electrophoresed on 10% or 12% sodium dodecyl sulfate-polyacrylamide gels, transferred onto polyvinylidene difluoride (PVDF) membranes, and blocked with 5% skim milk for 2 h at room temperature. The membranes were then incubated with primary antibodies (1:1000, diluted in 5% skim milk) overnight on a roller bank at 4 °C and treated with the corresponding HRP-conjugated secondary antibodies for 1 h at room temperature. The results were visualized via enhanced chemiluminescence and acquired with a Molecular Imager® ChemiDoc™ XRS+ system with Image Lab™ 3.0 software. The ratios of the protein band intensities to that of glyceraldehyde-3-phosphate dehydrogenase (GAPDH) were calculated using Image Lab 3.0 software. Quantitative Real-Time PCR (qRT-PCR) The total RNA of each sample was extracted using an RNAeasy mini Kit (RE-03111, FOREGENE, Chengdu, China) and quantified using a NanoDrop 2000 (Thermo Fisher Scientific, Waltham, MA, USA). Then, 1 µg of total RNA was reverse transcribed into cDNA using the Evo M-MLV RT Mix Kit (AG11728, Accurate, Changsha, China) following the manufacturer's instructions. SYBR® Premix Ex Taq™ II (TaKaRa, Kusatsu, Shiga, Japan) was used to perform qRT-PCR analysis with a Bio-Rad real-time PCR system (CFX96, Bio-Rad, Hercules, CA, USA). Each PCR reaction included 0.4 µM of forward and reverse primers, 100 ng of cDNA, and 12.5 µL of 2× SYBR Premix Ex Taq™ II in a total reaction volume of 25 µL. The qPCR program included an initial denaturation step at 95 °C for 30 s, followed by 40 cycles of denaturation at 95 °C for 5 s and annealing at 60 °C for 30 s, with a final melting curve analysis. The mRNA expression was normalized to GAPDH and calculated using the 2^−ΔΔCt method. Each experiment was performed in triplicate. The primer sequences are listed in Table 2. Table 2. Primer sequences for qRT-PCR detection (Name, Forward, Reverse). Immunofluorescence Staining A total of 1 × 10⁵ cells were seeded onto a coverslip placed in 24-well plates and exposed to different FSS loadings. After the different treatments, the cells were washed three times with PBS and fixed with 4% paraformaldehyde for 10 min, followed by blocking with 5% Bovine Serum Albumin (BSA) for 30 min at room temperature. Then, the samples were incubated with primary antibodies (1:500, diluted in 5% BSA) overnight at 4 °C and stained with the corresponding fluorochrome-labeled secondary antibodies (1:1000, diluted in 1% BSA) for 60 min. For additional F-actin staining, FITC-phalloidin (1:200, CA1620, Solarbio, Beijing, China) was co-incubated with the cells for 30 min. The cells were subsequently stained with 4′,6-diamidino-2-phenylindole (DAPI, 1:800) for 10 min at room temperature and rinsed five times with PBS to remove excess staining solution. The fluorescent images were captured using a Zeiss confocal microscope (LSM710, Oberkochen, Germany). AdPlus-mCherry-GFP-LC3B Infection AdPlus-mCherry-GFP-LC3B (C3012, Beyotime, Shanghai, China), an adenovirus expressing the mCherry-GFP-LC3B fusion protein, was used to monitor the autophagic flux in the targeted cells. The shCtrl or shAnxA6 MC3T3-E1 cells were seeded onto a coverslip placed in 24-well plates at a density of 1 × 10⁵ cells/mL and incubated with complete DMEM medium to 50% confluence.
Immediately following this, the cells were infected for 24 h with the AdPlus-mCherry-GFP-LC3B adenovirus using a pre-configured viral solution (MOI = 20). At the end of the treatment, the virus-containing medium was replaced with fresh complete DMEM medium for another 48 h of culture. After that, the transduced cells were exposed to 10 dyn/cm² FSS for 1 h, then rinsed three times with PBS and fixed with 4% paraformaldehyde for 15 min. Lastly, the nuclei were stained with DAPI (1:800) for 10 min and imaged with a Zeiss confocal microscope (LSM710, Oberkochen, Germany). Statistical Analysis Statistical analysis was performed using GraphPad Prism 9 software (GraphPad Software, San Diego, CA, USA). The data obtained in this study are presented as the mean ± standard error of the mean (SEM). Two groups were compared using the two-tailed Student's t-test. Multiple groups were compared using one-way ANOVA followed by Tukey's test. p < 0.05 was considered statistically significant. Author Contributions: T.P. and G.S. contributed equally to this manuscript. Data curation, J.Y. and W.G.; writing-original draft preparation, T.P. and G.S.; writing-review and editing, Y.S. and X.L.; visualization, X.Y., Y.Z. and J.R.; project administration, X.L. All authors have read and agreed to the published version of the manuscript.
2022-12-14T16:18:02.253Z
2022-12-01T00:00:00.000
{ "year": 2022, "sha1": "bd29ad4d9522bb39d567cd41e85387dd82cc0acc", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/23/24/15702/pdf?version=1670741445", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fad42636841220fcba668233c59e397508ba3e85", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
265308911
pes2o/s2orc
v3-fos-license
nach0: multimodal natural and chemical languages foundation model Large Language Models (LLMs) have substantially driven scientific progress in various domains, and many papers have demonstrated their ability to tackle complex problems with creative solutions. Our paper introduces a new foundation model, nach0, capable of solving various chemical and biological tasks: biomedical question answering, named entity recognition, molecular generation, molecular synthesis, attributes prediction, and others. nach0 is a multi-domain and multi-task encoder–decoder LLM pre-trained on unlabeled text from scientific literature, patents, and molecule strings to incorporate a range of chemical and linguistic knowledge. We employed instruction tuning, where specific task-related instructions are utilized to fine-tune nach0 for the final set of tasks. To train nach0 effectively, we leverage the NeMo framework, enabling efficient parallel optimization of both base and large model versions. Extensive experiments demonstrate that our model outperforms state-of-the-art baselines on single-domain and cross-domain tasks. Furthermore, it can generate high-quality outputs in molecular and textual formats, showcasing its effectiveness in multi-domain setups. Introduction Large-scale pre-training of language models (LMs), such as BERT 1, T5 2, BART 3 and GPT 4, on vast amounts of text data has yielded impressive results on a variety of natural language processing (NLP) tasks. These models' success can be attributed to their ability to learn deeply contextualized representations of input tokens through self-supervision at scale 1. Recently, foundation models have built upon the concept of self-supervised learning by pre-training a single model over unlabeled data that can be easily adapted to any task 5. The application of neural network architectures and LMs has significantly advanced the field of chemistry, particularly in domain-specific information retrieval, drug development, and clinical trial design [6-15]. These developments include neural molecular fingerprinting, generative approaches to small molecule design [11-13], prediction of pharmacological properties, and drug repurposing 13,14. The clinical development of a drug is a time- and money-consuming process that typically requires several years and a billion-dollar budget to progress from phase 1 clinical trials to the patients 16. The use of state-of-the-art neural network approaches and language models has the potential to facilitate the drug development process considerably.
A number of LMs have been proposed for the biomedical domain, utilizing a variety of model families: for instance, researchers have developed BioBERT 17, based on BERT with 110 million parameters, and SciFive, based on T5-base and T5-large with 220 and 770 million parameters respectively, using biomedical literature from PubMed. NVIDIA has also developed BioMegatron models in the biomedical domain using a more extensive set of PubMed-derived free text, ranging from 345 million to 1.2 billion parameters. However, the datasets used in these models cover mainly biomedical natural language texts and contain biomedical named entities like drug, gene, and cell line names, but omit important chemical structure descriptions in SMILES format. Enriching biomedical datasets with chemical structures is an important and challenging task. Recently, LMs such as Galactica 18, based on the Transformer architecture in a decoder-only setup 19 with 120 billion parameters in its largest configuration, and MolT5 20, based on T5-base and T5-large, were proposed to address this limitation. Both models were pre-trained with natural language and chemical data, creating a shared representation space, yet were not fine-tuned on a diverse set of chemical tasks with instruction tuning in a multi-task fashion. Fig. 1 A Venn diagram that shows the relationships between fine-tuning data used in our study and related work. It is important to highlight that the majority of models typically treat the chemical space and the semantic space in the natural language domain independently. Novel cross-domain datasets such as Mol-Instructions 25 and MolT5 data 20 have asked whether it is possible to unify representations of natural language and molecules for NLP and molecule generation tasks within a single model. In this work, we seek to answer this question. The Venn diagram in Fig. 1 provides a summary of the existing LMs. Furthermore, simple language models trained with molecular structures can reproduce complex molecular distributions 21, and even the 3D structure of molecules, materials and proteins using a GPT framework 22. In this paper, we propose a unified encoder-decoder transformer named nach0 for natural language, chemical generalization and cross-domain tasks. We pre-train on both natural language and chemical data using self-supervised learning and employ nach0 as the foundation model for a wide range of downstream tasks (Fig. 2). The tasks include well-known NLP problems such as information extraction, question answering, and textual entailment, as well as molecular structure and description generation, chemical property prediction, and reaction prediction. Inspired by Raffel et al. 2, Chung et al. 23, we follow the intuition that tasks can be described via natural language instructions, such as "What reactants could be used to synthesize O=C(NC1CCN(Cc2ccccc2)CC1)c1c(Cl)cccc1[N+](=O)[O-]" or "describe a molecule C1=CC(=CC=C1C[C@H](C(=O)[O-])N)O". Prompt design and instruction tuning are employed for model training using NVIDIA's Neural Modules (NeMo) framework 24, which provides scientists with a way to train and deploy LLMs using NVIDIA GPUs. Extensive evaluation in both in-domain and cross-domain setups demonstrates that nach0 is a powerful tool for the chemistry domain. Contribution Our contributions are three-fold: 1. We introduce a biochemical foundation model, nach0, and pre-train base and large versions of nach0 on molecular structures and textual data from scientific articles and patents. 2.
We fine-tune nach0 in a supervised and multi-task manner, using a combination of diverse tasks specified through natural language prompts. 3. Through extensive experiments on diverse datasets, focusing on both single-domain and cross-domain tasks, we show that our model achieves competitive results with state-of-the-art encoder-decoder models specialized for a single domain. Framework nach0 The aim of nach0 is to create a unified transformer capable of performing natural language, chemical generalization, and translation tasks simultaneously. Fig. 3 shows a diagram of our framework with several input/output examples. The model's representations are learned from extensive and diverse chemical SMILES data and related textual data from scientific articles and patents. Similar to Raffel et al. 2, Chung et al. 23, nach0 follows an encoder-decoder architecture that takes textual input and generates target responses. To train the model on a mixture of datasets partitioned into different tasks, we formulate all the tasks in a "text-to-text" format, where the model is given some text as a context or condition and produces the output in text format. Each dataset is associated with multiple prompt templates used to format dataset instances into input and target pairs. In particular, we train nach0 on three types of tasks (Fig. 2): • NLP tasks: named entity recognition (NER), PICO extraction, textual entailment, relation extraction, sentence similarity, document classification, question answering (yes/no, multi-choice, open); • chemistry-related (CHEM) tasks: molecular property prediction, molecular generation, forward reaction prediction, reagent prediction, retrosynthesis; • cross-domain (NLP↔CHEM) tasks: description-guided molecule design, molecular description generation. Fig. 3 shows our model and prompt format. Details on train/test splits are presented in Table 1. Datasets' descriptions Given the presence of textual and molecular modalities, the choice of tokenization technique is a crucial aspect of dataset design. One way to represent molecular structures is the simplified molecular-input line-entry system (SMILES) string 41. SMILES describes a molecule as a sequence of atoms in depth-first traversal order and uses special symbols to depict branching, cycle opening/closing, bond types, and stereochemistry. We use the following tokenization: • Textual domain: sub-word tokens adopted from FLAN-T5 23 for natural language sequences; • Tokenization for SMILES: we annotate each SMILES token with special symbols, <sm_{token}>, and extend the vocabulary with such tokens. Model and Training Configuration In our study, we predominantly employ a model featuring the default T5 architecture, which is derived from Raffel et al. 2. For both models, we conduct pre-training with a language modeling (LM) objective and subsequent fine-tuning. The base models were trained using NVIDIA A4000 and A5000 GPUs, while the larger models were trained on NVIDIA's DGX cloud platform. Both the pre-training and fine-tuning stages were executed using the following hyperparameters: a batch size of 1024, a learning rate of 1e-4, and a weight decay of 0.01. The pre-training stage lasted for a single epoch, whereas the fine-tuning stage lasted for 10 epochs.
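To make the SMILES tokenization above concrete, the following sketch splits a SMILES string into tokens with a regular expression and wraps each token in the <sm_{token}> form. The regex and helper function are assumptions for illustration; the paper does not specify the exact tokenizer implementation.

```python
import re

# Commonly used SMILES tokenization pattern (bracket atoms, two-letter elements,
# ring-closure digits, bonds, branches). This regex is an assumption, not the
# exact pattern used for nach0.
SMILES_TOKEN_RE = re.compile(
    r"(\[[^\]]+\]|Br|Cl|Si|Se|se|@@|%\d{2}|[B-IK-Zb-ik-z0-9=#\$\-\+\(\)\\/\.:~\*])"
)

def to_nach0_tokens(smiles: str) -> list[str]:
    """Wrap every SMILES token in the <sm_...> vocabulary form."""
    tokens = SMILES_TOKEN_RE.findall(smiles)
    assert "".join(tokens) == smiles, f"tokenization lost characters in {smiles!r}"
    return [f"<sm_{tok}>" for tok in tokens]

print(to_nach0_tokens("CC(=O)Oc1ccccc1C(=O)O"))
# ['<sm_C>', '<sm_C>', '<sm_(>', '<sm_=>', '<sm_O>', '<sm_)>', '<sm_O>', ...]
```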
To execute the pre-training phase of our model with the LM objective, we leveraged two textual data sources in addition to one chemical data source. These textual data sources encompassed abstract texts extracted from PubMed and patent descriptions derived from USPTO. All the textual data underwent a filtering process, eliminating documents that were not related to the chemistry domain. Consequently, the number of documents was curtailed to 13M for abstracts and 119K for patents. The chemical data component was sourced from the ZINC dataset, encompassing approximately 100 million documents. In aggregate, the textual data contained 355M tokens for abstracts and 2.9B tokens for patents, whereas the chemical data encompassed 4.7B tokens. The entirety of the investigations in this paper was conducted using the multi-task model, with the exception of the ablation part. Each multi-task model underwent fine-tuning by leveraging the entire spectrum of available datasets, encompassing all domains, as elucidated in Sec. 1. For data mixing and balancing, we followed the "examples-proportional mixing strategy" from Raffel et al. 2. The outcomes of these models are explicitly detailed in Sec. 3. Conversely, in the context of ablation studies, fine-tuning was specifically performed utilizing only those datasets relevant to the corresponding domain, as detailed in the discussion. The training was performed using the NVIDIA NeMo Toolkit 42, which consists of pre-built modules for end-to-end workflows in Automatic Speech Recognition (ASR), NLP, and Text-to-Speech (TTS) synthesis. NeMo uses PyTorch Lightning for optimized multi-node/multi-GPU (MNMG) mixed-precision training. In this work, we leveraged the NeMo NLP collection to train and evaluate our LMs. We trained our model on a variety of tasks such as information extraction, question answering, molecular property prediction, and description-guided molecule design using the NeMo toolkit. A custom connector was added to extend the vocabulary size of the pre-trained model when continuing the training of the model with chemistry and biomedical datasets. The original vocabulary was extended to match the target vocabulary, which was larger. The corresponding embedding matrix was initialized with the learned embeddings of the original model. The extra tokens were initialized by re-using the first embeddings. Data was parsed using Mem-Map Datasets from the NeMo toolkit to allow efficient data handling. The mem-map dataset relies on memory mapping directly to files, allowing the handling of very large datasets with a small memory footprint and optimal reading speed. The data was loaded as raw text files, and the tokenization occurred on-the-fly. Pre-fetching of the data mitigated the effects of online tokenization when compared to pre-tokenized data. The model was trained using tensor and pipeline parallelism 43, both of which are model-parallel methods for distributed training and are implemented in the NeMo toolkit for efficient scaling of large language model training.
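The examples-proportional mixing strategy referenced above can be sketched as follows: each dataset is sampled in proportion to its size, clipped at an artificial maximum so that the largest datasets do not dominate the mixture. The cap and dataset sizes below are invented for illustration and do not reflect the actual nach0 training mixture.

```python
import random

def examples_proportional_rates(dataset_sizes: dict[str, int], cap: int = 65536) -> dict[str, float]:
    """Sampling rate for each dataset: proportional to min(size, cap) (T5-style mixing)."""
    clipped = {name: min(n, cap) for name, n in dataset_sizes.items()}
    total = sum(clipped.values())
    return {name: n / total for name, n in clipped.items()}

def sample_batch_tasks(rates: dict[str, float], batch_size: int = 8) -> list[str]:
    """Draw the task for each example in a batch according to the mixing rates."""
    names, weights = zip(*rates.items())
    return random.choices(names, weights=weights, k=batch_size)

# Hypothetical task-group sizes (number of training examples).
sizes = {"ner": 80_000, "qa": 20_000, "property_prediction": 150_000, "retrosynthesis": 1_000_000}
rates = examples_proportional_rates(sizes, cap=100_000)
print({k: round(v, 3) for k, v in rates.items()})
print(sample_batch_tasks(rates))
```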
Use case: End-to-end drug discovery In the first case study, we generate molecular structures against Diabetes mellitus (DM) using just one model, nach0: discover biological targets with potential therapeutic activity, analyze the mechanism of action, generate molecular structures, propose one-step synthesis, and predict molecular properties. In a series of questions, we generate the model's responses using top-p sampling with values from 0.3 to 0.7 in steps of 0.05 and ask an expert chemist to pick the best response (Fig. 4). In total, we generate 200 SMILES with the molecule generation prompt and select one structure, CC(C)(C)NC(=O)CN1CCC(C(=O)Nc2cccc(-c3nc4ccccc4n3Cc3ccccc3)c2)CC1, as the most promising from an expert medicinal chemistry perspective. This semi-automated approach is efficient for discovering novel molecules and assessing their properties. We predict that further iterations of this model will require less supervision, and medicinal chemists will start using it as a side-car for generating and validating ideas. Use case: Chemistry42 generative model Chemistry42 is Insilico Medicine's AI drug discovery platform that efficiently generates novel active molecules using 42 generative models 44. In this experiment, we apply nach0 to one of the published case study setups available on demand at demo.chemistry42.com: Structure-Based Design of Janus Kinase 3 Inhibitors. In Chemistry42, we use the 3LXK crystal structure, a pharmacophore hypothesis, and a set of physicochemical properties to set up the search space for the generative models. All generative models search the chemical space to find the best possible structures. Chemistry42 provides a set of filters and reward modules. The 2D modules comprise various tools, including Medicinal Chemistry Filters (MCFs), Lipinski's Rule of Five (Ro5), and descriptors for drug-likeness, weighted atom-type portion, novelty, and synthetic accessibility (SA) scores. Additionally, Chemistry42 uses the Self-Organizing Maps (SOM) Classifier Module to navigate the generation of molecular structures towards a specific target class in the chemical space. The Structure Morphing module, another integral part of the 2D modules, is utilized to tackle metabolic instability issues. The 3D modules include the ConfGen Module, which is responsible for generating conformational ensembles for each molecular structure. Subsequently, these molecules are ranked based on their intrinsic rigidity using a flexibility assessment tool. The 3D similarity between the generated structures and a reference molecule is evaluated using the 3D-Descriptors Module. The Pharmacophore Module is then used to find any matches with the specified pharmacophore hypothesis. The Shape Similarity Module plays its part in evaluating the 3D shape similarity to a reference molecule. Lastly, the Pocket Module and the Pocket-Ligand Interaction (PLI) modules are used to assess how well the molecules fit the chosen binding site.
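As a rough illustration of the kind of 2D filtering described above (Lipinski's Ro5 together with a drug-likeness score), the sketch below screens generated SMILES with RDKit. It is a generic sketch assuming RDKit is available; it is not the Chemistry42 implementation and omits the platform's MCF, novelty, SA, and 3D modules.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski, QED

def passes_ro5(mol: Chem.Mol) -> bool:
    """Lipinski's Rule of Five: MW <= 500, logP <= 5, HBD <= 5, HBA <= 10."""
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Lipinski.NumHDonors(mol) <= 5
            and Lipinski.NumHAcceptors(mol) <= 10)

def screen(smiles_list: list[str], qed_cutoff: float = 0.5) -> list[str]:
    """Keep valid, Ro5-compliant molecules with a QED drug-likeness score above the cutoff."""
    kept = []
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)           # None for invalid SMILES
        if mol is not None and passes_ro5(mol) and QED.qed(mol) >= qed_cutoff:
            kept.append(Chem.MolToSmiles(mol))  # store in canonical form
    return kept

generated = ["CC(C)(C)NC(=O)CN1CCC(C(=O)Nc2cccc(-c3nc4ccccc4n3Cc3ccccc3)c2)CC1",
             "not_a_smiles"]
print(screen(generated))
```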
In this experiment, we replaced all 42 generative models with nach0 and generated a set of structures using the prompt "Generate a random druglike small inhibitor molecule for the Janus Kinase 3 JAK3 that contains a classic kinase hinge binding motif". Note that nach0 does not have access to the specific crystal structure and other required properties, so the model generated molecules using solely its knowledge about JAK3. Table 2 Comparison between nach0 and Chemistry42 models on JAK3 inhibitor generation (columns: Combinatorial generator, nach0, Chemistry42). nach0 can discover multiple molecules passing all constraints, even though it only uses implicit knowledge about the protein target. The discovery rate (percentage of good molecules among all generated molecules) indicates that our model acts better than a random combinatorial generator when solving this problem. In Tab. 2, we compare generation results using a combinatorial generator 45, Chemistry42 44, and our model. In just 45 minutes (consisting of 15 minutes for generation and 30 minutes for scoring in Chemistry42), our model discovered 8 molecules satisfying all the 2D and 3D requirements; see Ivanenkov et al. 44 for more details on the requirements. All these structures have a hinge binder and properly bind in the active site. While our model can discover multiple molecules satisfying all constraints, the discovered structures are currently worse than those found in 72-hour generations in Chemistry42, since nach0 does not yet learn from reinforcement learning feedback during generation and because it does not have exact knowledge of the experiment setup. In future work, we will expand our model with reinforcement learning capabilities to improve generation quality. Comparison of multi-task models Table 3 compares nach0 base and large models with two existing NLP encoder-decoder models (general-domain FLAN 23 and domain-specific SciFive 46), and a multi-domain encoder-decoder model, MolT5 20. The table contains metrics for each task and model, with the results of the top-performing base model emphasized in bold. First, FLAN base and nach0 base exhibit similar results on NLP tasks on average, each demonstrating superior performance on different tasks. With single-domain models for tasks such as NER or NLI, where molecule information is not required, traditional LMs may indeed provide the best results. However, when it comes to molecular tasks that involve molecular data, nach0 has distinct advantages over similar-scale models due to its specialized architecture and ability to effectively incorporate and process molecule-related information. In particular, nach0 benefits from training on diverse datasets and the proposed tokenization approach, outperforming baselines (including FLAN) with a significant gap on molecular tasks. For regression tasks, nach0 shows the best results on both RMSE and R2 scores. Moreover, in the molecular generation task, nach0 substantially surpasses FLAN on the FCD metric, which assesses the closeness of the generated molecule distribution to the ground truth. Second, as expected, the large nach0 performed best among all the models. In terms of base models, nach0 base achieved the best results on chemical and cross-domain tasks over existing models, confirming that pre-training on two types of data with different tokens can be effective.
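Because property-prediction targets are produced as text in this setup, regression evaluation first parses the decoded strings back into numbers and then computes RMSE and R2. A minimal sketch with invented predictions and references; it is not the evaluation code used in the paper.

```python
import math
from sklearn.metrics import mean_squared_error, r2_score

def parse_float(generated: str) -> float | None:
    """Parse a model's decoded answer (e.g. '-0.72') into a float; None if unparseable."""
    try:
        return float(generated.strip())
    except ValueError:
        return None

# Hypothetical decoded outputs and reference values for a regression task (e.g. ESOL).
decoded = ["-0.72", "1.35", "not a number", "2.10"]
targets = [-0.77, 1.20, 0.50, 2.00]

pairs = [(p, t) for p, t in zip((parse_float(d) for d in decoded), targets) if p is not None]
preds, refs = zip(*pairs)
rmse = math.sqrt(mean_squared_error(refs, preds))
print(f"RMSE={rmse:.3f}  R2={r2_score(refs, preds):.3f}  parsed {len(pairs)}/{len(decoded)}")
```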
Furthermore, we conducted zero-shot experiments involving nach0, FLAN, and SciFive (all base versions) in an information retrieval task. The objective was to detect whether an abstract is relevant to a given disease or gene query. The dataset used for these experiments, along with its specific details, can be found in Tutubalina et al. 47. In these experiments, we employed the following prompt: "Given the following passage, answer the question: Is the following text related to the synonym? Passage: text". To evaluate the models' performance, we utilized precision (P), recall (R), and F-measure (F1). Our findings indicate that nach0 achieved an F1 score of 82.24% (with a recall of 96.32% and precision of 71.76%), while FLAN and SciFive achieved F1 scores of 82.24% and 77.20%, respectively. However, it is worth noting that the supervised BERT-based pipeline from Tutubalina et al. 47 achieved a higher F1 score of 88.81%. Based on these results, we can conclude that these models exhibit the ability to perform slightly different NLP tasks in a zero-shot setup. However, they still fall significantly behind supervised models in terms of performance. Ablations To examine the impact of cross-domain data on multi-task fine-tuning, we conducted training on mono-domain data. The results of four pre-trained checkpoints (SciFive, FLAN, MolT5, nach0) fine-tuned exclusively on NLP data are presented in Supplementary Information, Sec. 1. When considering average performance on the NLP group, nach0, SciFive, and FLAN exhibit similar results, while MolT5 achieves lower scores compared to the other models. Next, we investigate how combining chemical task groups affects joint model performance in comparison with individual task groups. The results of this ablation study can be found in Tab. 4 and show that nach0 benefits from combining chemical task groups: the model trained on the whole set of chemical data without NLP outperforms, over the total set of metrics, the models trained on distinct task groups. It is important to mention that although the joint model shows worse metrics than the model trained only on molecular generation and cross-domain tasks, it works better in practice since it does not overfit the training data; here, the novelty metric takes precedence over all other molecule generation metrics. Also, the experiments show that the special chemical tokens and pre-training on both natural language and chemical data improve model quality: nach0 outperforms the MolT5 baseline or shows equal metrics on each chemical task group. Some MolT5 metrics on the molecule generation task are missing because it produces invalid SMILES sequences. Comparison with ChatGPT Recently, a comprehensive benchmark for biomedical text generation and mining problems with ChatGPT was conducted, revealing its poor performance on several biomedical NLP benchmark datasets 48,49. Chen et al. 49 specifically evaluated ChatGPT on the BLURB benchmark 50, which encompasses BC5-chem, BC5-disease, NCBI-disease, BC2GM, JNLPBA, EBM-PICO, ChemProt, DDI, GAD, BIOSSES, HoC, PubMedQA, and BioASQ. In particular, ChatGPT got an average BLURB score of 48.27 on NER, while fine-tuned BERT achieved 86.27. For more details on evaluation scores, please refer to Chen et al.
49. In our evaluation setup, we focus on three specific datasets: EBM-PICO, MedMCQA-Open, and molecular description generation (Mol-Instructions). The inclusion of the EBM-PICO dataset was driven by its practical importance. This dataset involves the task of identifying and extracting specific fragments of text related to the Population/Patient/Problem (P), Intervention (I), Comparator (C), and Outcome (O) elements from unstructured biomedical texts, such as research articles and clinical trial reports. It is worth noting that the clinical trial domain holds particular significance for inClinico, a transformer-based artificial intelligence software platform designed to predict the outcome of Phase II clinical trials 10. The molecular generation task is relevant to the Chemistry42 platform 44. To evaluate the zero-shot performance, we had to limit the evaluation to a subset of 2,000 samples from the test set for each of the three datasets, considering the computational constraints of ChatGPT. We utilized the GPT-3.5-turbo model through the OpenAI API and the multi-task nach0 base for evaluation purposes. In the case of the PICO dataset, ChatGPT achieved a word-level F1 score of 64.43%, comparable to the results obtained by the fine-tuned nach0 base on this subset (F1 score of 67.60%). For MedMCQA-Open, ChatGPT achieved a BLEU-2 score of 1.68%, while the fine-tuned nach0 base attained a BLEU-2 score of 6.30%. In the molecular description generation task, ChatGPT achieved a BLEU-2 score of 2.23%, whereas the fine-tuned nach0 base excelled with a BLEU-2 score of 42.80%. Based on our preliminary findings, it is evident that utilizing ChatGPT directly leads to subpar performance compared to models trained specifically on the domain-specific datasets, as was done for nach0. Discussion In this study, we pre-trained and fine-tuned T5 models, which have an encoder-decoder architecture. Nevertheless, a broad range of model families, including T5, BERT-based BioMegatron 51, decoder-only PaLM 52 and GPT 4, exist. To determine the most suitable architecture for pre-training and fine-tuning on chemistry-related data, it may be necessary to evaluate these alternatives. We suggest this as a potential topic for future research. There have been several efforts to train large language models (LLMs) on biomedical corpora, particularly on PubMed. Notable examples include BioGPT (347M and 1.5B) 53, PubMedGPT (2.7B) 54, and Galactica (120B) 18. Through our experiments with scaling from a base model (250M) to a large model (780M), we demonstrated the benefits of scale on several datasets. Based on our findings, we can conclude that scaling can further enhance the chemical capabilities of models, particularly in terms of generation and reasoning skills. Key LLM capabilities for chemistry Although our LM was able to reach state-of-the-art performance on several chemistry-related benchmarks, our human evaluations clearly suggested that these models are not at the level of an expert chemist. In order to bridge this gap, several new LLM capabilities need to be researched and developed, including (i) knowledge alignment between textual and chemical sources as well as domain-specific knowledge graphs; (ii) the ability to perform chemical reasoning and provide explanations for their predictions; (iii) the ability to learn from and adapt to feedback from human experts; and (iv) the ability to generate novel chemical reactions and materials.
Molecular representations One limitation of our LM is its focus on string representations of molecules, specifically the SMILES notation. Although SMILES is a widely used notation for representing molecules, it provides only 2D information about the molecule, missing the 3D geometry and spatial arrangement of atoms and bonds. This can result in inaccuracies in predicting molecular properties and interactions. To address these limitations, it would be beneficial to incorporate additional modalities of molecules, such as molecular graphs in 2D or 3D representations, in the training of the language model. Another significant drawback of the SMILES format is the absence of a one-to-one correspondence between molecules and SMILES strings. Typically, a molecule can have multiple SMILES representations that differ from each other due to factors such as the starting atom, molecular graph traversal, and kekulization. In practice, SMILES strings are often converted to a canonical form using an unambiguous algorithm. A molecular representation called SELFIES 55,56 was defined from scratch to be attractive as a sequential representation for molecules. All random SELFIES strings are valid molecular representations. SELFIES was extended to treat molecular groups as well 57. As SELFIES has been repeatedly shown to have advantages over other representations in the context of generative models, exploring its use as the main representation for a language model is a potential future direction. Prompt design Our language model has a limitation in that it heavily relies on the quality and specificity of the prompts, as well as on potential biases in both the training data and the prompts themselves. To enhance the performance of the model, incorporating domain-specific and information-rich prompts is essential. One potential approach to achieving this is by leveraging the knowledge of domain experts to design effective biomedical prompts. Yet, over-reliance on domain-specific prompts may lead to a lack of diversity in the model's responses, which can limit its usefulness. Chemical diversity Mol-Instructions includes cross-domain datasets that consist of compounds and their corresponding descriptions collected from PubChem. PubChem is a publicly available database administered by the National Center for Biotechnology Information (NCBI). It is important to note that the datasets primarily encompass current drugs and known chemical probes, representing only a fraction of the vast predicted chemical space. Furthermore, these datasets do not encompass testing on novel chemical diversity distinct from molecules documented in the literature.
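The canonicalization and SELFIES points made under Molecular representations above can be illustrated with RDKit and the selfies package: two different SMILES strings for the same molecule collapse to one canonical form, and SELFIES round-trips through a representation in which every string is a valid molecule. This is a small sketch assuming both libraries are installed; it is not part of the nach0 pipeline.

```python
from rdkit import Chem
import selfies as sf

# Two superficially different SMILES for toluene.
a, b = "Cc1ccccc1", "c1ccccc1C"
canon = lambda s: Chem.MolToSmiles(Chem.MolFromSmiles(s))
print(canon(a) == canon(b))          # True: both map to the same canonical SMILES

# SELFIES: every string decodes to a valid molecule, which helps generative models.
s = sf.encoder("Cc1ccccc1")          # SMILES -> SELFIES
print(s)                             # e.g. '[C][C][=C][C][=C][C][=C][Ring1][=Branch1]'
print(sf.decoder(s))                 # SELFIES -> SMILES
```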
Based on our findings, we foresee several promising directions for future research. One direction could involve incorporating additional modalities, such as protein sequences, which would require adding special tokens to the model similar to SMILES. This task could be easily achieved with Group SELFIES. New modalities require collecting diverse tasks with natural language prompts for fine-tuning. A second direction involves extending the NLP datasets and conducting zero-shot evaluations to assess the reasoning and generalization capabilities of nach0. Finally, exploring the fusion of information from textual sequences and relevant knowledge graphs as input in a self-supervised approach remains an area to be explored. NLP Ablation To examine the impact of cross-domain data on multi-task fine-tuning, we conducted training on mono-domain data. The results of four pre-trained checkpoints fine-tuned exclusively on NLP data are presented in Supplementary Information, Tab. 5. Several noteworthy observations can be made based on these findings. Firstly, when considering average performance, nach0, SciFive, and FLAN exhibit similar results. However, each model demonstrates superior performance on different tasks. FLAN, being a general-domain model, outperforms the others in textual entailment, binary QA, and sentence similarity. On the other hand, the domain-specific SciFive shows the best results in NER, while nach0 does so in relation extraction, classification, and multi-choice QA. Secondly, MolT5 achieves lower scores compared to the other models. This can be related to its pre-training strategy, where molecules and natural language texts share the same tokens in the semantic space. In contrast, nach0 utilizes specialized tokenization for molecular data, which does not significantly impact overall performance on NLP tasks compared to SciFive and FLAN. Chemistry: Tasks and Datasets We have integrated several chemical-domain tasks from widely used benchmarks and datasets. These cover distribution matching, molecular property prediction, reaction prediction, and related problems. Where possible, we use the provided standard train/validation/test split procedures; otherwise, we employ a random data split. We chose this data preparation strategy to enable comparison with baseline models; however, we do not guarantee that chemical objects with similar structures cannot be found in the different subsets. MOSES 45 is a benchmarking platform that provides a large dataset and a set of metrics to compare generative models on an unconditional molecular generation task. The dataset provided by MOSES contains almost 2 million samples filtered by MCF, PAINS, and additional rules. The metric set estimates the quality of the generative model from several points of view: the validity of generated structures, molecular distribution matching quality, and the ability of the model to produce novel, diverse molecules. Evaluation metric: The MOSES benchmark provides an established set of metrics for assessing the ability of models to produce unique, diverse, valid molecules similar to the ground-truth distribution. In our work, we adopt several metrics: uniqueness, validity, novelty, internal diversity, similarity to a nearest neighbor (SNN), fragment similarity, scaffold similarity and FCD 58. We generated 30,000 new molecules to compute these metrics. Example on molecular distribution matching: input text with prompt: Generate random molecule from MOSES dataset.
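Several of the MOSES metrics listed above reduce to set operations over canonical SMILES. The sketch below computes validity, uniqueness, and novelty with RDKit; it is a simplified illustration, not the official MOSES implementation, which additionally provides SNN, fragment and scaffold similarity, internal diversity, and FCD.

```python
from rdkit import Chem

def canonical(smiles: str) -> str | None:
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol is not None else None

def basic_generation_metrics(generated: list[str], train: list[str]) -> dict[str, float]:
    """Validity, uniqueness (among valid), and novelty (w.r.t. the training set)."""
    canon_gen = [canonical(s) for s in generated]
    valid = [s for s in canon_gen if s is not None]
    unique = set(valid)
    train_set = {c for c in (canonical(s) for s in train) if c is not None}
    return {
        "validity": len(valid) / len(generated),
        "uniqueness": len(unique) / max(len(valid), 1),
        "novelty": len(unique - train_set) / max(len(unique), 1),
    }

gen = ["CCO", "C1=CC=CC=C1", "c1ccccc1", "not_a_molecule"]
print(basic_generation_metrics(gen, train=["CCO", "CCN"]))
# e.g. {'validity': 0.75, 'uniqueness': 0.666..., 'novelty': 0.5}
```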
Mol-Instructions The recently published Mol-Instructions dataset 25 covers three significant modalities: molecule-oriented instructions, protein-oriented instructions, and biomolecular text instructions. In our study, we specifically focus on the first subset, which is the most relevant and contains chemical tasks. Example on descriptor-guided molecule generation: input text with prompt: Synthesize a molecule that matches the given characteristics: The molecule is the (R)-enantiomer of aceprometazine. It is an enantiomer of a (S)-aceprometazine. output text: CC(=O)c1cc2c(cc1)Sc1ccccc1N2C[C@@H](C)N(C)C. Example on molecular description generation: input text with prompt: What can you tell me about this molecule?: C(=C(Cl)Cl)(Cl)Cl. output text: The molecule is a manufactured chemical that is widely used for dry cleaning of fabrics and for metal-degreasing. It is also used to make other chemicals and is used in some consumer products. Example on forward reaction prediction: input text with prompt: With the provided reactants and reagents, propose a potential product: output text: C#Cc1ccc(C=O)cc1. Example on reagent prediction: input text with prompt: Please propose potential reagents that might have been utilized in the provided chemical reaction: [OH-] Example on retrosynthesis: input text with prompt: Provide a list of potential reactants that may have produced the given product: Cc1ccc(-c2ccccc2N)cc1 output text: Cc1ccc(B(O)O)cc1.Nc1ccccc1I Property Prediction We adopt several binary classification and regression tasks from the MoleculeNet benchmark to assess the model's ability to predict molecular properties. Evaluation metric: Binary classification tasks include the BBBP, HIV, and BACE datasets from MoleculeNet 26 and use balanced accuracy as the main metric. Regression tasks involve the ESOL, FreeSolv and Lipo datasets from MoleculeNet 26 and the QM9 dataset from Mol-Instructions 25, and rely on the R2 metric. In our work, we utilized the code provided by the MoleculeNet benchmark to prepare data splits. NLP: Tasks and Datasets Named entity recognition Named entity recognition (NER) is a fundamental aspect of natural language processing, involving the identification and classification of entities in a given text into predefined categories. In biomedical NER, the focus lies in extracting mentions of diseases, genes, chemicals, and other biologically relevant entity types. To conduct this study, we carefully selected five datasets: • BC2GM 29; • BC5CDR-Disease 27; • BC5CDR-Chemical 27; • JNLPBA 30; • NCBI-Disease 28. BC2GM The BC2GM dataset encompasses an extensive collection of over 20,000 sentences extracted from the MEDLINE database, spanning the years 1991 to 2003. Each document in this dataset is annotated with gene mention spans, amounting to a total of 24,583 mentions. BC5CDR The BioCreative V CDR dataset was specifically designed for named entity recognition tasks involving disease and chemical entity types. It contains 12,850 disease and 15,935 chemical mentions, drawn from 1,500 PubMed articles. JNLPBA The JNLPBA dataset involves gene mention annotations across more than 2,000 PubMed abstracts. The creation of this dataset entailed a meticulous search of the MEDLINE database, using specific MeSH terms such as 'human', 'blood cells', and 'transcription factors'. In total, JNLPBA comprises 59,963 gene mention spans.
NCBI-Disease The NCBI-disease corpus, developed by the National Center for Biotechnology Information (NCBI), constitutes a collection of 793 PubMed abstracts that have undergone meticulous annotation by domain experts. These annotations include disease names and their corresponding concept IDs, sourced from the Medical Subject Headings (MeSH) vocabulary 59. In order to train the neural network in a text-to-text format, we designed five prompts. Each prompt asks the model to highlight the spans corresponding to mentions of a specific entity type. To achieve this, we insert specific tokens before and after the mention of an entity in the text. Evaluation metric: the evaluation of the NER task's quality is performed using the entity-level F-measure. Example: input text with prompt: Please find all instances of diseases in the given text. Each mention should be surrounded by "diso*" and "*diso": Identification of APC2, a homologue of the adenomatous polyposis coli tumor suppressor; output text: Identification of APC2, a homologue of the diso* adenomatous polyposis coli tumour *diso suppressor. Question Answering Question Answering (QA) is an important area of NLP research. The objective of QA is to develop intelligent systems that can understand and accurately answer questions posed in natural language. Within the biomedical domain, QA refers to the specific applications and models designed to address questions related to biomedical and healthcare information. The model is required to understand and respond to questions pertaining to medical knowledge, clinical data, scientific literature, drug information, and other relevant biomedical topics. In this study, we conducted experiments on four biomedical QA datasets: • BioASQ 40; • PubMedQA 39; • MedMCQA 60; • MMLU 61. The first two datasets are employed to evaluate the neural network's ability to answer binary yes/no questions, while the remaining two datasets are used in scenarios that involve multi-choice and open question answering. BioASQ and PubMedQA BioASQ (Biomedical Question Answering) is a widely recognized dataset in the biomedical domain, specifically designed for evaluating question answering systems. Following 50, we restrict the dataset to yes/no questions. We use the official train/dev/test split, where each part contains 670/75/140 questions, respectively. Similar to BioASQ, the PubMedQA dataset also presents questions with a limited number of answers. In contrast to the previous dataset, the answers to the questions in PubMedQA are selected from yes, no, or maybe. We use the original train/dev/test split with 450, 50, and 500 questions, respectively. MedMCQA and MMLU For multiple-choice question answering, we employ the concatenation of the MedMCQA and MMLU datasets from 25, resulting in a total of 12,398 multiple-choice questions. As 25 does not provide train/dev/test partitions, we randomly split the dataset into a ratio of 75:25. To perform open question answering, we adopted a dataset introduced in 25, which comprises 27,574 question-answer pairs. This dataset was curated from the MedMCQA dataset.
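The text-to-text NER format shown in the example above, with each mention wrapped in "diso*" and "*diso" markers, can be produced from standard span annotations in a few lines. The helper below is an illustrative assumption; only the marker convention comes from the text.

```python
def mark_entities(text: str, spans: list[tuple[int, int]], tag: str = "diso") -> str:
    """Insert opening/closing markers around (start, end) character spans."""
    out, prev = [], 0
    for start, end in sorted(spans):
        out.append(text[prev:start])
        out.append(f"{tag}* {text[start:end]} *{tag}")
        prev = end
    out.append(text[prev:])
    return "".join(out)

sentence = ("Identification of APC2, a homologue of the adenomatous polyposis "
            "coli tumour suppressor")
# Character span of the disease mention (hypothetical gold annotation).
spans = [(43, 76)]
print(mark_entities(sentence, spans))
# Identification of APC2, a homologue of the diso* adenomatous polyposis coli tumour *diso suppressor
```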
Evaluation metric: to evaluate the performance of yes/no and multiple-choice question-answering tasks, we utilized the accuracy metric. For open-ended question-answering tasks, we adopted the BLEU-2 metric as our evaluation criterion. Yes/No QA example: input text with prompt: Given a passage: De novo DNA methylation in Arabidopsis thaliana is catalyzed by the methyltransferase DRM2, a homolog of the mammalian de novo methyltransferase DNMT3. Here we describe DNA methyltransferase genes from both Arabidopsis and maize that show a high level of sequence similarity to Dnmt3, suggesting that they encode plant de novo methyltransferases. Relative to all known eukaryotic methyltransferases, these plant proteins contain a novel arrangement of the motifs required for DNA methyltransferase catalytic activity. The N termini of these methyltransferases contain a series of ubiquitin-associated (UBA) domains. BLASTX searches and phylogenetic analysis suggested that five cDNAs belonged to four classes (Dnmt1, Dnmt2, CMT and Dnmt3) of DNA methyltransferase genes, answer the question: Are there any DNMT3 proteins present in plants? output text: Yes. Relation Extraction Relation extraction (RE) is an NLP task that involves identifying and classifying the relationships between entities mentioned in a text. In the biomedical domain, RE refers to the specific application of RE techniques and models to extract and classify relationships between biomedical entities mentioned in text. Biomedical RE focuses on identifying and categorizing the associations between various biomedical entities, including genes, proteins, diseases, drugs, and other molecular entities. For our experiments, we use three corpora: • ChemProt 34; • DDI 35; • GAD 36. ChemProt The ChemProt dataset is a widely used benchmark for the task of chemical-protein RE. The dataset comprises PubMed abstracts that are annotated with chemical-protein interactions, where the chemicals typically represent drug compounds or small molecules, and the proteins denote specific biological targets or enzymes. Each annotated interaction is labeled with the corresponding chemical and protein mentions, along with one of the following types of relationship: upregulator, downregulator, antagonist, agonist, and substrate. The training set of the dataset contains 9,995 relation pairs, and the test set contains 5,744 relation pairs. DDI The DDI (Drug-Drug Interaction) corpus is a dataset designed for the purpose of identifying drug-drug interactions mentioned in biomedical texts. The corpus consists of annotated sentences or text passages that describe interactions between pairs of drugs. Each annotated interaction is labeled with the names of the drugs involved and the specific type of interaction. We employ the train/test split produced in 50, where the training set contains 4,021 relation pairs and the test set contains 979 relation pairs. GAD The GAD dataset is a comprehensive collection of genetic association information that was semi-automatically compiled using the Genetic Association Archive. In our study, we utilize an existing preprocessed version of GAD and its corresponding train/test split, which was created by Lee et al. 17. In our experimental framework, we adopt a binary classification approach for relation extraction. Here, the positive class indicates the presence of the specified type of relationship between two entities. Evaluation metric: to evaluate the quality of RE tasks, we utilize the F-1 measure of the positive class.
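Since relation extraction is cast as binary classification here, the F-1 of the positive class can be computed directly from the model's decoded answers. A minimal sketch with invented labels and outputs; the answer-parsing rule and the scikit-learn call are assumptions rather than the paper's evaluation script.

```python
from sklearn.metrics import f1_score

def to_label(generated: str) -> int:
    """Map a generated answer to 1 (relation present) or 0 (absent)."""
    return int(generated.strip().lower().startswith("yes"))

# Hypothetical gold labels and decoded outputs for sentence pairs.
gold = [1, 0, 1, 1, 0, 1]
decoded = ["Yes", "No", "yes, they interact", "No", "No", "Yes"]

pred = [to_label(d) for d in decoded]
print(f"positive-class F1 = {f1_score(gold, pred, pos_label=1):.3f}")
```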
Example: input text with prompt: does the Chlorprothixene and lithium are said to have mechanism type of interaction in the following passage: Chlorprothixene may increase the plasma-level of concomitantly given lithium. In order to avoid lithium intoxication, lithium plasma levels should be monitored closely. If chlorprothixene is given concomitantly with opioids, the opioid dose should be reduced (by approx. 50%), because chlorprothixene amplifies the therapeutic actions and side-effects of opioids massively. Avoid the concomitant use of chlorprothixene and tramadol (Ultram). Massive seizures may be encountered with this combination. Consider additive sedative effects and confusional states to emerge, if chlorprothixene is given with benzodiazepines or barbituates. Choose particular low doses of these drugs. Exert particular caution in combining chlorprothixene with other anticholinergic drugs (tricyclic antidepressants and antiparkinsonian agents): Particularly the elderly may develop delirium, high fever, severe obstipation, even ileus and glaucoma. Textual Entailment Textual entailment (TE) is a natural language processing task that involves determining the logical relationship between two pieces of text: a text fragment known as the "premise" and another text fragment known as the "hypothesis." The task is to decide whether the meaning of the hypothesis can be logically inferred or entailed from the meaning of the premise. For conducting our experiments, we utilize the following corpora: • MedNLI 32; • SciTail 33. 5.3.4.1 MedNLI MedNLI (Medical Natural Language Inference) is a specialized dataset designed to facilitate research in natural language inference within the medical and healthcare domain. It consists of pairs of sentences, where each pair comprises a premise and a hypothesis. The premise represents a clinical or biomedical context, while the hypothesis is a medical statement or claim that may or may not logically follow from the premise. Each sentence pair is annotated with one of three labels: "entailment," indicating that the hypothesis can be logically inferred from the premise; "contradiction," suggesting that the hypothesis contradicts the information in the premise; and "neutral," signifying that there is no logical relationship between the two sentences. The dataset comprises a total of 12,627 sentence pairs in the training set and 1,422 sentence pairs in the testing set. SciTail The SciTail dataset, similar to the MedNLI dataset, was designed for the task of natural language inference, except that it covers a broader scientific domain. The training part of the corpus contains 24,900 sentence pairs, and the test part contains 2,126 sentence pairs. Evaluation metric: to evaluate the quality of TE tasks, we utilize the accuracy score. Example: input text with prompt: Given that "At [**Hospital 1456**] Hospital the patient was experiencing 10 out of 10 chest pain and received nitropaste two inches, three sublingual nitroglycerins, morphine 4 mg intravenously, Lopressor 5 mg intravenously." Does it follow that "The patient is asymptomatic." yes or no? output text: No Sentence similarity Textual similarity tasks in the biomedical domain involve assessing the degree of semantic similarity or relatedness between pairs of biomedical texts. The goal of these tasks is to determine how closely two pieces of text, such as sentences or documents, are semantically or conceptually aligned. To conduct our experiments, we employ the BIOSSES dataset 37.
BIOSSES
The BIOSSES (Biomedical Sentence Similarity Benchmark) dataset is a specialized dataset designed to evaluate sentence similarity models in the biomedical domain. It contains pairs of biomedical sentences that are carefully selected to represent different levels of semantic similarity. Each sentence pair is annotated with a similarity score that represents the degree of semantic relatedness between the two sentences. The scores are typically on a continuous scale, indicating how similar or dissimilar the sentences are in meaning. The dataset comprises a total of 80 sentence pairs in the training set and 20 sentence pairs in the testing set.

Evaluation metric: to evaluate the quality of textual similarity tasks, we utilize the Pearson correlation score (a short parsing-and-scoring sketch follows this section).

Example: input text with prompt: Please assess the similarity between these two sentences on a scale of 0.0 (lowest) to 4.0 (highest). First sentence: "It has recently been shown that Craf is essential for Kras G12D-induced NSCLC." Second sentence: "It has recently become evident that Craf is essential for the onset of Kras-driven non-small cell lung cancer." output text: 4.0

Document Classification
In the biomedical domain, the document classification task involves categorizing entire documents, such as scientific articles, research papers, or clinical reports, into predefined categories or classes. The goal is to automatically assign each document to the most relevant category based on its content and subject matter. For our experimental purposes, we utilize the Hallmarks of Cancer dataset.

Hallmarks of Cancer
The Hallmarks of Cancer (HoC) dataset serves as a document classification task, centered around the concept of cancer hallmarks as established in the referenced work 38. This corpus comprises PubMed abstracts, each labeled with binary annotations denoting the presence of specific discussions related to individual cancer hallmarks. We utilize the train/test split from 50, which comprises 13,917 sentences in the training part and 3,547 sentences in the test part.

Evaluation metric: to evaluate the quality of document classification tasks, we utilize the F-1 score.

Example: input text with prompt: Pick one category for the following text. The options are: activating invasion and metastasis, avoiding immune destruction, cellular energetics, enabling replicative immortality, evading growth suppressors, genomic instability and mutation, inducing angiogenesis, resisting cell death, none, sustaining proliferative signaling, tumor promoting inflammation. Biopsy of a skin lesion showed lymphoproliferative infiltration of the dermis with a follicular and angiocentric growth pattern and regional epidermal necrosis. output text: resisting cell death

PICO extraction
PICO extraction is an essential NLP task that aims to automatically identify and extract specific fragments of text pertaining to the Patient (P), Intervention (I), Comparator (C), and Outcome (O) elements from unstructured biomedical texts, such as research articles and clinical trial reports. Typically, Comparator labels are omitted from the annotations, as they conform to established clinical trial norms, with "placebo" as the passive control and "standard of care" as the active control. To conduct our study, we leveraged the EBM PICO 31 dataset for this purpose.
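Because the model is text-to-text, the BIOSSES similarity score is generated as a string and must be parsed back to a number before the Pearson correlation against the gold annotations can be computed. The following is a minimal sketch of that evaluation step; the regex-based parsing, the fallback value, and the example numbers are our own illustrative assumptions rather than details taken from the paper.

```python
import math
import re

def parse_score(text, default=0.0):
    """Pull the first numeric token out of generated text such as 'output text: 4.0'."""
    match = re.search(r"-?\d+(?:\.\d+)?", text)
    return float(match.group()) if match else default

def pearson(xs, ys):
    """Plain Pearson correlation between predicted and gold similarity scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

generated = ["4.0", "2.5", "output text: 1.0", "3.8"]   # illustrative model outputs
gold = [4.0, 2.2, 0.8, 4.0]                              # illustrative gold scores
preds = [parse_score(g) for g in generated]
print(round(pearson(preds, gold), 3))
```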
Fig. 2 Datasets used for training and evaluation. Colour represents the type of task. Yellow and blue datasets are single-domain, typically requiring regression/classification losses or generation in the target domain (natural language or SMILES strings). Gradients from yellow to blue represent cross-domain generation tasks that require natural language input and SMILES output, or vice versa.

Fig. 3 A diagram of nach0, which is a text-to-text framework. The model takes text as input and is trained to generate the desired target text for each specific task. This unified approach enables us to utilize the same model architecture, loss function, hyperparameters, and other components across our diverse range of mono-domain (NLP, CHEM) and cross-domain (NLP↔CHEM) tasks.

5.3.7.1 EBM PICO
The EBM PICO dataset was specifically created to facilitate PICO extraction tasks. It employs token-level labeling, where each token is categorized into one of the PIO classes (Patient, Intervention, Outcome). The dataset comprises a total of 4,800 labeled abstracts for training purposes and 200 labeled abstracts for testing purposes. To conduct the PICO extraction task in a text-to-text format, we adopted the same prompt style as used for the Named Entity Recognition (NER) dataset.

Evaluation metric: to evaluate the quality of PICO extraction tasks, we utilize the word-level F-1 score (a small sketch of this computation is given after this section).

Example: input text with prompt: Please find all instances of Interventions in the given text. Each mention should be surrounded by "Intervention*" and "*Intervention": Study protocol : Rehabilitation including Social and Physical activity and Education in Children and Teenagers with Cancer ( RESPECT ) output text: Study protocol : Intervention* Rehabilitation including Social and Physical activity and Education *Intervention in Children and Teenagers with Cancer ( RESPECT ).

Table 1 List of datasets used in our study. We note that ESOL, FreeSolv, Lipophilicity, BBBP, HIV, BACE are included in the MoleculeNet benchmark 26; QM9, MoleculeNet and USPTO_500MT data are collected from Mol-Instructions 25. Example instances are reported in Supplementary Information, Sec. 2.2.

Our experimentation involves two model sizes: a base model consisting of 250 million parameters, characterized by 12 layers, a hidden state of 768 dimensions, a feed-forward hidden state of 3072 dimensions, and 12 attention heads; and a larger model with 780 million parameters, consisting of 24 layers, a hidden state of 1024 dimensions, a feed-forward hidden state of 4096 dimensions, and 16 attention heads.

Table 3 Full results of nach0 on NLP, CHEM and cross-domain tasks in comparison with FLAN (250M parameters), SciFive (220M parameters), and MolT5 (220M parameters). All models are trained in a multi-task fashion. Bold numbers are the highest score on each dataset and underscores stand for the second-best result over base models only. We mark the results of nach0 Large with a green color to indicate improvements over nach0 Base.
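The word-level F-1 used for EBM PICO can be computed by converting the generated, marker-delimited text back into per-word labels and scoring the positive class. Below is a minimal sketch under the assumption that the model reproduces the input words and only inserts the "Intervention*" / "*Intervention" delimiters; the helper names are illustrative and not taken from the paper.

```python
def words_to_labels(marked_text, tag="Intervention"):
    """Convert text with 'Tag* ... *Tag' delimiters into per-word binary labels."""
    open_mark, close_mark = f"{tag}*", f"*{tag}"
    labels, inside = [], False
    for token in marked_text.split():
        if token == open_mark:
            inside = True
        elif token == close_mark:
            inside = False
        else:
            labels.append(1 if inside else 0)
    return labels

def word_f1(pred_labels, gold_labels):
    """Word-level F1 for the positive class."""
    tp = sum(p == g == 1 for p, g in zip(pred_labels, gold_labels))
    fp = sum(p == 1 and g == 0 for p, g in zip(pred_labels, gold_labels))
    fn = sum(p == 0 and g == 1 for p, g in zip(pred_labels, gold_labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

gold = ("Study protocol : Intervention* Rehabilitation including Social and Physical activity "
        "and Education *Intervention in Children and Teenagers with Cancer ( RESPECT )")
pred = ("Study protocol : Intervention* Rehabilitation including Social and Physical activity "
        "*Intervention and Education in Children and Teenagers with Cancer ( RESPECT )")
print(round(word_f1(words_to_labels(pred), words_to_labels(gold)), 3))  # 0.875
```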
We also compare against individual models trained on each separate chemical tasks group: the predictive tasks group, the reaction tasks group, and the molecular generation/cross-domain tasks group. We perform the same experiments with the MolT5 model to elaborate on how pretraining data and special chemical tokens affect the quality of the model on chemical tasks.

Table 4 Performance of nach0 on chemical tasks groups in comparison with MolT5. We list the scores for each task (see Supplementary Information about datasets and metrics). Bold numbers are the best result on each dataset. All models are base models.

Table 5 Performance of nach0 on NLP tasks in comparison with FLAN, SciFive, and MolT5. We list the scores for each task (see Sec. 5.3 about datasets and metrics). All models are base models.
Dosimetric accuracy of tomotherapy dose calculation in thorax lesions Background To analyse limits and capabilities in dose calculation of collapsed-cone-convolution (CCC) algorithm implemented in helical tomotherapy (HT) treatment planning system for thorax lesions. Methods The agreement between measured and calculated dose was verified both in homogeneous (Cheese Phantom) and in a custom-made inhomogeneous phantom. The inhomogeneous phantom was employed to mimic a patient's thorax region with lung density encountered in extreme cases and acrylic inserts of various dimensions and positions inside the lung cavity. For both phantoms, different lung treatment plans (single or multiple metastases and targets in the mediastinum) using HT technique were simulated and verified. Point and planar dose measurements, both with radiographic extended-dose-range (EDR2) and radiochromic external-beam-therapy (EBT2) films, were performed. Absolute point dose measurements, dose profile comparisons and quantitative analysis of gamma function distributions were analyzed. Results An excellent agreement between measured and calculated dose distributions was found in homogeneous media, both for point and planar dose measurements. Absolute dose deviations <3% were found for all considered measurement points, both inside the PTV and in critical structures. Very good results were also found for planar dose distribution comparisons, where at least 96% of all points satisfied the gamma acceptance criteria (3%-3 mm), both for EDR2 and for EBT2 films. Acceptable results were also reported for the inhomogeneous phantom. Similar point dose deviations were found with slightly worse agreement for the planar dose distribution comparison: 96% of all points passed the gamma analysis test with acceptable levels of 4%-4 mm and 5%-4 mm, for EDR2 and EBT2 films respectively. Lower accuracy was observed in high dose/low density regions, where CCC seems to overestimate the measured dose around 4-5%. Conclusions Very acceptable accuracy was found for complex lung treatment plans calculated with CCC algorithm implemented in the tomotherapy TPS even in the heterogeneous phantom with very low lung-density. Introduction Image-guided intensity modulated radiation therapy (IG-IMRT) techniques are becoming more popular due to the possibility to create and monitor escalated dose distributions highly conformed to irregular-shaped targets. The implementation of such new technology requires a precise and accurate dose calculation algorithm which can generate reliable dose distributions and dose-volume information for treatment planning calculation and evaluation. An ideal dose calculation algorithm should take into account relative electron density and dimensions of inhomogeneous media, electronic disequilibrium for high energy photon beams and electron transport at interfaces between media of different densities [1]. Monte Carlo (MC) simulation is well known as the most accurate algorithm for dose calculation in the presence of inhomogeneous media [2][3][4]. However, other semi-empirical dose calculation algorithms are generally clinically implemented and used in the treatment planning systems [5,6]. Convolution/superposition models are now commonly used in treatment planning systems [7][8][9]. 
Although they present major improvements compared to older pencil beam algorithms [10] due to empirical approximations, they may introduce appreciable inaccuracies in the dose distributions, especially in case of small or superimposed small fields (typically found in IMRT treatments) irradiating low density media; comparing the collapsed cone convolution approach to MC, Chow et al [11] reported significant dose deviation with 6 MV photon beam when the electron density is less than 0.3 and small field sizes are used. Fogliata et al [12] investigated the influence of different air filling in lungs on the calculation accuracy of photon dose algorithms compared with MC: with a 6 MV photon beam, all the investigated algorithms had a peak of failures for densities of the order of 0.05 g/cm 3 . Due to the rapid evolution of the available treatment techniques, irregular fields and steep dose gradients are applied in order to achieve highly conformal dose distributions; under these conditions high dosimetric accuracy of any IMRT treatment planning system is of crucial importance for the effectiveness and success of the treatment prescribed [13]. The aim of this paper was to investigate the dose calculation accuracy in (very) low-density lung media for treatments delivered by a Helical Tomotherapy unit (HT), where the calculation dose is performed using a convolution-superposition algorithm (C/S) based on a collapsed cone (CCC) approach [14][15][16]. The CCC superposition (CCC/S) dose algorithm has been shown to accurately predict dose distributions for IMRT techniques, including helical tomotherapy, although most published results refer to water equivalent phantom with simple geometries. Several papers [17][18][19][20] have investigated the accuracy of the CCC/S dose algorithm implemented in HT treatment planning in case of inhomogeneous tissues for some limited cases. Chaudhari et al [17] analyzed only two clinical esophageal cancers simulated in a customdesigned heterogeneous phantom mimicking the mediastinum geometry by considering two different lungequivalent materials with density equal to 0.28 g/cm 3 and 0.16 g/cm 3 , respectively. Zhao et al [18] investigated the accuracy of the algorithm by considering only one clinical lung treatment delivered on a CIRS (Computerized Imaging Reference Systems, Inc) anthropomorphic heterogeneous phantom, where dose distributions calculated from HT treatment planning were compared both with measurements and with MC calculations. Also in the Sterpin et al paper [20] the CCC/S algorithm implemented in the HT unit was compared with MC simulations only for small lung tumors with diameter <3 cm. In this work we focused our analysis by simulating some thorax treatments (mediastinal lesions, single or multiple metastasis) of different geometries. For the considered cases, the dose calculation algorithm accuracy was investigated in both a homogenous (15 plans) and inhomogeneous (4 plans) phantom (where the lungs consisted of material with a density equal to 0.04 g/cm 3 ) by absolute ionization dose measurements, dose profile comparisons and quantitative analysis of dose distributions. A comparison between dose distributions measured on EDR2 and EBT2 films was also reported. Phantoms design Measurements were performed in both a homogeneous and a custom-made heterogeneous phantom mimicking a patient's thorax region. As homogeneous phantom we used the Cheese Phantom, typically employed in our clinic for routine patient QA (DQA) measurements. 
It is a solid water cylindrical phantom of 15 cm radius and 18 cm length cut into two semi-cylindrical halves to allow the insertion of a film along the central plane. Along the other direction a series of holes, interspaced by 1 cm (one hole is set 0.5 cm from the central plane of the film), allows the insertion of ionization chambers for point measurements. Film and chamber measurements can be performed at the same time by considering both the sagittal and coronal plane. In this paper for all simulated plans the film was set along the coronal plane and the absolute ionization measurements were performed in points along the sagittal direction. A custom designed phantom mimicking the patient's thorax region was defined (Figure 1a). It is composed of six slabs of 30 cm × 40 cm × 3 cm of acrylic (density 1.16 g/cm 3 ) simulating the homogeneous media. Three slabs, two positioned on the top and one on the bottom of the phantom were completely homogeneous; inside one homogeneous slab an aluminum cylindrical insert (2.7 g/ cm 3 ) was considered. The other two slabs simulate the lung region using Styrofoam: two low density (0.04 g/ cm 3 ) inserts were symmetrically positioned and separated by an acrylic area (mediastinum). Fogliata et al [12], showed that the lung mass density varies during respiratory phases; in free breathing and in deep inspiration breath hold the mean densities are 0.27 and 0.16 g/cm 3 respectively with peak densities of 0.17 and 0.09 g/cm 3 . Inside lung volumes, acrylic inserts of various dimensions and positions, simulating the tumor lesions (metastasis), were positioned. They are cylindrical with a radius of 1, 2 or 3 cm, positioned completely inside or in the boundary of the lung; these different geometries are useful to simulate several clinical situations. The phantom was designed in order to allow both planar and point dose measurements. Films can be placed along horizontal planes between the different slabs; absolute point dose measurements can be performed both in all tumor inserts and in the homogeneous mediastinum region, thanks to several inserts created inside the phantom. Treatment planning For homogeneous phantom measurements, specific DQA plans of fifteen patients (pts) previously treated for lung tumour using the Helical Tomotherapy technique were created. The treatment volumes considered can be divided into three groups: mediastinal lesions (9 pts), single lung metastasis (2 pts), multiple lung metastases (4 pts). Single and multiple metastases were treated based on a hypofractionated approach with 9 Gy of daily dose; different fractionated regimes (2 Gy/day; 2.5 Gy/day, 4 Gy/day) were applied for mediastinal tumours. All plans were generated using a 25 mm field width, a pitch equal to 0.287 for conventional fractionation or in the range of 0.2-0.3 for hypofractionated regimes and a modulation factor of approximately 2.5 -3. In all patient treatment plans considered, the aim of the optimisation process was the homogeneous coverage of the PTV, concomitant with organ at risks (spinal cord, heart, lung, oesophagus) sparing. For the heterogeneous phantom, four treatment plans were generated simulating four different clinical volumes: a single lung metastasis, multiple lung lesions and two different mediastinic target volumes; two different mediastinic targets (Med1 and Med2) were considered with two different volumes and with a different target portion in the lung region. 
Doses and planning parameters used in our clinical practice were adopted for these treatment planning simulations. Coronal dose distributions for each heterogeneous plan are shown in Figure 2.

Film and ionization chamber dosimetry
Radiographic (Kodak EDR2) and radiochromic (Gafchromic EBT2) films were used for planar measurements. In both cases a calibration curve was created to correlate the measured film's optical density with the delivered dose, irradiating the film with a static uniform field at 5 cm depth; two sensitometric curves, in the dose range from 0.12 Gy to 6.88 Gy for EDR2 films and between 0.12 Gy and 8 Gy for EBT2 respectively, were created (a minimal fitting sketch is given after this section). Different calibration curves were created for each film batch used. A commercial Vidar film digitizer (DosimetryPro Advantage, Vidar Systems Corp., Herndon, VA) was used to scan EDR2 films. Gafchromic EBT2 films were scanned with an EPSON Pro V750 Expression A4-size scanner at least 10 h after irradiation [21]. The software package "EPSON scan" (professional mode with all image adjustments and colour corrections turned off) was used to scan and acquire images. Films were scanned in the 48 bit red-green-blue (RGB) mode with a resolution of 72 dpi. A median filter (3 × 3) was applied to reduce noise. Data were saved in a tagged image file (TIFF). Film sheet orientation was maintained in the centre of the scan to guarantee better response stability. A correction matrix dependent on the pixel position and the different dose levels was applied in order to manage the light scattering of the scanner lamp and its non-uniform response [22].

Figure 1 Heterogeneous thorax phantom. 1a) Six slabs of acrylic (density 1.16 g/cm3) simulating the homogeneous media, with an aluminum cylindrical insert (2.7 g/cm3) simulating bone-equivalent material and two low density (0.04 g/cm3) inserts symmetrically positioned and separated by an acrylic area (mediastinum), simulating the lung region. 1b) For film measurements, film 1 is positioned between the second (homogeneous) slab and the third (inhomogeneous) slab; film 2 is positioned between the two inhomogeneous slabs with lung media.

For absolute point dose measurements, an Exradin A1SL ion chamber (Standard Imaging, Middleton, WI) was used. The A1SL has a small volume of 0.056 cm3, which makes it a good candidate for point dose measurements. The absolute dose was defined according to the International Atomic Energy Agency's (IAEA) recommended absolute dosimetry protocol (TRS 398), applying appropriate correction factors for beam quality and environmental conditions [23].

DQA procedure
A patient-specific DQA plan was generated for each treatment plan by considering the export of the treatment's fluence and the dose distribution recalculation, both on the homogeneous phantom (Cheese Phantom) and on the heterogeneous thorax phantom. For each DQA plan, film and ion chamber measurements were taken in order to verify the agreement between measured and calculated dose distributions, both for absolute dose points and for planar dose distributions. For homogeneous DQA plans, films (EDR2 and EBT2) were set in the coronal plane with concomitant point dose measurements in the sagittal direction.
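To illustrate the sensitometric calibration step described above, the sketch below fits a simple polynomial mapping from net optical density to dose and applies it to scanned film values. This is a minimal sketch with made-up calibration points and an assumed third-order polynomial form; the actual calibration doses, film responses, and fitting model used in the study are not specified beyond the quoted dose ranges.

```python
import numpy as np

# Hypothetical calibration measurements: net optical density of films irradiated
# with known uniform doses (Gy) at 5 cm depth. Values are illustrative only.
calib_dose = np.array([0.12, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.88])
calib_net_od = np.array([0.02, 0.08, 0.15, 0.28, 0.40, 0.50, 0.59, 0.72])

# Fit a third-order polynomial dose = f(net OD); one curve per film batch.
coeffs = np.polyfit(calib_net_od, calib_dose, deg=3)
od_to_dose = np.poly1d(coeffs)

# Convert a scanned film (array of net optical densities) into a dose map.
scanned_net_od = np.array([[0.10, 0.30], [0.45, 0.65]])
dose_map = od_to_dose(scanned_net_od)
print(np.round(dose_map, 2))
```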
To minimize chamber position uncertainty, dose measurements points were selected in the high dose/low gradient or low dose/low gradient regions; absolute dose measurements were performed in 22 points inside the high dose/low gradient PTV region (15 points for mediastinic lesion, 4 and 3 points for single and multiple metastasis, respectively) and in 27 points (15 for mediastinic lesions, 7 for single and 5 for multiple metastasis) inside the low dose/low gradient OAR structures. Relative EDR2 film dose distributions were normalised to the absolute dose measured with ionization chamber in the PTV points, proximal to the film's coronal plane; EBT2 absolute dose distributions were considered. Similar procedures were performed for the DQA plans in the heterogeneous thorax phantom. Obviously in this case, treatment plan and relative DQA plan haven't any differences, due the same thorax inhomogeneous phantom was used to simulate inhomogeneous treatment plans and for DQA measurements. For each DQA plan two to four absolute dose points were acquired, both in the high dose/low gradient PTV region and in the low dose region corresponding to critical structures or healthy tissue. Two films were used for each DQA plan: the first (reported in the text as Film1) was placed in the interface region between a homogeneous slab and the low density lung slab, the second (Film 2) between the two slabs with the low density lung inserts (Figure 1b). Similarly to homogeneous measurements, relative EDR2 film dose distributions were normalised to the absolute dose measured with the ionization chamber; EBT2 films were used in a relative way by normalising both measured and calculated dose distributions in a point inside the PTV region. Data analysis The agreement among measured and calculated dose distributions was evaluated in terms of percentage difference between absolute point dose measurements, qualitative dose profile comparisons and a quantitative analysis of dose distribution through gamma function analysis [24]. For the point dose measurement the percent discrepancy was calculated according to: %Δ = 100* (Dm-Dc)/Dc, where Dm is the measured point dose and Dc is the calculated dose at the same position. The γ -map analysis is a method that conjugates both the dose difference (ΔDD) and the distance to agreement (ΔDTA) pass/fail criteria. The planar map of γ values gives a qualitative representation of the agreement of two distributions; a quantitative evaluation could be defined based on the analysis of γ -area histograms, defining the percentage of γ -values below a certain threshold. Profiles and dose map comparisons [13] were performed using TomoTherapy Inc. software. We quantitatively analysed the gamma function by considering the γ-area histograms and distribution using the Tomotherapy Inc software (Research station), by considering all the points of the film that are included in the homogeneous/inhomogeneous phantoms. In our analysis the dose difference criteria is defined respect to the prescribed dose calculated in the DQA dose distribution (the calculated dose distribution exported on the phantom). Different acceptance criteria were used for γ analysis: 3%-3 mm and 4%-3 mm for the homogeneous phantom; 3%-3 mm, 4%-3 mm, 4%-4 mm and 5% -4 mm for the heterogeneous thorax phantom. 
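The point-dose discrepancy and the gamma analysis defined above can be summarized in a short numerical sketch. The gamma implementation below is a brute-force global gamma for 2D dose maps on a regular grid (dose difference normalized to a reference/prescribed dose, distance-to-agreement in mm); it is a didactic sketch with invented dose maps, and it ignores the dose thresholds, interpolation, and region selection that the TomoTherapy analysis software applies.

```python
import numpy as np

def percent_discrepancy(d_measured, d_calculated):
    """Point-dose discrepancy: %delta = 100 * (Dm - Dc) / Dc."""
    return 100.0 * (d_measured - d_calculated) / d_calculated

def gamma_index(dose_ref, dose_eval, pixel_mm, dd_percent, dta_mm, norm_dose):
    """Brute-force global 2D gamma; returns the gamma map for dose_ref points."""
    ny, nx = dose_ref.shape
    ys, xs = np.meshgrid(np.arange(ny) * pixel_mm, np.arange(nx) * pixel_mm, indexing="ij")
    dd = dd_percent / 100.0 * norm_dose
    gamma = np.zeros_like(dose_ref, dtype=float)
    for iy in range(ny):
        for ix in range(nx):
            dist2 = (ys - ys[iy, ix]) ** 2 + (xs - xs[iy, ix]) ** 2
            dose2 = (dose_eval - dose_ref[iy, ix]) ** 2
            gamma[iy, ix] = np.sqrt(np.min(dist2 / dta_mm ** 2 + dose2 / dd ** 2))
    return gamma

# Illustrative 2 Gy prescription on a 1 mm grid with 2% simulated measurement noise.
calculated = np.full((20, 20), 2.0)
measured = calculated * (1.0 + 0.02 * np.random.default_rng(0).standard_normal((20, 20)))
g = gamma_index(measured, calculated, pixel_mm=1.0, dd_percent=3.0, dta_mm=3.0, norm_dose=2.0)
print(f"points with gamma <= 1: {100.0 * np.mean(g <= 1.0):.1f}%")
print(f"point discrepancy at centre: {percent_discrepancy(measured[10, 10], calculated[10, 10]):.2f}%")
```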
In the clinical practice we consider as acceptance criteria ΔDD = 3% and ΔDTA = 3 mm in case of simple case as spherical lesions and without stressing modulated dose distributions; ΔDD = 4% and ΔDTA = 3 mm in case of more complex geometries including irregularshaped targets, proximity of critical OARs to spare and then dose distributions with very high and deep dose gradients. We were confident that these criteria agree with those suggested in the ESTRO Booklet n°7 [25] 3. Results Homogeneous phantom Absolute point measurements are shown in Table 1, where the average percent discrepancy between measured and calculated dose is reported, respectively for PTV points (22 points) (high dose/low gradient dose points) and for critical structure regions (27 points) (high dose/high gradient, low dose/low gradient dose points), by considering, separately, the three anatomical districts. Excellent agreement (< 2%) between measured and calculated dose was found: an overall average discrepancy equal to 0.7% (1SD = 1.2%) and to 1% (1SD = 0.4%) was found for PTV and for OARs respectively. The largest average difference (1.9%) was found for single metastasis treatment plans, possible due to the more critical positioning in small target volumes. Film data (EDR2 and EBT2) were analyzed in two ways: first, with a qualitative comparison of dose profiles; second by a quantitative gamma index analysis. In Table 2 the percentage of points with gamma values ≤ 0.7, 1.0 and 1.5 were reported for different gamma index criteria, for both EDR2 and EBT2 films and for the three anatomical regions. Excellent agreement was also found for planar dose distributions: on average more than 97% of points passed the gamma test (γ ≤ 1) for EDR2 films with a 3%-3 mm criteria; a slightly worse, but acceptable agreement (94%) was found for EBT2 films; however, this value significantly increases using 4%-3 mm and 4%-4 mm criteria: 95.7% and 98% respectively. Table 3 shows the average percentage discrepancy between ion chamber measurements and TPS calculation for each simulated treatment plan and separately for PTV and OARs. Heterogeneous phantom An average discrepancy equal to -1% (1SD = 2.5%) and 2.3% (1SD = 4.5%) was found for target and OARs respectively. The worst agreement (-3% for PTV and around 9% for OARs) was found for multiple metastasis, probably due to the more stressed modulation applied in the irradiation. Good agreement was qualitatively reported in Figure 3 and 4 where the comparison between measured and calculated isodoses (Figure 3a and 4a) and dose profiles (Figure 3b and 4b) was shown in a coronal plane for all four simulated treatment plans, both for EDR2 ( Figure 3) and EBT2 (Figure 4) In table 4 and 5 the percentage of points with gamma values ≤ 0.7, 1.0 and 1.5 were reported for several acceptable dose/distance criteria, respectively for EDR2 (Table 4) and EBT2 films (Table 5) and for the three anatomical regions, by separately considering the results for two different films, with film1 placed between the second (homogeneous) and the third slab (lung region) and film 2 placed between the third and the fourth slabs (lung/lung region). Comparable results were found for EBT2 films where on average 95% of points satisfy the 4%-4 mm criteria; the percentage of points with γ ≤ 1 was 98% and 92% for film 1 and film 2 respectively. Slightly worse results were found with 3%-3 mm criteria, where on average 91% of points have γ ≤ 1 have, with 95% of points for film 1 and around 87% of points for film 2. 
Discussion and Conclusions The Helical Tomotherapy treatment planning system uses a relatively accurate collapsed cone convolution/ superposition algorithm for dose calculation and, as with other non -Monte Carlo algorithms, charged particle equilibrium is assumed in the dose calculation. For this reason we can expect inaccuracy in predicting dose distribution in the presence of significant inhomogeneities in patient geometry where this assumption is not satisfied. The dose distribution accuracy of the HT TPS was then tested in case of low density lung lesions. Before the validation of the dose calculation algorithm in inhomogeneous media, the agreement between measured and calculated dose distributions for lung treatments was verified in a homogeneous phantom. Excellent agreement was found for point dose measurements with most of the data within ± 2%; an average percentage discrepancy equal to 0.85% (1SD = 0.5%) was estimated by considering all the points, both in PTV and in OAR regions. Good agreement (3%-3 mm criteria) was also found for planar dose distributions, with 97% and 94% of points with γ ≤ 1, for EDR2 and EBT2 films respectively. The slightly worse results found with EBT2 could be probably correlated with the inaccuracy of the correction matrix applied to manage light scattering and non-uniform response of scanner lamp. The results found with EDR2 are in agreement with data published by Thomas et al [26], where the treatment plans of ten patients (head-neck, prostate, brain, bone metastasis) planned and treated with helical Tomotherapy were checked. An average point dose discrepancy of -1.3% was reported by con sidering high dose (-0.5 ± 1.1%), low dose (-2.4 ± 3.7%) and critical structure points (-1.1 ± 7.3%). By considering the 4 mm/3% criteria for EDR2 films, 92.6% and 99% of the measured points passed the test with γ ≤ 1 for the absolute and normalized planar dose distribution respectively; for these criteria our results were 99%. The quality of the collapsed cone convolution algorithm implemented in the treatment planning of HT for homogeneous media was also confirmed in Zhao's paper [18], where a good agreement among MC simulations, TPS calculations, film and point dose measurements were reported and verified for a helical dose calculation performed on the cheese phantom. Point dose measurements in the PTV agree very well with TPS and MC calculations with deviations of 0.5% and 0.75%, respectively. TPS results agreed very well with MC simulation for 90%-10% Dmax dose levels; good agreement of 30%-90% isodose lines between calculation and film measurements were found for both TPS and MC results with acceptance criteria of 2%-2 mm, with a slightly larger discrepancy in regions with dose lower than 30% Dmax. Analysis of the gamma value distributions shows that for a 3%-3 mm criteria 100% of the points in the PTV pass the test both for MC and TPS calculations; for OARs around 90% and 93.5% of points agree with film measurements for MC and TPS calculations respectively. All the regions agree with film measurements, both for MC and TPS calculations, by considering a 5%-3 mm criteria. In Zhao's paper [19] the accuracy of the CCS implemented in the HT treatment planning was evaluated against MC calculations and measurements in the CIRS anthropomorphic thorax phantom (lung density equal to 0.21 g/cm 3 ), simulating a single helical treatment with a lung PTV containing water/tissue and part of the right lung. 
Considering points within 33% of the maximum dose, the average percentage discrepancy between ion chamber measurements and calculations was equal to -1.4 ± 2.3% and 0.0 ± 0.81% for CCS HT and MC respectively. A wider difference was reported for planar dose distributions, where MC and TPS dose calculations were compared with relative dose distributions measured with EDR2 films. Using 3%-3 mm acceptance criteria, the MC agreed with measurements in around 90% of points, while the HT TPS agreed in only 50%. With a clinically acceptable 5%-3 mm criterion, the MC agreed with film measurements in most of the phantom plane, but the CCS HT failed in some of the high dose/low density lung regions, low dose boundary regions and high dose gradient regions, where the TPS overestimates the PTV dose in the lung region and underestimates the dose at the lung-tissue interface. Similar results were also reported in Sterpin's paper [20], where CCS HT dose distributions may result in an overestimation of the dose to PTVs encompassing lung tissues and/or air cavities. The reported results clearly show that the CCS algorithm predicts higher dose coverage of the target volume compared with MC calculations for small lung tumors; no significant differences were found for most of the other clinical cases. In a recent paper by Chaudhari et al [17], HT-calculated dose distributions were compared with measurements in two treatment plans of oesophageal cancer; a cubic phantom with a mediastinum geometry was used and two different lung-equivalent materials (density equal to 0.28 and 0.16 g/cm3) were considered. The agreement between measured point dose values and the TPS was in most cases within 1%, with an average discrepancy of -0.3 ± 0.8%. For tolerance criteria of 3%-3 mm, using gafchromic films, around 95% and 98% of points passed the test (γ ≤ 1), respectively for Balsa wood (0.16 g/cm3) and for LN300 (0.28 g/cm3), the two different media simulating the lung region. Both results were obtained by considering two film planes, both inserted between slabs of inhomogeneous low density media. No measurements were reported in the interface region between homogeneous and low density media. Our results for the inhomogeneous phantom (lung surrogate density equal to 0.04 g/cm3) and mediastinum clinical situations were worse: using the same criteria we found around 89% of points with γ ≤ 1 if, similarly to Chaudhari's paper, we consider only the film completely inserted in low density media (film 2); better results were found (around 96% of points) if we consider film 1, inserted between homogeneous and inhomogeneous media. In summary, based on the reported situations, the Tomotherapy TPS provides an accurate dose calculation with clinically acceptable results for the pre-treatment verification of all considered thoracic irradiations in (very) low density media. The results, both in terms of point measurements and in terms of profiles and planar dose distribution comparisons, were in agreement with the acceptance criteria defined for IMRT verification. A direct comparison with Monte Carlo simulations should be investigated in the future. The percentage of points with γ value ≤ 0.7, 1 and 1.5 was reported separately for film 1 and film 2.
Particulate Matter (Fine Particle) and Urologic Diseases Particulate matter (PM) has been found to damage vital body organs, including the lungs and heart, through vascular damage and oxidative stress. Recently, renal function and chronic urologic diseases have also been found to be related to PM. To investigate this, we reviewed the characteristics of PM related to renal toxicity, including recent studies on the associations of urologic diseases with PM. PM can include constituents that cause renal toxicity, such as lead, cadmium, arsenic, and crystalline silica, which result in renal tubular or interstitial damage. Since 2008, 7 studies have evaluated the renal effects of PM. Two prospective cohort studies and a quantitative study of consecutive patients showed that PM may be related to decreased renal function, as shown by the estimated glomerular filtration rate of diseased or aged participants. Two cross-sectional studies found an association between PM and chronic kidney disease. One of those studies identified the specific renal diseases of immunoglobulin A nephropathy and membranous nephropathy. Two studies that analyzed renal cancer and PM showed no evidence that renal cancer is related to PM. Nine studies were evaluated regarding the relationship of bladder and prostate cancer with PM. The evidence for an association of PM with bladder and prostate cancer is still inconclusive. Although some recently published studies have shown a significant relationship, the causal relationship is not clear. Further well-designed studies on specific renal diseases are required. INTRODUCTION Particulate matter (PM) has been known to be an important cause of occupational and environmental disorders since the Great Smog of London [1,2], when PM levels were ~3,000 μg/m 3 (December 1952), resulting in high cardiopulmonary mortality. Since then, numerous studies regarding PM and human health have revealed that the cardiopulmonary system is particularly vulnerable to PM exposure [3][4][5]. Recently, the International Agency of Research on Cancer listed PM as a cause of lung cancer, based on studies showing that relatively low concentrations of PM had long-term effects on human health [6]. Since 2012, PM has increased in Korea, resulting in frequent episodes of poor air quality and increased public concern. Several studies have investigated the health consequences of this phenomenon, including ischemic heart disease, asthma, and hospital visits for Ménière disease related to PM [7][8][9][10][11]. INJ The underlying mechanisms of how PM causes or exacerbates cardiopulmonary disorders are not yet fully understood. The major theories supported by scientific evidence are inflammation or oxidative stress in the microenvironment, and vascular endothelial damage by PM [17,27]. These cardiovascular alterations might affect the kidney, because it is a highly vascularized, multifunctional organ of the human circulatory system. The susceptibility of the kidneys to environmental toxins can be explained by 4 factors: high blood flow, the ability of the kidney to concentrate toxic agents, the high metabolic activity of tubular cells, and the capacity of the kidney to dissociate protein-bound substances and to alter the pH of tubular fluid [12]. However, these 4 factors contributing to the susceptibility of the kidney have only been explained from a toxicological viewpoint, and the effect of particle size has not yet been considered. 
According to the aerodynamic diameter, 2 categories of particles are regulated by the Environmental Protection Agency: coarse particulate matter (PM10) with an aerodynamic diameter of < 10 μm, and fine particulate matter (PM2.5) with an aerodynamic diameter of < 2.5 μm [28]. Particles < 10 μm in diameter can penetrate the nasal cavity to reach the alveoli, thus reaching the lungs and escaping into the blood stream. The smaller a particle is, the longer it will stay in the deeper sites of the lungs; furthermore, particles < 1 μm act like gas molecules and reach the circulatory system [29]. If a small particle transitions into the vascular system, the kidneys theoretically become a direct target of PM. Recent epidemiologic studies have shown that PM2.5 [17,18] affected the decline of renal function and increased membranous nephropathy, meaning that the kidney is potentially susceptible to PM2.5. Although some studies have reported a relationship between PM and bladder and prostate cancer, the pathophysiology has not been proposed. Because most PM contains metals, gases, and various organic chemicals [30], the effects of PM should be considered both in terms of the toxicity of its components and the size of PM. Due to the tremendous renal capacity to compensate for functional loss, an early diagnosis of renal disease is difficult. For this reason, many studies have been done on end-stage renal disease to elucidate the effects of air pollution [31]. In this study, we reviewed the effect of PM on urologic disorders based on recent research in view of PM particle size and its constituents. PARTICULATE MATTER The term PM refers to a mixture of solid particles and liquid droplets in the air [32]. Both PM10 and PM2.5 are composed of inhalable particles of many sizes and shapes that contain hundreds of chemicals that react with each other. The main sources of PM are construction, unpaved roads, smokestacks and fires, power plants, the tire industry, and automobiles [32,33]. Naturally occurring PM comes from volcanoes, dust storms, forest fires, sea spray, and living vegetation [5]. The common chemical constituents of PM include inorganic ions (sulfates, nitrates, ammonium, sodium, calcium, and chloride), metals (cadmium, copper, nickel, vanadium, and zinc), polycyclic aromatic hydrocarbons, and microbial components. A major source of PM is traffic, due to brakes, tires, road dust, and pavement abrasion [34]. Indoor activities and sources, such as cooking, pets, carpet, aerosol cans, and office equipment, also generate PM [35]. As it is composed of particles < 10 μm in diameter, PM10 has the greatest effect on human health. PM with a diameter of between 5 and 10 μm is more likely to be deposited in the tracheobronchial tree, whereas PM between 1 and 5 μm can move down to the respiratory bronchioles and alveoli. PM < 1 μm in diameter can penetrate the alveoli, and can translocate into cellular tissues and the circulatory system [5]. The pathophysiology of PM toxicity in the human body is not fully understood. One hypothesis is that the mechanism involves metal-mediated processes. Metal contained in particles can mediate airway inflammation due to PM. When transitional metals are involved in this process, reactive oxygen species can be generated [36]. Several studies have reported that the elemental components of PM are associated with cell membrane disruption, a strong potential to induce proinflammatory cytokines, and tumor necrosis factor α-induced or mitochondriainduced apoptosis [37]. 
TOXICITY OF PARTICULATE MATTER CONSTITUENTS ON THE KIDNEY
The pathophysiology of PM depends on the size and toxicological properties of its constituents. Lead, cadmium, and arsenic are among the most common constituents of PM, and considerable research has been conducted on the renal effects of these metals [38]. All these metals can cause proximal tubular or interstitial damage and result in albuminuria or proteinuria. The available experimental evidence indicates that reactive oxygen metabolites, which can be generated by transition metals, cause glomerular disease and tubulointerstitial damage [39]. Another environmental risk to the kidneys is crystalline silica, which is known to be related to chronic renal failure. Although the toxic effect of crystalline silica on the kidneys is not yet understood, many studies have investigated chronic kidney disease in exposed populations [40].

It is well known that environmental lead exposure leads to renal insufficiency, with elevated blood pressure and impaired renal function as the major health consequences. The environmental sources of lead are the lead-based paint used in the past and leaded gasoline. The kidneys are critically affected by long-term lead exposure. Acute lead nephropathy is characterized by impairment of proximal tubular transport mechanisms (Fanconi syndrome), with degeneration of the tubular epithelium. Chronic exposure to lead can result in tubulointerstitial changes or chronic renal failure [41]. Cadmium is another important metal related to chronic kidney disease, and is produced by fuel combustion, household waste, tobacco smoke, and sewage. Cadmium exposure induces the synthesis of metallothionein, which is a cadmium scavenger in the liver. Cadmium-induced renal damage presents as proximal tubular dysfunction, hypercalcemia, and renal stones [38]. Arsenic exposure occurs from drinking water and food in the general population, and via inhalation when the exposure is occupational [42]. Although studies of arsenic-induced renal disease are rare, they have generally reported a positive association of arsenic with albuminuria and proteinuria in exposed populations; some studies showed a dose-response relationship [43]. Although many constituents of PM affect the renal system, no single component can adequately explain the overall health effect observed in epidemiologic studies, due to the scarcity of data [43].

Crystalline silica dust has been studied in the context of several occupational and environmental disorders, such as lung cancer, scleroderma, systemic lupus erythematosus, rheumatoid arthritis, and antineutrophil cytoplasmic antibody-associated vasculitis. Möhner et al. [40] conducted a meta-analysis exploring the association between respirable crystalline silica and nonmalignant renal disease. A total of 23 cohort and 4 case-control studies were included in the analysis. The authors found that the cohorts exposed to silica exhibited an elevated standardized mortality ratio (SMR), without a dose-response relationship. Cohorts with silicosis showed an overall SMR of 1.28 (95% confidence interval [CI], 1.01-1.62). The combined analysis of the industry-based cohorts resulted in an SMR of 1.52 (95% CI, 1.16-1.98). Because the dose-response analysis was heterogeneous in these cohorts, the authors concluded that there were diagnostic and methodological issues related to the elevated SMR.
PARTICULATE MATTER AND RENAL FUNCTION Previous studies on PM exposure and the circulatory system have identified renal function as an early index of cardiovascular disorders due to PM. Two prospective cohort studies and a quantitative study of consecutive patients have investigated whether decreased renal function is associated with PM exposure. The outcomes were albuminuria, microalbuminuria, and the estimated glomerular filtration rate (eGFR). O'Neill et al. [15] conducted a prospective cohort study as part of their multiethnic study of atherosclerotic populations to investigate urinary albumin as a subclinical marker of microvascular function affected by PM ( Table 1). The cohort consisted of 6,814 men and women aged 44-84 years who were free of clinical cardiovascular disease at baseline. Half of the study subjects were women, and their average age was 63 years old. The outcome of interest was creatinine-adjusted urinary albumin excretion. Subjects were classified into 4 categories of urine albumin excretion: normal, high-normal, microalbuminuria, and macroalbuminuria. Recent air pollution was assessed based on the participant's place of residence at the time of the baseline examination. Chronic exposure was estimated based on the residential history of each participant. Long-term exposure was estimated for PM10 and PM2.5 using direct measurements by an Environmental Protection Agency monitoring network. The estimated association between air pollution and creatinine-adjusted urinary albumin was mostly negative. There was only weak evidence that long-term exposure was associated with changes in microalbuminuria over time. The authors concluded that urinary albumin is not a marker for the mechanism underlying the association between the cardiovascular system and air pollution. Even if albumin levels are known to be well-correlated with microvascular dysfunction, the renal system is capable enough to compensate effectively in the healthy population. Lue et al. [16] analyzed the eGFR of acute ischemic stroke patients to elucidate the effects of PM. The authors hypothesized that the eGFR would be associated with proximity to the major roads where PM is produced, because most PM-related health outcomes are vascular diseases and due to the profound vascularity of the kidney. The study population consisted of consecutive patients aged ≥ 21 years with a confirmed history of acute ischemic stroke. The results showed that living near a major roadway was associated with a lower eGFR. Patients living within 50 m of a major road had a 3.9 mL/min/1.73 m 2 lower eGFR (95% CI, 1.0-6.7; P = 0.007) than those living within 1,000 m of a major road, and this result was not confounded or mediated by age, sex, race, history of hypertension, diabetes, or socioeconomic status. The authors explained that long-term exposure to traffic pollution leads to vascular endothelial injuries, systemic inflammation, atherosclerosis, and microvascular changes, which result in renal functional changes (Table 1). Mehta et al. [17] concluded that PM2.5 reduced renal function in their Veterans Administration Normative Aging Study. The study included a closed cohort of 2,280 male volunteers from the greater Boston (MA, USA) area who were 21-80 years old at study entry. The mean age of the participants was 73.5 years, and the majority were ex-smokers and used antihypertensive medications. 
One-year PM2.5 exposure was associated with a lower eGFR; more specifically, a 2.1 μg/m 3 interquartile range higher 1-year PM2.5 exposure was associated with a 1.87 mL/min/1.73 m 2 lower eGFR (95% CI, -2.99 to -0.76]. Notably, participants using angiotensin receptor blockers showed a null association, implying that angiotensin receptor blockers might minimize the vasoconstrictive effect of PM. The subjects analyzed by O'Neill et al. [15] were healthy, which might explain why they could not demonstrate a relationship between PM and albuminuria. By contrast, Lue et al. [16] included patients with acute ischemic stroke, and Mehta et al. [17] investigated an aged population who may have had agerelated renal function changes. From these findings, the glomerular filtration rate was shown to be a possible index for measuring early renal functional changes in unhealthy, vulnerable populations who are more sensitive to the effects of PM. CHRONIC KIDNEY DISEASE AND PARTICULATE MATTER Two studies analyzed PM-related chronic kidney disease. One of them identified the specific renal diseases of immunoglobu- lin A nephropathy and membranous nephropathy. Xu et al. [18] collected 71,151 renal biopsy series over 11 years to investigate the temporal change of glomerular diseases associated with PM2.5. The authors found that immunoglobulin A nephropathy was the most common type of glomerulopathy (28.1%), followed by membranous nephropathy (23.4%). After adjustment for age and region, the odds of membranous nephropathy increased by 13% during the 11 years of the study. An increase of 10 μg/m 3 in the PM2.5 concentration was associated with 14% higher odds for membranous nephropathy. Yang et al. [19] recruited 21,656 adult participants with a mean age of 53.65 years during their 2007-2009 Health Screening Program. They calculated the eGFR using the Taiwanese Chronic Kidney Disease Epidemiology Collaboration equation. Exposure was estimated via annual average concentrations of PM2.5, PM10, and PMCoarse (defined as PM10-PM2.5) at each participant's residential address. The results showed that exposure during the previous year to PM10 and PMCoarse, but not PM2.5, was associated with the prevalence of chronic kidney disease and reduced renal function among Taiwanese adults. The association between PM and chronic kidney disease was stronger in females than in males for PM10. The authors noted that a possible reason for the null association of PM2.5 might be the different constituents and toxicity of PM according to diameter. RENAL CANCER AND PARTICULATE MATTER Almost all results showing renal cancer to be associated with environmental exposure have demonstrated a weakly increased risk related to gasoline vapors, engine exhaust, trichloroethylene, asbestos, and polycyclic aromatic hydrocarbons [44]. However, few studies have investigated PM exposure and renal cancer. Raaschou-Nielsen et al. [21] explored the associations between traffic pollution and cancer incidence in a Danish cohort. In total, 57,053 men (48%) and women (52%) aged 50-64 years were recruited. The authors analyzed various cancers and components of air pollution. The incidence rate of kidney cancer was found to be weakly associated with nitrogen oxides, without statistical significance. Using the European Study of Cohorts for Air Pollution Effects, Raaschou-Nielsen et al. [20] performed a multicenter cohort study to investigate the association between PM in outdoor air and kidney cancer. 
The participants were 14 cohorts located in 10 areas in Europe. In total, 289,002 participants were enrolled for the pooled analysis. Higher hazard ratios (HRs) were associated with higher PM concentrations (HR, 1.57; 95% CI, 0.81-3.01 per 5 μg/m 3 of PM2.5), although the findings were not statistically significant. The authors concluded that the small number of kidney cancer cases and misclassification of the exposure might have resulted in statistical insignificance. BLADDER CANCER Studies on bladder cancer and PM have only recently been conducted, meaning that insufficient evidence is available to draw conclusions on the causal relationship. Two case-control studies, 1 ecologic study, and 3 populationbased cohort studies have been conducted on the association between bladder cancer and PM ( Table 2). All the results, except for 1 cohort study, showed a positive association. Yanagi et al. [45] analyzed the association between PM10 and cancer incidence and mortality. They found that the incidence of some types of cancer, including bladder cancer, showed a statistically significant correlation with PM10. Case-control studies conducted by Castaño-Vinyals et al. [46] and Liu et al. [47] showed a small to moderate positive association between several indices of air pollution and bladder cancer. In the results of Castano-Vinyals et al. [46], living more than 40 years in a large city was associated with bladder cancer (odds ratio [OR], 1.30; 95% CI, 1.04-1.63). Polycyclic aromatic hydrocarbons and diesel were associated with an increased risk (OR, 1.29; 95% CI, 0.85-1.98). Liu et al. [47] found a significant association between levels of air pollution and bladder cancer mortality (OR, 1.37; 95% CI, 1.03-1.82). The cohort studies reported uncertain correlations between bladder cancer and PM. Smith et al. [48] and Yeh et al. [22] found a positive association of air pollution and PM2.5 with bladder cancer. However, Pedersen et al. [23] did not find any such association in their study that included 15 populationbased cohorts. PROSTATE DISEASE Although studies on whether prostate cancer is related to PM exposure began earlier than studies of other urologic disorders, the evidence is still equivocal. The association between prostate cancer and air pollution has been studied since Winkelstein and Kantor [24] analyzed the association of prostate cancer and air pollution in Erie County and Nashville. Because the study was conducted using Parent et al. [26] conducted a case-control study to investigate the association between air pollution and prostate cancer using ground-level nitrogen dioxide (NO2) as a marker of traffic-related air pollution. They found that exposure to ambient concentrations of NO2 was associated with an increased risk of prostate cancer. Ramis et al. [25] presented research into the spatial distribution of prostate cancer mortality in an industrialized area. They used distances from each of a number of industrial facilities as an indirect measure of industrial pollution. They found a significantly elevated risk of prostate cancer (by a factor of approximately 1.4) in the immediate vicinity, decaying with distance to a value of 1.08 at 12 km. Few studies have reported that prostatic hyperplasia was positively related with air pollution [49]; well-designed research will be necessary in the future to address this issue. CONCLUSIONS Too few studies on the association between PM and urologic disease have been conducted to draw conclusions regarding the causal relationship. 
Research on PM and human health has expanded widely from the cardiorespiratory system to include respiratory cancers and perinatal and reproductive outcomes. The most widely acknowledged mechanism through which PM affects the cardiorespiratory system is damage to the vascular system, such as endothelial injuries to vessels in various organs. PM has been hypothesized to affect the kidney as a secondary effect of its damage to the respiratory or circulatory system. According to this hypothesis, PM-related vessel injury results in hypertension, which must be reflected in the renal tissue. Studies of PM exposure and the circulatory system conducted since 2008 have suggested that renal function may be an early index of cardiovascular disorders due to PM. Since then, the glomerular filtration rate has been used as a useful index to measure early renal functional changes in unhealthy, vulnerable populations. Considering the high vascularity of this organ, more research into the direct relationship between PM and chronic renal disease has been conducted since 2016. Two studies reported a significant association between PM exposure and chronic renal disease. However, the relationship between renal cancer and PM is controversial. Although some recently published studies have shown a significant association, these findings are insufficient in both quality and quantity to demonstrate the association. The evidence regarding the potential associations of PM with bladder and prostate cancer likewise does not allow firm conclusions to be drawn. Further well-designed studies on specific urologic diseases are required.
2018-04-03T05:32:15.031Z
2017-09-01T00:00:00.000
{ "year": 2017, "sha1": "f77bd6c3676a613c06591ee894501189646c612a", "oa_license": "CCBYNC", "oa_url": "http://www.einj.org/upload/pdf/inj-1734954-477.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f77bd6c3676a613c06591ee894501189646c612a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
236319379
pes2o/s2orc
v3-fos-license
High Hole Concentration and Diffusion Suppression of Heavily Mg-Doped p-GaN for Application in Enhanced-Mode GaN HEMT The effect of Mg doping on the electrical and optical properties of p-GaN/AlGaN structures on a Si substrate grown by metal organic chemical vapor deposition was investigated. The Hall measurement showed that the activation efficiency of the sample with a 450 sccm Cp2Mg flow rate reached a maximum value of 2.22%. No reversion of the hole concentration was observed, owing to the existence of stress in the designed sample structures. This is attributed to the higher Mg-to-Ga incorporation rate resulting from the restriction of self-compensation under compressive strain. In addition, by using an AlN interlayer (IL) at the interface of p-GaN/AlGaN, the activation rate can be further improved after the doping concentration reaches saturation, and the diffusion of Mg atoms can also be effectively suppressed. A high hole concentration of about 1.3 × 10¹⁸ cm⁻³ can be achieved in the p-GaN/AlN-IL/AlGaN structure.

Introduction The AlGaN/GaN high-electron mobility transistor (HEMT) on Si has received tremendous research attention for high-power device applications due to its large breakdown electric field, high electron saturation velocity, and good thermal conductivity [1]. In order to guarantee safe operation and simplify the circuit architecture, the AlGaN/GaN HEMT is made in the enhanced-mode (E-mode) configuration of normally-off operation [2]. The most common and commercially available E-mode HEMT is designed in the p-GaN/AlGaN/GaN HEMT configuration. The p-GaN raises the GaN conduction band of the AlGaN/GaN HEMT above the Fermi level, leading to the depletion of the two-dimensional electron gas (2DEG) channel at zero gate bias [3]. Therefore, an E-mode p-HEMT with a higher and more stable threshold voltage (Vth) is expected when the hole concentration is increased. However, Mg doping for higher hole concentrations has encountered several challenges, including (1) the compensation effect of donors due to native defects (VN) and dislocations [4-6], (2) low p-type activation caused by Mg-H complexes in GaN [7,8], (3) the self-compensation effect due to saturation-Mg-doping-induced donor-type defects [9-11], (4) the formation of pyramidal defects from Mg segregation on threading dislocations [12,13], and (5) Mg diffusion into the AlGaN barrier layer and GaN channel layer [14,15]. L. Sang et al. recently showed that the hole concentration and activation efficiency of Mg-doped p-GaN grown on a freestanding GaN substrate with a low dislocation density could be improved dramatically [6], and Yingda Chen et al. discovered that the growth technique of indium-surfactant-assisted delta doping could substantially enhance the hole concentration of a p-GaN/u-GaN homostructure grown on a 2-inch c-plane sapphire to 1.5 × 10¹⁸ cm⁻³ [16]. However, the issue of low activation efficiency for Mg-doped p-GaN/AlGaN hetero-structures on the more economical Si substrates remains. As the Mg doping increases, the deep-level emission dominates the photoluminescence (PL) and cathodoluminescence (CL) spectra [17,18]. This implies the formation of deeper donors that compensate holes, or the creation of deeper Mg acceptor levels rather than shallow acceptor levels, making it difficult to activate holes from the deep Mg acceptors to the valence band and decreasing the activation efficiency.
Therefore, it is essential to further investigate the effect of Mg doping on the electrical and optical properties in order to find the optimized growth conditions for better activation efficiency. Regarding Mg diffusion into the AlGaN barrier layer and GaN channel layer, Loizos Efthymiou et al. discovered that Vth shifts significantly with Mg diffusion [19]. CL measurements revealed Mg diffusion along dislocations [20]. Mg diffusion along edge-type and mixed-type dislocations was also evidenced by transmission electron microscopy and atom probe tomography [21,22]. As a result, it is crucial to explore how to suppress Mg diffusion for better device performance of Mg-doped p-GaN/AlGaN/GaN HEMTs. In the current work, the flow rate of Cp2Mg was modulated to grow Mg-doped p-GaN on AlGaN in order to study the effect of different Mg doping concentrations on the hole concentration and activation efficiency. PL experiments were carried out to investigate the deep emissions and self-compensation at various Mg doping levels. In addition, Hsien-Chin Chiu et al. demonstrated that a thin AlN etch-stop layer in the p-GaN/AlN/AlGaN/GaN HEMT structure can effectively improve the device RON uniformity and reduce the leakage current [23,24]. Thus, the influence of a thick GaN and a thin AlN interlayer (IL) at the interface of the Mg-doped p-GaN and AlGaN layers on the activation efficiency and Mg diffusion was also investigated in this study.

Materials and Methods The epitaxial structures of the Mg-doped GaN layers were grown by a metal organic chemical vapor deposition (MOCVD) system (Veeco Instruments Inc., Plainview, NY, USA) on 6-inch Si (111) substrates, as shown in Figure 1. The conventional source precursors, including trimethylaluminum (TMAl), trimethylgallium (TMGa), ammonia (NH3), and bis(cyclopentadienyl)magnesium (Cp2Mg), were used to grow the AlN, AlGaN, GaN, and Mg-doped p-GaN layers. To avoid Ga-Si melt-back etching, a 200 nm AlN nucleation layer was first grown at 1030 °C on the Si substrate. There are three types of samples, A, B, and C, as shown in Figure 1. All samples used the same step-graded AlGaN buffer, consisting of a 200 nm Al0.7Ga0.3N layer, a 300 nm Al0.5Ga0.5N layer, and a 300 nm Al0.3Ga0.7N layer grown at 1020 °C to modulate the stress and avoid cracking. The sample structures were designed for high Mg activation rates and for suppressing Mg diffusion into the underlying layers. For sample series A, 1000-nm-thick Mg-doped p-GaN layers were grown at 990 °C with different Cp2Mg flow rates of 0, 200, 450, 600, 750, and 900 sccm, labeled A0, A200, A450, A600, A750, and A900, respectively. For both samples B and C, the Cp2Mg flow rate was 900 sccm, in order to investigate the effect of undoped GaN (u-GaN) and AlN-IL on the Mg activation rate and diffusion. The post-growth thermal activation of the Mg-doped p-GaN was performed for 20 min at 720 °C under a nitrogen atmosphere. Secondary ion mass spectroscopy (SIMS) measurements were carried out on all samples to determine the Mg concentration in the p-GaN layer using an IMS-6f (CAMECA SAS, Gennevilliers, France). In order to investigate the electrical properties of the p-GaN, standard Hall effect measurements with the Van der Pauw method were conducted at room temperature using an HMS-3000 (Ecopia Corporation, Anyang-City, South Korea). The optical properties of all samples were studied using low-temperature photoluminescence (PL) spectroscopy under the excitation of a HeCd laser at 325 nm.
The threading dislocation density (TDD) was evaluated from the full width at half maximum (FWHM) scanned on the GaN (002) and (102) planes by X-ray diffraction (XRD, X'Pert Pro MRD, Malvern Panalytical, Almelo, The Netherlands). The characterization of structural strain was performed by Raman scattering. The effect of the Mg doping concentration on the surface morphology was examined by scanning electron microscopy (SEM, JSM7001F, JEOL, Tokyo, Japan), optical microscopy (OM, AL100, Olympus Corporation, Tokyo, Japan), and atomic force microscopy (AFM, NT-MDT Spectrum Instruments, Moscow, Russia).
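Before moving to the results, the sample matrix described in the Materials and Methods can be summarized in a compact, machine-readable form. The following sketch is not part of the original paper; the dictionary layout and field names are assumptions made purely for bookkeeping.

```python
# Illustrative bookkeeping of the sample series described above; the keys and
# field names are assumptions for this sketch, not the authors' data format.
samples = {
    # Series A: 1000 nm Mg-doped p-GaN grown directly on the step-graded AlGaN
    # buffer; the Cp2Mg flow rate (sccm) is the only variable.
    "A": {"interlayer": None, "cp2mg_flow_sccm": [0, 200, 450, 600, 750, 900]},
    # Sample B: p-GaN grown on a 200 nm undoped GaN template (Cp2Mg = 900 sccm).
    "B": {"interlayer": "u-GaN (200 nm)", "cp2mg_flow_sccm": [900]},
    # Sample C: a 2 nm AlN interlayer at the p-GaN/AlGaN interface (Cp2Mg = 900 sccm).
    "C": {"interlayer": "AlN-IL (2 nm)", "cp2mg_flow_sccm": [900]},
}

for name, info in samples.items():
    print(name, info["interlayer"], info["cp2mg_flow_sccm"])
```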
Results and Discussion The hole carrier concentration and activation efficiency as a function of Mg doping are shown in Table 1 and Figure 2. As can be seen, the hole concentration increases, with a corresponding decrease in mobility, as the Mg doping increases. Meanwhile, the resistivity decreases initially and then increases with the Mg doping. The activation efficiency (Mg doping efficiency), defined as the ratio of the hole concentration (obtained from Hall measurements) to the Mg doping density (measured by SIMS), increases initially, reaches a maximum value of 2.22% at a Mg doping of 2.42 × 10¹⁹ cm⁻³ (450 sccm), and then decreases with further Mg doping. This can be attributed to the Mg saturation concentration of about 2 × 10¹⁹ cm⁻³. Furthermore, the low-Mg-concentration behavior of our samples is similar to that of other reported data [11,25] for GaN:Mg hetero-epitaxial layers on sapphire substrates, which are plotted for comparison in Figure 2 as blue circles, green circles, and orange triangles. However, those reports all showed a reversion of the hole concentration after Mg saturation, owing to the self-compensation effect. Even when A. Klump et al. applied UV illumination to reduce H passivation and the self-compensation impact on GaN:Mg films, it helped only at concentrations below Mg saturation. In our case, when the Mg doping exceeds the self-compensation onset of 2.42 × 10¹⁹ cm⁻³ (450 sccm), it is worth noting that the activated hole concentrations still rise without any reversion of the hole concentration. The decrease in activation efficiency, however, could be ascribed to the onset of high-Mg-doping-induced defects, for example, the formation of Mg interstitials [9,17], nitrogen vacancies VN [9,26], MgGa-VN complexes [11,27], and pyramidal inversion domain (PID) defects [28,29]. Another scenario could be the increasing probability of forming Mg-N-Mg clusters. The rising formation probability of Mg-N-Mg double acceptors could split the acceptor level, create deeper acceptor states, and further decrease the density of single Mg shallow acceptors. The deeper acceptor states are not effective in creating free holes, leading to lower activation efficiency. For even higher Mg doping concentrations, the possibility of generating Mg3N2 clusters increases [30,31]. The formation of Mg3N2 clusters decreases the single-Mg concentration, and the energy states of Mg3N2 clusters are deep levels in the energy gap that do not contribute free holes. Consistent with this, we also observed precipitation of Mg-rich and pyramid-shaped defects in our SEM and optical microscope images, respectively, above the flow rate of 450 sccm (not shown here). The energy-dispersive X-ray spectroscopy (EDS) analysis of the 900 sccm sample also showed a Mg content of about 2.4% on the Mg-rich precipitates, around three times that of the blank background (0.79%). In addition, AFM images show that the root mean square (RMS) surface roughness increases from 0.49 to 1.75 nm in the 5 µm × 5 µm scan area as the Mg flow rate increases from 0 to 900 sccm.
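The activation-efficiency bookkeeping used above is a simple ratio, and the quoted numbers can be cross-checked in a few lines of Python. Only the AlN-IL values (1.3 × 10¹⁸ cm⁻³ at a Mg doping of 6.05 × 10¹⁹ cm⁻³) are taken from the paper; the A450 hole concentration is back-calculated here purely for illustration.

```python
# Minimal sketch: activation efficiency eta = p_Hall / [Mg]_SIMS.
def activation_efficiency(p_hall_cm3, mg_sims_cm3):
    """Ratio of the Hall hole concentration to the SIMS Mg density."""
    return p_hall_cm3 / mg_sims_cm3

# p-GaN/AlN-IL/AlGaN structure (values quoted in the abstract and conclusions).
print(f"AlN-IL sample: {activation_efficiency(1.3e18, 6.05e19):.2%}")   # ~2.1-2.2 %

# Sample A450: an efficiency of 2.22 % at [Mg] = 2.42e19 cm^-3 implies
p_a450 = 0.0222 * 2.42e19
print(f"Implied A450 hole concentration: {p_a450:.2e} cm^-3")           # ~5.4e17 cm^-3
```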
The PL spectra of the p-GaN films at 10 K with different Mg doping concentrations are shown in Figure 3. The PL of the undoped GaN film shows a sharp near-band-edge emission (NBE) at 3.46 eV (358.4 nm), as shown in Figure 3a. The broad emissions below 3.2 eV are attributed to defect emissions from the AlGaN layers. In addition, the oscillation in the PL intensity below 3.2 eV is due to Fabry-Perot interference of the whole sample structure. By measuring the energy separation ΔE of the two nearest peaks, the total sample thickness could be evaluated as d = hc/(2nΔE) ≈ 2 µm, where h, c, and n are the Planck constant, the speed of light, and the refractive index at the emission peak, respectively. When the Mg doping is turned on at 200 sccm, the native donor (VN) [4] to shallow Mg acceptor pair (DAP) emission dominates the PL spectrum, as can be seen in Figure 3b. The peak of the DAP emission is around 3.1 eV. As the Mg doping is further increased to 450 sccm, the peak energy of the blue luminescence (BL) lies near 2.8 to 3.0 eV (Figure 3c). The emission peak near 2.8-3.0 eV has been attributed to the deep donor-to-shallow acceptor transition [32,33]. These deep donors could be created by the heavy-Mg-doping-induced defects. The emission peak near 2.8-3.2 eV could also be ascribed to the recombination of a native donor and a heavy-Mg-doping-induced deep Mg acceptor. The PL spectra presented in Figure 3d-f, for the higher Mg doping samples, are basically the same as in Figure 3c. The peak intensities of the green luminescence (GL) and yellow luminescence (YL) become more prominent with increased Mg doping, which means that structural defects related to VN begin to increase [26,34]. In general, the collected PL data corroborate the results of the electrical measurements mentioned above (Table 1 and Figure 2). As the Mg doping exceeds 450 sccm, the additional Mg atoms incorporated into the GaN crystal generate not only more single Mg shallow acceptors but also more Mg-N-Mg deep acceptors or donor-type defects, leading to a drop-off in activation efficiency.
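The thickness estimate d = hc/(2nΔE) quoted above is easy to reproduce numerically. The fringe spacing ΔE and refractive index n are not stated in the text, so the values in this sketch are assumptions chosen only to illustrate how a total thickness of roughly 2 µm arises.

```python
# Fabry-Perot thickness estimate d = h*c / (2 * n * dE); dE and n are assumed values.
HC_EV_NM = 1239.84  # h*c in eV*nm

def fp_thickness_nm(delta_e_ev, n_index):
    """Total layer thickness inferred from the Fabry-Perot fringe spacing."""
    return HC_EV_NM / (2.0 * n_index * delta_e_ev)

d_nm = fp_thickness_nm(delta_e_ev=0.13, n_index=2.4)  # illustrative inputs
print(f"d ≈ {d_nm / 1000:.1f} µm")                     # ≈ 2 µm
```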
If the BL emission at 2.8-3.0 eV in Figure 3c is due to donor-to-deep acceptor recombination, the deep acceptors lie about 300 to 500 meV above the valence band, compared with the activation energy of the shallow acceptor of about 200 meV [35,36]. Therefore, the deep acceptors are activated less efficiently and offer fewer free holes in the valence band for conduction. A similar competition between two emissions was also reported recently by Hanxiao Liu et al. for their low- and high-Mg-doping samples [18]. They attributed the two emissions at 3.25 eV and 2.9 eV to the shallow donor-to-acceptor and deep donor-to-acceptor transitions, respectively. We suggest that the BL near 2.9 eV can be caused by both the deep acceptors and the deep donors. The deep acceptors should result from the Mg-rich and Mg3N2 precipitates and decrease the activation efficiency. The deep donors, donor-like defects arising from VN and Mg-VN complexes, can decrease the hole concentration through the self-compensation effect.

In order to investigate the effect of Mg diffusion on the activation efficiency, the electrical properties of samples B and C are discussed. Figure 4a shows the SIMS profiles of samples A900, B, and C. Mg diffusion is strongest for sample B, the p-GaN homo-epitaxy on the 200 nm GaN template. The difference in Mg diffusion between samples A900 and C is not significant. However, the hole concentration and activation efficiency are very different for the three samples, as shown in Figure 4b. Supposing that the hole concentration evaluated by the Hall measurement is contributed mainly by the top part of the p-GaN layers, the similar Mg doping concentrations at the top of the p-GaN layers for all three samples imply that the self-compensation effects are different. We would like to emphasize that the activation efficiency was effectively increased by decreasing the self-compensation effect, while the decrease in Mg diffusion was trivial, as extracted from the SIMS results. The p-GaN film grown on AlN-IL (2 nm)/Al0.3Ga0.7N has the best activation efficiency of 2.2%. For p-GaN grown on Al0.3Ga0.7N and on GaN, the activation efficiencies are 1.4% and 0.8%, respectively. This could be due to the strain between the layers suppressing the formation of the Mg doping-induced donor-type defects. These results indicate that aluminum, having a smaller atomic radius than gallium, can inhibit Mg diffusion and increase the compressive stress on the GaN:Mg film [37]. It is expected that a high Al composition could significantly suppress the self-compensation effect, reduce the Mg diffusion concentration, and further increase the hole concentration and activation rate.
Many research groups have also investigated the role of stable and metastable Mg-H complexes in the activation efficiency [7,8,25]. They discovered that the hole concentration is proportional to the density of H atoms from the Mg-H complexes measured by SIMS before thermal annealing. The Mg atoms that do not form Mg-H complexes can occupy interstitial sites, Mg-VN complexes, or nitrogen lattice positions (MgN). These are donor-type defects and play the role of self-compensation. As shown in Figure 5a, a higher H concentration was observed before annealing in the p-GaN/AlGaN and p-GaN/AlN-IL structures than in the p-GaN/GaN-IL structure. However, the p-GaN/AlN-IL structure displayed an H concentration similar to that of p-GaN/AlGaN, which cannot explain the higher hole concentration and activation efficiency with AlN-IL. Therefore, these two structures were measured by HRXRD rocking curves, and the FWHMs of the GaN (002) and (102) planes were used to calculate the threading dislocation densities (TDDs) [38]. The GaN (002)/(102) FWHMs of 678/1024 arcsec without AlN-IL correspond to screw/edge-type TDDs of 9.24 × 10⁸ and 3.13 × 10⁹ cm⁻², respectively. The screw/edge-type TDDs of 7.98 × 10⁸ and 3.51 × 10⁹ cm⁻² with the AlN-IL structure were calculated from GaN (002)/(102) FWHMs of 630/1028 arcsec. Contrary to the expected trend, the total TDD with AlN-IL slightly increased, from 4.05 × 10⁹ to 4.30 × 10⁹ cm⁻², indicating that the TDDs do not dominate the hole concentration in this case. We therefore exclude the effects of Mg-H complexes and TDDs as explanations for the increased activation efficiency after the Mg doping concentration reaches saturation. Furthermore, in Figure 5b, the PL spectra of p-GaN show that the photon intensities of the BL, GL, and YL decreased dramatically with AlN-IL. The lower concentration of self-compensation defects in the p-GaN on AlN-IL could be due to the greater compressive strain in the p-GaN. This is consistent with our Raman spectra, in which the GaN E2(high) and A1(LO) modes shift from 563.46 to 563.74 cm⁻¹ and from 722.26 to 726.19 cm⁻¹, respectively. The Raman blue shift implies greater compressive stress in the p-GaN epilayer with AlN-IL [39,40]. This effect is in agreement with the suppression of donor-like defects under greater compressive strain achieved by inserting an AlN interlayer into a Mg-doped GaN/AlGaN superlattice, as reported by Hu et al. [41,42]. Herein, we would like to stress that the existence of greater compressive stress in heavily Mg-doped GaN is crucial in affecting the self-compensation effect because it can effectively shift the Fermi energy and consequently increase the formation energy of the self-compensation defects [9].
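For completeness, the screw-type dislocation densities quoted above are consistent with the commonly used rocking-curve estimate D_screw = β²/(4.35·b_c²); the sketch below only verifies that consistency. The paper cites ref. [38] for its actual procedure, and the edge-type values obtained from the (102) scans involve an additional geometry-dependent factor that is not reproduced here.

```python
import math

# Screw-type TDD from the (002) rocking-curve FWHM, using the common estimate
# D = beta^2 / (4.35 * b^2) with beta in radians and b = c(GaN) = 0.5185 nm.
# Consistency check only; the paper's exact analysis follows ref. [38].
B_SCREW_CM = 0.5185e-7  # GaN c lattice constant in cm

def screw_tdd(fwhm_arcsec):
    beta = math.radians(fwhm_arcsec / 3600.0)
    return beta ** 2 / (4.35 * B_SCREW_CM ** 2)

print(f"without AlN-IL: {screw_tdd(678):.2e} cm^-2")  # ~9.2e8, matching the quoted value
print(f"with AlN-IL:    {screw_tdd(630):.2e} cm^-2")  # ~8.0e8, matching the quoted value
```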
This was also noted in other research [10,43,44], which found that a change in strain state from compressive to tensile is accompanied by the BL emission due to large local lattice relaxations caused by the generation of self-compensation defects. This study reveals that a high-Al-composition layer under the p-GaN layer can effectively enhance the hole concentration and significantly reduce the self-compensation effect. Furthermore, no reversion of the hole concentration was observed after Mg saturation. This finding is valuable for application in E-mode GaN HEMTs.

Conclusions In this study, the flow rate of Cp2Mg was modulated to grow heavily Mg-doped p-GaN on AlGaN for application in enhanced-mode HEMTs. A maximum activation rate of 2.22% was accomplished at a Mg doping of around 2.42 × 10¹⁹ cm⁻³. The further increase in the hole concentration with increasing Mg concentration reveals that the hole reversion could be restrained, owing to the decreased density of compensation-type defects resulting from the enhanced compressive strain. In addition, a high hole concentration of 1.3 × 10¹⁸ cm⁻³ with a high activation efficiency was also achieved by heavy Mg doping of around 6.05 × 10¹⁹ cm⁻³ in the p-GaN/AlN-IL/AlGaN structure. The diffusion of Mg can be effectively suppressed by inserting an AlN layer at the interface of the Mg-doped GaN and AlGaN. The current results provide important information for the growth of Mg-doped p-GaN with a high hole concentration for E-mode HEMT application.
2021-07-26T05:28:51.735Z
2021-07-01T00:00:00.000
{ "year": 2021, "sha1": "4b2cf3c5e19a4df31d6aaa6292a1b419005f383a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-4991/11/7/1766/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4b2cf3c5e19a4df31d6aaa6292a1b419005f383a", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Physics" ], "extfieldsofstudy": [ "Medicine" ] }
3428594
pes2o/s2orc
v3-fos-license
Ambitwistor formulations of $R^2$ gravity and $(DF)^2$ gauge theories We consider $D$-dimensional amplitudes in $R^2$ gravities (conformal gravity in $D=4$) and in the recently introduced $(DF)^2$ gauge theory, from the perspective of the CHY formulae and ambitwistor string theory. These theories are related through the BCJ double-copy construction, and the $(DF)^2$ gauge theory obeys color-kinematics duality. We work out the worldsheet details of these theories and show that they admit a formulation as integrals on the support of the scattering equations, or alternatively, as ambitwistor string theories. For gravity, this generalizes the work done by Berkovits and Witten on conformal gravity to $D$ dimensions. The ambitwistor is also interpreted as a $D$-dimensional generalization of Witten's twistor string (SYM + conformal supergravity). As part of our ambitwistor investigation, we discover another $(DF)^2$ gauge theory containing a photon that couples to Einstein gravity. This theory can provide an alternative KLT description of Einstein gravity compared to the usual Yang-Mills squared.

I. INTRODUCTION In a fascinating paper [1], Cachazo, He and Yuan constructed a way to write the n-point amplitudes for Yang-Mills and for gravity in D dimensions. They wrote the amplitudes as n-dimensional integrals on the support of the so-called scattering equations. Later on, the formalism was shown to be well-suited to describe other theories as well, such as bi-adjoint scalars [2], Dirac-Born-Infeld and the non-linear sigma model [3]. These compact formulae were subsequently shown to also arise from ambitwistor strings [4,5]. In this paper we will add three extra theories to the list of those that admit a simple CHY-type formulation. The first theory is the (DF)² theory constructed in [6]. This theory is related to conformal gravity [7] via the KLT relations [8]. We compute its lower-point amplitudes and subsequently find an n-point generalization that possesses the correct factorization channels. The CHY formulation makes the absence of all ε_i · ε_j terms in the amplitudes manifest, a property that is otherwise not apparent from a Feynman diagram representation. The second theory we consider is conformal gravity itself (more accurately, a D-dimensional R² theory which in D = 4 becomes conformal gravity; throughout the paper we will use these terms interchangeably, since there are other types of R² gravity but this is the only one of interest to us). For this theory we also propose a CHY formulation for the n-point amplitude and show that it factorizes correctly. The formula beautifully generalizes the one by Berkovits and Witten for conformal gravity [9], to which it reduces when considering the MHV sector in D = 4. Finally, we show that these theories can be given a straightforward interpretation in terms of ambitwistor strings. In our investigation of the corresponding ambitwistor string theories, we find a third theory which can be given a simple CHY formulation. This theory consists of a photon field governed by a (DF)² term and coupled to Einstein gravity. With these theories in hand, we can expand the usual matrix of possible ambitwistor theories with a new row/column. The new matrix of ambitwistor theories is shown in table I, with different choices of ambitwistor actions and the resulting theories coming from these actions. Note that the (Weyl)³ theory is just the usual bosonic ambitwistor string, corresponding to the choice (None, None).
At tree level, the theories we analyze can also be interpreted as sectors of previously considered ambitwistor models. For example, the conformal gravity given by the (Single Fermion, None)-choice is a sector of the heterotic ambitwistor string given by (Single Fermion, Current Algebra), in the same sense that Berkovits-Witten is a sector of Witten's twistor string [10] . In fact, the same is true for any pair of theories of the form {(X, None); (X, Current Algebra)}. Nonetheless, it is remarkable that the ambitwistor approach allows us to truncate the larger models and consider those sectors themselves as stand-alone theories, and the applicability of this is exemplified by the fact that the theories considered in this paper had not been discussed before in the context of ambitwistor strings. One should note that the theories studied in this paper are un-physical, due to the presence of modes with a wrong-sign propagator which render the theories non-unitary. However, they are interesting to study because of their relationships with well-known, physical theories. Conformal supergravity can be related to Einstein gravity in asymptotically (anti-) de Sitter space [11] and its U(1) anomaly can be used to study the similar anomaly in Poincaré supergravity [12]. Furthermore, the α ′ → ∞ limit of the heterotic string should also be related to some kind of conformal gravity. The theory with a (DF ) 2 photon coupled to Einstein gravity, which we mentioned above and will describe further on in section V, is also related to a physical theory. By taking a specific limit, it is possible (at tree level) to relate the amplitudes of the photons to graviton amplitudes from pure Einstein gravity. This provides an alternative route for generating (tree-level) gravity amplitudes through the double copy by merging the (DF ) 2 theory of Johansson and Nohle with the non-linear sigma model. As for the (DF ) 2 itself, apart from being a piece in the double-copy constructions, it is of interest for the ambitwistor string community since it helps clarify some aspects of the theory, as we show in this paper. The paper is structured as follows. We will begin by describing some basic properties of gluon amplitudes, the (DF ) 2 theory and the similarities between the amplitudes of this theory and those of Yang-Mills (section II). Then we will review the scattering equations and the CHY-formulation of amplitudes, as well as some functions that will prove useful later (section III). We will then argue that the theories in question give rise to amplitudes that are extremely simple when written in the CHY-formulation (section IV). Subsequently we show how these simple formulae can arise from ambitwistor theories (section V). Finally we sum up our results in the conclusions. The (DF ) 2 theory created by Johansson and Nohle [6] will play an essential role in this paper so in this section the theory will briefly be described. We should perhaps note that Lagrangians with similar operators have previously been studied for phenomenological reasons in [13][14][15][16][17][18] and because the operators arise as corrections in the α ′ expansion of bosonic open string theory [19][20][21]. It is however the specific theory introduced in [6] that interests us as it satisfies color-kinematics duality and gives conformal gravity through the double copy. In general the amplitudes of this theory have many features similar or identical to the beautiful features of Yang-Mills amplitudes. 
For this reason it will be useful to review some of the basic properties of Yang-Mills amplitudes. For starters the tree-level amplitudes of gluons in Yang-Mills theory can be written as a sum over single-trace color factors and corresponding color-ordered amplitudes: Using the Kleiss-Kuijf relations [22], this can be re-expressed as a sum over strings of structure constants: where the color-ordered amplitudes are the same as in (2.1). This is known as the DDM basis [23,24], and it is the form that the amplitudes from the ambitwistor string naturally appear in. The gluon amplitudes of Yang-Mills are also known to satisfy the color-kinematics duality [25] (see also [26]), which works as follows. Consider an n-point amplitude written in the form: where the c i 's are products of structure constants, the n i 's are kinematic numerators and the D i 's are products of propagators. There is a certain ambiguity in how the numerators are chosen because the c i 's are dependent on each other due to the Jacobi relations. However the color-kinematics duality tells us that it is possible to chose the numerators in such a way that they satisfy relations identical to the Jacobi relations for the corresponding color factors. For the color-ordered amplitudes the duality leads to the BCJ relations [25] (proven from a string theory perspective in [27] and from a field theory perspective in [28] using the BCFW recursion relations [29,30]). (2.4) Writing the amplitudes in a form satisfying color-kinematics duality has the advantage that it makes the relationship between Yang-Mills and Einstein gravity straightforward. If the numerators satisfy the duality, one simply replaces the color factors, c i , by another copy of the numerators, n i , in order to arrive at the amplitudes for gravity. This is known as the double copy and is equivalent to the KLT relations: where the color-ordered gauge-theory amplitudes have been packaged into column/row vectors of (n − 3)! size, and the matrix S is the (field theory) KLT kernel. Schematically we can write this as: These are the properties of Yang-Mills theory that will be relevant for our discussion of the (DF ) 2 theory which we will now turn to. The Lagrangian of this theory is given by: where the field strength and the covariant derivatives are defined as The scalar ϕ α transforms in a real representation of the gauge group, with generator (T a ) αβ . Some of the interactions are parametrized by symmetric Clebsch-Gordan coefficients C αab = C αba and totally symmetric d αβγ constants, which are only implicitly defined through the two relations From eq. (2.9), and together with the Lie algebra relations that trivially follow from infinitesimal group transformations we have a sufficient number of relations to reduce any tree-level Feynman diagram with external adjoint particles (and possibly internal scalars) to a sum over strings of f abc structure constants, or equivalently, a sum over single-trace factors Tr(T a 1 · · · T an ). So the gluonic amplitudes for this theory can also be expressed as in (2.1). Furthermore, the color-ordered amplitudes will obey the Kleiss-Kuijf relations by virtue of the fact that the trees can alternatively be expressed in terms of only f abc 's. Hence it is also possible to express the amplitudes of the (DF ) 2 theory in the DDM basis as well. Of course there are significant differences between Yang-Mills and the (DF ) 2 theory. 
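As a side note (standard counting, not a result of this paper), the relations mentioned above progressively shrink the number of independent color-ordered amplitudes: cyclicity leaves (n−1)! orderings, the Kleiss-Kuijf relations reduce them to the (n−2)!-dimensional DDM basis, and the BCJ relations reduce them further to (n−3)!, the vector size appearing in the KLT formula. The following snippet just tabulates these counts.

```python
from math import factorial

# Sizes of the standard bases of color-ordered amplitudes at n points.
def basis_sizes(n):
    return {"cyclic": factorial(n - 1),      # inequivalent trace orderings
            "KK/DDM": factorial(n - 2),      # after Kleiss-Kuijf relations
            "BCJ":    factorial(n - 3)}      # after BCJ amplitude relations

for n in (4, 5, 6, 7):
    print(n, basis_sizes(n))
# e.g. n = 6: 120 cyclic orderings, 24 in the DDM basis, 6 after BCJ.
```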
For instance, the (DF ) 2 theory will have 1/p 4 poles since the kinetic term has four derivatives, and in four dimensions the all-plus and single-minus amplitudes are non-vanishing A(± + + . . . +) = 0. The latter implies that the theory does not admit a supersymmetric generalization, which can also be seen from the presence of the F 3 term in the Lagrangian; this operator is well-known to be incompatible with supersymmetry. Besides the gluon and scalar states, the (DF ) 2 contain gluon ghost states (i.e. the linearized equations of motion for A µ has additional solutions) which have the wrong-sign propagator. According to standard field-theory arguments this suggest that the (DF ) 2 theory violates unitarity; however, this will not be important in the current context. As formal objects the tree amplitudes are well defined, and it is not surprising that such ghost states are present given the close relationship between (DF ) 2 and conformal gravity. The only caveat is that we need to be Notice how some of the color-ordered amplitudes have 1/p 4 poles and one of them has a u pole which is not possible in Yang-Mills for this particular ordering. These amplitudes however still satisfy the BCJ amplitudes relations (2.4) and it is possible to write the amplitudes in such a form that they satisfy color-kinematics duality (the relations (2.9) are necessary for the theory to satisfy the duality, and demanding that the theory satisfy the duality was part of how the color relations were found in [6]). Notice that the denominators, D i , in (2.3) will still be the same as they were in Yang-Mills theory, even though this theory contains double propagators. The extra poles will simply be absorbed into the numerator factors. As shown in [6] it is possible to get conformal gravity by using the double copy between the (DF ) 2 and ordinary Yang-Mills. Schematically, we write this as CG = (DF ) 2 ⊗ YM . (2.14) For the supersymmetric generalizations (N = 1, 2, 4 in D = 4 notation) we get conformal supergravity from the double copy where all the supersymmetry belongs to the SYM theory. At tree level and for adjoint external particles, we can write the double copy in terms of the KLT formula, As an example consider the following four-point MHV amplitude in conformal gravity (2.17) One can of course do the double copy where both numerators come from the (DF ) 2 theory. As will hopefully become clear in section V, the resulting theory will be the (Weyl) 3 theory that arises from the bosonic ambitwistor string [4]. III. THE SCATTERING EQUATIONS AND THE CHY FORMULA It is our goal to express the amplitudes of the theory described in section II in the CHY formulation. In this section we will therefore review some basics about the CHY formulation as well as some functions that will prove useful when considering the (DF ) 2 theory. The amplitudes of several quite different theories can be written in the following form in D dimensions: Here the prime on the product sign means that three of the delta function are left out: This is necessary as the scattering equations are SL ( in the denominator is also necessary in order not to integrate over infinitely many identical terms. It indicates that three of the integration variables will have to be fixed. The remaining part of the integrand is divided into two parts: a left integrand and a right integrand. When we turn towards the ambitwistor string theories, these two parts of the integrand will correspond to different parts of the string action. 
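To make the role of the scattering equations concrete, here is a minimal four-point check with illustrative Mandelstam invariants (not tied to any amplitude in this paper): after fixing σ1 = 0, σ2 = 1, σ3 → ∞ with the SL(2,C) freedom, the single remaining equation s14/σ4 + s24/(σ4 − 1) = 0 is solved by σ4 = −t/s.

```python
# Four-point scattering equations with illustrative massless kinematics (s + t + u = 0).
s, t = 3.0, -1.0
u = -s - t
s14, s24 = t, u          # s14 = s23 = t and s24 = s13 = u at four points

sigma4 = -t / s          # the unique solution after fixing sigma1, sigma2, sigma3
residual = s14 / sigma4 + s24 / (sigma4 - 1.0)
print(f"sigma4 = {sigma4:.4f}, scattering-equation residual = {residual:.2e}")
```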
In order to get Yang-Mills amplitudes, one can make the following choices for the left and right integrand: The dependence on the polarization vectors in the amplitude comes from the 2n × 2n antisymmetric matrix called M n . This matrix can be written in the following form: where the different submatrices are defined as: The Pfaffian of this matrix vanishes so the object appearing in the CHY formula is the reduced Pfaffian which is defined by removing rows and columns number k and l, then computing the Pfaffian of this smaller matrix and finally multiplying by (−1) k+l /σ kl . The choice of k and l is arbitrary. If one instead is interested in the amplitudes of Einstein gravity, one can choose both the left and the right integrand to be given by reduced Pfaffians: If on the other hand, one chooses both the left and the right integrand to be given by a color trace over a Parke-Taylor factor: one will end up with the amplitudes of a bi-adjoint scalar. A. Some useful building blocks In order to write the amplitudes for the (DF ) 2 theory from section II in the CHY form, it is necessary to use some additional building blocks, besides the ones that Yang-Mills and gravity amplitudes are constructed from. These building blocks must contain an additional factor of momentum squared as compared to the reduced Pfaffian used for Yang-Mills amplitudes. This can easily be seen by inspecting the Lagrangian: the term with three gluons also contains three derivatives (as opposed to one for Yang Mills), the term with four gluons contains two derivatives (as opposed to none for Yang Mills) etc. Fortunately such factors have already been discussed in the literature [31,32]. They can be written in terms of the following functions: where the trace is over Lorentz indices and the f 's are linearized field strengths: One also needs to introduce the following special case: A useful feature of these functions is that they are gauge-invariant. Equation (3.10) in order to make Möbius invariance manifest in the formula for the amplitude. Therefore we employ momentum conservation to re-express the function as: Here r is simply some external leg which is different from i. This is a better way of writing the function because σ i then appears twice in the denominator just like in the other functions in (3.8), making it easier to construct manifestly Möbius invariant quantities. From the above elements we construct the following permutationally invariant functions to be used when constructing the n-pt. amplitudes: Here the i's are chosen to satisfy: vectors. This exactly matches the counting mentioned above. We thus expect that the right integrand will consist of these functions in place of the reduced Pfaffian while the left integrand will remain the color trace over a Parke-Taylor factor just like in Yang-Mills. Indeed this expectation will turn out to be correct. One should notice that the functions defined in equation (3.12) are not all independent. They can be combined to give the Pfaffian of the matrix M n which as mentioned before is zero: As a consequence of this one gets that: Because of these relations there can be different ways of expressing the amplitudes. We will try to write the amplitudes in a way that makes the generalization to n-point amplitudes as straigthforward as possible. IV. THE AMPLITUDES Having described the CHY formalism as well as some functions that will prove useful, we can now turn our attention to the amplitudes of the (DF ) 2 theory described in section II. 
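Since the reduced Pfaffian Pf′M_n is the main ingredient of the integrands above, a small generic Pfaffian routine can be handy for numerical experiments. This is only a helper sketch based on the recursive expansion along the first row; it does not attempt to build the CHY matrix M_n from momenta and polarizations.

```python
import numpy as np

def pfaffian(a):
    """Pfaffian of an even-dimensional antisymmetric matrix via first-row expansion."""
    n = a.shape[0]
    if n == 0:
        return 1.0
    if n % 2:
        return 0.0
    total = 0.0
    for j in range(1, n):
        keep = [k for k in range(n) if k not in (0, j)]
        total += (-1.0) ** (j + 1) * a[0, j] * pfaffian(a[np.ix_(keep, keep)])
    return total

# Sanity check: Pf(A)^2 = det(A) for an antisymmetric matrix A.
m = np.random.randn(6, 6)
a = m - m.T
assert np.isclose(pfaffian(a) ** 2, np.linalg.det(a))
```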
We have computed the amplitudes up to 6 points using standard Feynman rules and then subsequently determined which of the previously described functions matched them. The expressions in the CHY formalism were evaluated using the tools developed in [33,34] (how to apply these tools to double poles has also been dealt with in [35]). We arrive at the following results for the amplitudes: The reason for the choices above is that they expose a rather simple pattern which is easy to generalize to n points. Based on the amplitudes above, we propose the following expression for the n-point amplitude: This equation is somewhat similar to the formula for the Yang-Mills amplitudes, only with the reduced Pfaffian, Pf ′ M n , replaced by the function 4W 11···1 . A curious property of this formula is that it contains no ε i · ε j -terms. This has interesting consequences upon dimensional reduction. Consider the case where we go from D dimensions to d dimensions. The D-dimensional gluon then splits into a d-dimensional gluon and D − d scalars. However the lack of any ε i · ε j -terms in the amplitudes tells us that the new scalars decouple. This property is not manifest in the Feynman rules and only appears after many different terms cancel each other. In order to support the claim that (4.5) is in fact the correct n-point generalization, we are going to check that it has the correct factorization channels. Since we only have a formula for the scattering of n gluon fields and none with the scalars in the theory as external states, we are going to focus on how the amplitude factorizes when a gluon goes on-shell. These are in any case the easiest factorization channels to determine since they provide a double pole when q 2 → 0 as opposed to the scalars which only give a single pole. A. Factorization In order to check the factorization channels of (4.5), the external momenta are divived into two groups: We then consider the case where the sum of the momenta in each group goes on-shell: If (4.5) is the correct n-point generalization, the formula should develop a q −4 R -pole and the residue of this pole be the product of two lower-point amplitudes of the same form. This will turn out to indeed be the case as can be demonstrated by considering different pieces of the formula individually. As shown in [2], the trick to study a factorization channel like the one above is to redefine the integration variables: The variables u 1 , u 2 and v n will be fixed in order to remove the SL(2, C) symmetry from the amplitude expression. In addition to this, the variable v n−1 will be consider to be fixed in exchange for treating s as an integration variable. This means that now four u, v variables are fixed. However one would expect there to be six (three for each amplitude). The last two of the fixed integration variables will be the ones corresponding to the new states arising from letting q 2 R go on-shell, and in the calculations to come the quantities will factorize into pieces that will look exactly as expected if the u and v variables corresponding to the new on-shell states have been set to zero. The s integration will be responsible for the pole. When q 2 R goes to zero, the variable will begin to behave like The order of the pole (or whether there is one) then depends on how many factors of s come from the different parts of the CHY expression. The individual factors will be dealt with in appendix A. We will only be interested in the dominant terms which will be the ones with the lowest power of s. 
Below is a summary of the powers of s for the (DF ) 2 theory contrasted with ordinary Yang-Mills: We see that the (DF ) 2 theory has an extra factor of s −2 compared to Yang-Mills, which is to be expected since this theory has double poles while Yang-Mills only has single poles. In the q 2 R → 0 limit, the amplitude of the (DF ) 2 theory then become proportional to The numerator can be understood as the product of the w (i) -functions for the new on-shell state. We therefore introduce polarization vectors for the intermediate state that has gone on-shell: Here · · · indicate terms proportional to q µ L or q ν R . These terms vanish as each lower point amplitude is gauge-invariant. Equation (4.13) can then be written as: The numerator is equivalent to two w (i) -functions with the u and v variables corresponding to the new on-shell state both having been fixed to 0. The remaining details can be found in appendix A. Putting them all together, one arrives at the conclusion that (4.5) does indeed satisfy the correct factorization properties: As a final comment about factorization, let us focus on some terms that do not play a role in (4.18), but are nonetheless interesting. They are some of the sub-leading terms from the color part of the CHY formula. The only terms that contribute to (4.18) are those where the color generators in the trace separate nicely into one product of generators for the L set and one product of generators for the R set. As a shorthand, we could denote these as the Tr(LR)terms. However, one could also consider the Tr(LRLR)-terms. Such terms do not generate a pole in Yang-Mills theory as they correspond to having an intermediate state which is not in the adjoint representation of the gauge group. However they do generate a simple pole in the (DF ) 2 theory, which is to be expected since this theory does in fact contain particles that are not in the adjoint representation, the scalars. To conclude, this section showed that (4.5) factorizes into two amplitudes of the same form when a 1/q 4 propagator was put on-shell. It also showed that the expression for the amplitude requires that the theory contain particles that are in a different representation of the gauge group than the adjoint. Both these observations support the claim that (4.5) is in fact the correct n-point amplitude for the (DF ) 2 theory. B. Conformal gravity amplitudes Conformal gravity can be found through combining the (DF ) 2 theory described in section II with standard super Yang-Mills in the KLT relations [8]. In the CHY formalism, one can simply replace the color factor in (4.5) with the reduced Pfaffian from Yang-Mills: This should be the D-dimensional formula for conformal gravity (up to some overall constant). As a simple check for this formula let us point out that using the factorization properties of the reduced Pfaffian: it is straightforward to show that the formula factorizes correctly. Another simple check is to focus on the 4-dimensional MHV amplitudes. It is believed that in this case, there is only one relevant solution to the scattering equations [36][37][38]. It can be written in terms of spinors as follows: Here |χ is an arbitrary spinor not collinear with |1 or |2 . This solution was proven to give the correct n-point MHV amplitude for Yang-Mills theory and Einstein gravity in [39] where it was also shown that, at least up to 9-point, the other solutions to the scattering equations make the reduced Pfaffian vanish. 
Compared to those two theories, the only new element in (4.19) is the function W 11···1 which, on this particular solution to the scattering equations and assuming that particles 1 and 2 are the only negative helicity gluons, can be written as: where |η is another arbitrary spinor not necessarily identical to |χ . As a side note let us point out that this way of simplifying the CHY formulation in 4dimensional MHV case will not work for the (DF ) 2 theory. This is due to the fact that the function W 11···1 is just a product of functions for each individual on-shell leg (the w (i) 's from equation (3.10)), which means that for the function to be zero, one of these functions will have to be zero. These functions only depend on the helicity of the given external leg and not on all the other helicities. So if we imagine that a given solution to the scattering equations does not contribute to the all plus amplitudes because it sets w 1 to 0, then all other amplitudes where the helicity of particle 1 is positive will also not get contributions from this solution to the scattering equations. We should also note that of supersymmetrizing (4.19) is essentially the same as the problem for Yang-Mills theory since the supersymmetry in the R 2 theory derives from this theory (see equation (2.16)). If it is possible to construct a simple CHY-formulation for the amplitudes of N = 4 super Yang-Mills, it should therefore be straightforward to construct a supersymmetric version of equation (4.19) as well. V. AMBITWISTOR INTERPRETATION The fact that the amplitudes of conformal gravity and the (DF ) 2 theory can be written as CHY formulae suggests that there should be ambitwistor string theories [4,5] corresponding to them. In this section we will briefly review ambitwistor string theory and show which specific choices of the worldsheet action lead to the amplitudes given in equations (4.5) and (4.19). A. Review The ambitwistor string theories can be thought of as chiral worldsheet models describing the interactions of massless states. In the simplest example, bosonic strings, the action is given by where X µ (µ = 0 to D − 1) denotes the string coordinates in the D-dimensional target space, P µ are their conjugate momenta and e is a Lagrange multiplier enforcing the constraint P 2 = 0. Because of this first-class constraint, the model is invariant under the following local symmetry, in addition to reparameterization invariance: for some transformation parameter α. One can use this symmetry to gauge-fix e = 0, and then the standard BRST procedure yields the gauge-fixed action together with the BRST charge Physical states correspond to vertex operators in the cohomology of Q, which in this case contains only 2 V = c c P µ P ν ǫ µν e ip·X (5.5) and its integrated version BRST-closedness requires p 2 = p µ ǫ µν = 0, while the analysis of BRST-exact states implies the gauge transformation δǫ µν = p (µ ǫ ν) for some parameter ǫ µ such that p µ ǫ µ = 0. Thus, these operators correspond to an on-shell graviton. However, if one computes the correlation function containing three unintegrated vertex operators, the result does not agree with the expected three-point amplitude coming from Einstein gravity. In fact, it is of order six in the momenta. In [4], the authors could not interpret the result in terms of any known theory of gravity, although they mention that it could be related to a (Weyl) 3 vertex. 
The tree-level n-point function is given by with P µ constrained to take its value as P µ (σ) = n i=1 p (i) µ /(σ − σ i ). Note that, using the language introduced in section 3, this amplitude can be cast as 8) and the appearance of the W 11···1 function squared indicates that this theory will be the result of squaring the (DF ) 2 theory via the double copy. This purely bosonic model can be generalized in many different ways. To do so, the standard procedure consists of adding two other terms to the action (5.1), S L and S R , which ultimately correspond to the left and right integrands in CHY formulae (cf. (3.1)). In perhaps the most successful example, both S L and S R are RNS-like fermion systems, with the important difference that in the ambitwistor case all worldsheet fields are left-moving (holomorphic). The complete action is given by: where Ψ µ 1 , Ψ µ 2 are the worldsheet fermions and χ 1 , χ 2 are fermionic Lagrange multipliers for the fermionic constraints P · Ψ 1 , P · Ψ 2 . Gauge-fixing the Lagrange multipliers to zero via the BRST procedure, one ends up with the usual RNS-like bosonic (anti)ghosts (β 1 , γ 1 ) and (β 2 , γ 2 ), in addition to the same (anti)ghosts as before. The BRST charge is now given by and its cohomology contains the vertex operator together with corresponding picture-number-zero or integrated versions, where ǫ µ 1 , ǫ ν 2 combine to form the graviton, Kalb-Ramond and dilaton polarizations. One can show that the tree-level n-point correlation function of these vertex operators gives rise to the CHY formula (3.6) when restricted to gravitons. Another possibility for (S L , S R ) is to replace one of the fermionic systems of the previous model with an action for a generic current algebra, S C . Then one can define the currents J I satisfying the OPE where ℓ is the so-called level of the algebra and f IJ K are the structure constants of the gauge group. The BRST charge of this model has the same form as (5.10), with the obvious differences that now the sum over r comprises only one term and the energy-momentum tensor is the one corresponding to the new gauge-fixed action. This theory is reminiscent of the usual heterotic string theory, and its spectrum also contains two sectors: the gauge one and the gravity one. However, the latter does not correspond to the usual Neveu-Schwarz sector of heterotic strings, and in particular it contains a 3-form potential whose interpretation was unclear in the original work by Mason and Skinner. In the gauge sector, the following vertex operator belongs to the cohomology of Q: where T I denotes the generators of the gauge group. BRST invariance imposes p 2 = p · ǫ = 0, and the vertex operator is BRST-trivial if ǫ µ ∝ p µ . Therefore, it describes an on-shell gluon. When restricted to single-trace contributions, the tree-level n-point correlation function in- From the review in the previous subsection, it should be clear that there is a correspondence between the choice of (S L , S R ), the vertex operators and the correlation functions of a given ambitwistor string. We summarize the results presented so far in the following table. In the above, 0 signifies that S L or S R are absent from the model, e.g. (0, 0) represents the bosonic ambitwistor string. Moreover, "Vertex" denotes the contribution to the simplest vertex operator and I L/R the two different parts of the integrand in the CHY formulation of amplitudes (cf. (3.1)). 
More precisely, the (single-trace) tree-level n-point correlation function of any (S L , S R )-model gives rise to a CHY formula containing I L and I R . Thus, by comparing with (4.5), we see that the CHY formula for the (DF ) 2 -theory can be obtained via the ambitwistor model (J, 0), while a comparison with (4.19) leads to the conclusion that the CHY formula for conformal supergravity can be obtained through the model (Ψ, 0). Since, to the best of our knowledge, models of the type (S L , 0) have not yet been explored in the literature, it is worth to discuss them in a bit more detail. In the (J, 0) case, the action is given by where L C is the Lagrangian corresponding to a generic current algebra. The gauge-fixing procedure is almost identical to the one for the bosonic case, and we are left with the BRST-charge which looks exactly the same as (5.4), but now T includes the energy-momentum tensor T C corresponding to L C . Accordingly, the central charge receives a contribution c C from the gauge sector, and is given by c (J,0) = 2(D − 26) + c C . Thus, one can make c (J,0) vanish in a given number of dimensions by choosing the current algebra appropriately. However, we need not concern ourselves much about this since we only work at tree level. The cohomology of Q (J,0) contains the vertex operator V (J,0) = c c P · ǫ e ip·X J I T I , (5.16) together with its integrated version -which as usual amounts to replacing the ghosts with d 2 σδ(p · P ). This expression is BRST-invariant if and only if p 2 = p · ǫ = 0, and ǫ µ ∝ p µ renders it BRST-trivial, hence it corresponds to an on-shell gluon. It is easy to see that the tree-level n-point correlation function computed with these operators gives rise to (4.5). Note that the cohomology also contains gravity states, a feature common to all known ambitwistor string theories. In this case, the graviton vertex operators are identical to the ones in the bosonic model, given in (5.5) and (5.6), and thus the 3-point amplitude exhibits the same (Weyl) 3 behavior. As anticipated in the introduction, it is a general property of (S L , 0)models that the states and tree-level amplitudes obtainable from one such model can also be obtained from an (S L , J)-model, and the appearance of gravity states in the (0, J)-model is just a consequence of that. By the same token, the (J, 0)-model can be identified with a sector of the more general (J,J)-model, which contains bi-adjoint scalars transforming under two potentially different gauge groups. It is remarkable that the ambitwistor framework allows such a truncation, i.e. that some sectors can be treated as theories on their own. We will encounter another example of that in the following. Let us now discuss the (Ψ, 0) ambitwistor string, which gives rise to the tree-level n-point amplitude in (4.19). The action of the model is given by After gauge-fixing e = χ = 0, one gets the BRST charge whose cohomology contains the vertex operator together with corresponding picture-number-zero or integrated versions, where ǫ µ 1 , ǫ ν 2 combine to form the graviton, Kalb-Ramond and dilaton polarizations. Restricting to gravitons, one can show that the tree-level n-point correlation function of these vertex operators gives rise to the CHY formula (4.19). However, since the central charge is computed to give c (Ψ,0) = 5 2 D − 41, it is not possible to make sense of this model beyond tree level, in any (integer) number of dimensions. 
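As a small arithmetic aside supporting the last statement: requiring the quoted central charge to vanish gives

c_{(Ψ,0)} = (5/2) D − 41 = 0  ⟹  D = 82/5 = 16.4,

which is not an integer, so no choice of spacetime dimension makes the (Ψ, 0) model critical.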
Note that, at tree level, this model is equivalent to the gravity sector of the heterotic ambitwistor string, given by (Ψ, J). Indeed, the current-algebra part of the heterotic model is inert in the gravity sector, which implies that the cohomology and correlation functions are the same as those in the (Ψ, 0) model. In particular, the (Ψ, 0) model also contains the unexpected (from the Einstein-gravity point of view) massless 3-form first encountered in [4], whose picture-number −1 vertex operator is given by with p µ A µνρ = 0. Therefore, we conclude that the gravity sector of the heterotic ambitwistor string describes conformal supergravity, and it is then natural to interpret that theory as a generalization of Witten's twistor string theory. We will come back to this point shortly. This is reminiscent of the (Ψ 1 , Ψ 2 ) model, and indeed the action and BRST operator are the same as (5.9) and (5.10), respectively. Hence, one would naively think that the spectrum and correlation functions of the two models are identical. However, putting both fermion systems on the same side of the model translates into having weaker GSO-like conditions. To make this point clearer, consider the following state: Since there is no current algebra in this particular model, the state in (5.21) corresponds to a U(1)-field, i.e. a photon. One can show that the tree-level n-point correlation function of these photon states gives where M A,n is an n by n matrix identical to one of the submatrices of the bigger matrix M n defined in (3.5). From this discussion, it is evident that one more row can be added to the table above [5]: Let us now consider the amplitude in (5.22) from the quantum field theory point of view. It arises from combining the (DF ) 2 theory with the non-linear sigma model in the KLT relations. 3 By inspecting the amplitude, we find that up to four points the simplest Lagrangian for this theory is given by: We will refer to this theory as the (DF ) 2 -photon theory. Note that the ordinary Einstein gravity appears as part of this Lagrangian and that the coupling constant for its self-interaction is the same as for its interaction with the gravitons. From the ambitwistor string theory point of view, the appearance of Einstein gravity is fairly obvious since both the vertices (5.11) and Consider an amplitude of 2n (DF ) 2 -photons, group the photons into n pairs and take the limit where the propagator for each pair goes on-shell. In this scenario, the amplitude in (5.22) behaves in the following way: where the matrix M A,2n can be written in the following form (where i and j only run over the odd numbers): By comparing with the formula for Einstein gravity (3.6), one sees that this is the amplitude of n gravitons with momenta p i + p i+1 where the polarization vectors have been replaced by i+1 . This makes it clear also from the quantum field theory perspective that the (DF ) 2 photon couples to Einstein gravity. C. Connection to Witten's twistor string Even though we only discuss bosonic states in this paper, it should be said that the spectrum of the (Ψ, J) ambitwistor string theory also contains fermions and is in fact supersymmetric -see [40] for a description in the pure-spinor context. In ten dimensions, the gauge sector corresponds to SYM, while the gravity sector must be equivalent to the R 2 conformal supergravity studied by de Roo in [41] -see also [42] -, since the action presented in that paper is supposed to be unique. 
From our point of view, it is then natural to interpret this theory as a D-dimensional generalization of Witten's twistor string theory [10]. In four dimensions, the gauge sector describes N = 4 SYM, while the gravity sector reduces to the conformal supergravity sector analyzed by Berkovits and Witten [9]. Indeed, the CHY formula (4.19) can be obtained from the gravity sector of this ambitwistor theory. Note also that a massless 3-form has no propagating degrees of freedom in four dimensions. In summary, we have the following table of approaches to the same theory: Witten's twistor string, where φ^3 stands for the bi-adjoint scalar theory, whose amplitudes can be obtained in the CHY representation through the (J,J) ambitwistor string. It would be very interesting to obtain a more direct relation between the heterotic ambitwistor string and the twistor string studied by Berkovits and Witten, for example at the level of vertex operators. We plan to address this question in future work. VI. CONCLUSIONS In this paper, we introduced three new, elegant CHY-type formulae and provided an ambitwistor string interpretation for each of them. The string actions are all of the type (S_L, 0) so, together with the bosonic ambitwistor string, they form an entire new row/column in the matrix of possible ambitwistor models. First we considered the (DF)^2 theory introduced in [6]. The CHY formulation of this theory is simple and exposes a property of the amplitudes that is far from obvious from the Feynman diagram perspective, namely the absence of ε_i · ε_j terms. The second theory we considered was an R^2 theory of gravity which in D = 4 becomes conformal gravity. Our work can therefore be seen as a D-dimensional generalization of the paper [9] by Berkovits and Witten, and our CHY formulation of the amplitudes does in fact reduce to their result in the appropriate limit. Finally, we looked at the (DF)^2-photon theory. This theory arose naturally from our studies of the previous two theories. An interesting feature of this theory is that the photon couples to regular Einstein gravity. This may seem surprising since the theory can be described using the KLT combination of the (DF)^2 theory and the non-linear sigma model, neither of which contains Einstein gravity. The role of the scalars is in general interesting, if somewhat mysterious. They are essential for the (DF)^2 theory to satisfy the color-kinematics duality, but their strange color structure leads to non-planar diagrams making contributions to tree-level amplitudes. For instance, this means that in the four-point amplitudes, the numerator n_s could get a term proportional to 1/u (terms like this can of course be removed through redefinitions of the numerators, but only in exchange for similarly weird terms in the other numerators). This in turn makes the interpretation of the function of the fields in the double copy a bit hazy, because it means that an internal graviton carrying momentum p_1 + p_2 somehow is the product of a gluon with the same momentum and a scalar carrying momentum p_1 + p_3. Perhaps a closer look at the amplitudes of the scalars will provide some answers. It should be fairly straightforward to get some of the amplitudes from the Tr(LRLR)-terms arising in the factorization limit, as described towards the end of section IV A. To see how the factorization behaviour used in (4.13) comes about, just consider the scattering equations for the particles in the R set. Multiplying the equation for particle i by v_i(v_n − v_i)/(s v_n) and summing over all of the particles belonging to R gives an expression that imposes the behaviour of s in (4.12).
In total, the delta functions for the R set combine into a single expression, while the delta functions for the particles in the L set have a straightforward behaviour under the shift. Putting the factors of s together from (A2), (A3) and (A4), we get that the dominant behaviour will be s^{n_L − n_R − 2}, as in the table on page 16. The different terms in the sum depend differently upon s, so we will begin by determining which have the lowest power of s. Each term contains n factors of (σ_i − σ_j)^{-1}. If both i and j belong to L, such a factor will contribute with s^{-1}, while if they both belong to R, it will contribute with s. If i belongs to L and j belongs to R or vice versa, such a factor will contribute with s. As a consequence, the terms with as few factors of (σ_i − σ_j)^{-1} where i and j belong to different sets will be the terms with the lowest power of s. This is perhaps not surprising from the point of view of the color factor, as the amplitude is thus split into a product of two planar amplitudes, with one only containing the particles from the set L plus an intermediate state and the other only the ones from the set R plus the intermediate state. The factor involving the traces over the gauge group generators looks exactly as one would expect if we imagine the u and v variables corresponding to the new on-shell state to have been fixed to zero. We note that the dominant term is proportional to s^{n_R − n_L + 2}, as mentioned in the table on page 16. Finally, we consider the function W_{11···1}, or rather the individual functions that it is a product of, the w^(i)'s. If i belongs to the set R, this function takes one limiting form, while it takes another for i belonging to L. We see that in both cases the dominant terms depend only on the other particles in the same set, in addition to a term depending on the momentum of the internal propagator that has gone on-shell. From the above expressions we see that W_{11···1} will contribute with a factor of s^{n_R − n_L}, as mentioned in the table on page 16.
2017-10-06T15:34:57.000Z
2017-07-07T00:00:00.000
{ "year": 2017, "sha1": "879a887b1c323de897921a7a42377fbe20925748", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP11(2017)052.pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "879a887b1c323de897921a7a42377fbe20925748", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
15033553
pes2o/s2orc
v3-fos-license
Expression of nicotinamide N-methyltransferase in hepatocellular carcinoma is associated with poor prognosis Background Hepatocellular carcinoma (HCC) is the most common tumor in the adult liver, with high relapse and mortality rates despite diverse treatment modalities. In this study, nicotinamide N-methyltransferase (NNMT), a key enzyme in drug metabolism, was investigated as a potential prognostic factor. Methods Frozen tumors and non-cancerous surrounding tissues from 120 patients with primary HCC were studied. Expressions of NNMT and internal control genes were measured by real-time reverse-transcription PCR (RT-PCR). The relationship of NNMT mRNA level with clinicopathologic parameters and clinical outcome was evaluated. Results NNMT mRNA level is markedly reduced in HCCs compared to non-cancerous surrounding tissues (P < 0.0001), and NNMT expression in tumors was significantly correlated with tumor stage (P = 0.010). Moreover, stratification of patients based on tumor NNMT mRNA levels revealed that the patients who expressed higher NNMT mRNA levels tended to have a shorter overall survival (OS) time (P = 0.053) and a significantly shorter disease-free survival (DFS) time (P = 0.016). Both NNMT expression (P = 0.0096) and tumor stage (P = 0.0017) were found to be significant prognostic factors for DFS in a multivariate analysis. Conclusion The results of this study indicated that NNMT gene expression is associated with tumor stage and DFS time in HCC cases. Because of the broad substrate specificity of NNMT, which could alter the efficacy and adverse effects of chemotherapy, NNMT merits further investigation regarding its role as a prognostic factor with a larger cohort of HCC patients. Background Hepatocellular carcinoma (HCC) is the fifth most common cancer worldwide and the most common form of liver cancer, being responsible for 80% of primary malignant tumors in adults. HCC causes more than 600,000 deaths annually worldwide [1] and its endemic prevalence in Asia, including South Korea, makes HCC one of the top causes of death in this region. HCC is a type of tumor that is highly resistant to available chemotherapeutic agents, administered either alone or in combination [2]. Thus, in many cases, no effective therapy can be offered to patients with HCC. Therefore, it is of vital importance to identify important prognostic factors and novel molecular targets of HCC to develop targeted therapies, ultimately advancing therapeutic strategies of HCC in general. Current evidence indicates that the precancerous liver and the early stages in HCC development are characterized by certain common traits governed by both genetic and epigenetic mechanisms [3,4]. These include the alteration of numerous signaling pathways leading to autonomous and deregulated cell proliferation and resistance to cell death [4][5][6][7]. Therefore, it is important to better understand the roles of deregulated genes in hepatocellular carcinogenesis. Derangements in various methylation processes in liver diseases have been identified [8,9], including increased nicotinamide methylation in cirrhotic patients [10]. Nicotinamide N-methyltransferase (NNMT) catalyzes the N-methylation of nicotinamide, pyridines, and other structural analogues [11]. It is involved in the biotransformation of many drugs and xenobiotic compounds. Although several studies indicated differential expression of NNMT in HCC specimens [12][13][14][15], the clincopathologic relevance of NNMT expression has not been fully investigated. 
The aim of the present investigation was to examine whether NNMT expression could be used to predict the clinical course of HCC. Using a real-time RT-PCR analysis of NNMT gene expression, we found significant correlation between NNMT mRNA levels and poor prognosis of HCC. Thus, potential biological changes related to NNMT gene expression require further study, as they may have implications in predicting clinical outcome and choosing treatment modalities, due to the central role of NNMT in biotransformation and detoxification. Methods Patients and tissue samples HCC (T) and corresponding non-cancerous hepatic tissues (NT) were obtained with informed consent from 120 patients who underwent curative hepatectomy for primary HCC between 2001 and 2006 in the Department of Surgery, Samsung Medical Center, Korea. The study proto-col was approved by the Institutional Review Board of Samsung Medical Center. Complete clinical data were available in all 120 cases (median follow-up, 50 months; range, 3 -92 months). The patients, ranging in age from 21 to 78 years (mean, 51.3 years) and having adequate liver function reserve, had survived for at least 2 months after hepatectomy, and none received treatment prior to surgery such as transarterial chemoembolization or radiofrequency ablation. Clinicopathologic features of the 120 HCCs in this study are described in Table 1. Surgically resected specimens were partly embedded in paraffin after fixation in 10% formalin for histological processing and partly immediately frozen in liquid nitrogen and stored at -80°C. All available hematoxylin and eosin stained slides were reviewed. The tumor grading was based on the criteria proposed by Edmondson and Steiner (I, well differentiated; II, moderately differentiated; III, poorly differentiated; IV, undifferentiated) [16]. The conventional TNM system outlined in the cancer staging manual (6th ed.) by the American Joint Committee on Cancer (AJCC) was used in tumor staging. RNA extraction and cDNA synthesis Total RNA was extracted from cancerous and surrounding non-cancerous frozen tissues using an RNeasy minikit (Qiagen, Germany) according to the manufacturer's instructions. The integrity of all tested total RNA samples was verified using a Bioanalyzer 2100 (Agilent Technologies, United States). DNase I treatment was routinely included in the extraction step. Residual genomic DNA contamination was assayed by a quantitative real-time PCR assay for GAPDH DNA and samples with contaminating DNA were re-subjected to DNase I treatment and assayed again. Samples containing 4 μg of total RNA were incubated with 2 μl of 1 μM oligo d(T) 18 primer (Genotech, Korea) at 70°C for 7 min and cooled on ice for 5 min. The enzyme mix was separately prepared in a total volume of 11 μl by adding 2 μl of 0.1 M DTT (Duchefa, Netherlands), 2 μl of 10× reverse-transcription buffer, 5 μl of 2 mM dNTP, 1 μl of 200 U/μl MMLV reverse-transcriptase, and 1 μl of 40 U/μl RNase inhibitor (Enzynomics, Korea). After adding the enzyme mix to the annealed total RNA sample, the reaction was incubated for 90 min at 42°C prior to heat inactivation of reverse-transcriptase at 80°C for 10 min. The cDNA samples were brought up to a final volume of 400 μl by the addition of diethylpyrocarbonate (DEPC)-treated water. Quantitative real-time PCR Real-time PCR amplifications were carried out in 384 well plates according to the instructions of the manufacturer, using Applied Biosystems PRISM 7900HT instruments. 
The real-time PCR analysis was performed in a total volume of 10 μl with 5 μl of 2× Taqman gene expression master mix (Applied Biosystems, United States), 1 μl each of 5 μM forward and reverse primers and 1 μM probe (Genotech), and 2 μl of cDNA (or water as a control, which was always included). The amplification steps were as follows: an initial denaturation step at 95°C for 10 min, followed by 40 cycles of denaturation at 95°C for 15 sec and elongation at 60°C for 1 min. The primer and probe sequences (for B2M, GAPDH, HMBS, HPRT1, SDHA, and NNMT) were designed using Primer Express 3.0 software (Applied Biosystems), and all probe sequences were labeled with FAM at the 5' end and with TAMRA at the 3' end. Expression of NNMT mRNA was measured (as the number of cycles required to achieve a threshold, or C_T) in triplicate, and then normalized relative to a set of reference genes (B2M, GAPDH, HMBS, HPRT1, and SDHA) by subtracting the average of the expression of the 5 reference genes [17]. Using the ΔC_T value (NNMT C_T − average C_T of reference genes), the mRNA copy number ratio was calculated as 2^−ΔCT. Standard curves were constructed from the results of simultaneous amplifications of serial dilutions of the cDNA samples. Statistical analysis All statistical analyses were done with the open source statistical programming environment R (http://www.r-project.org/). Significant differences between gene expression levels were evaluated by Student's t test. Correlation between gene expression and clinicopathologic variables was evaluated using a χ² test. Categorical clinicopathologic variables were classified as in another study on HCC prognosis [18], and continuous clinicopathologic variables were classified by cutoff values close to their medians as in other studies [19,20]. For instance, the cutoff value of 100 ng/ml for AFP level has been used in another study [19], and the cutoff values of 52 and 56 years have been used in other recent studies [18,20]. Kaplan-Meier survival curves were calculated using tumor recurrence (defined as the first appearance of a tumor at any site following definitive treatment) or death as the end points. Differences between overall survival or disease-free survival curves were examined by the log-rank test. In addition, the Cox proportional hazards regression model was used to identify independent prognostic factors for overall survival and disease-free survival. Two-tailed P values were used, and a P value of < 0.05 was considered statistically significant. Results Expression of NNMT gene in hepatocellular carcinoma We performed real-time RT-PCR for NNMT mRNA from frozen paired samples derived from 120 patients with HCC. A total of 120 HCCs (T) and 40 non-cancerous hepatic samples (NT) were assessed by real-time RT-PCR. Expression of NNMT mRNA was measured in triplicate, and then normalized relative to a set of reference genes (B2M, GAPDH, HMBS, HPRT1, SDHA) by subtracting the average of the expression of the 5 reference genes [17]. NNMT mRNA was significantly lower in T than in NT tissues (2.47 vs 35.75; median copy number ratio, P < 0.0001) (Figure 1). The reduced expression of NNMT mRNA in HCC is consistent with findings of other studies, including research employing microarray measurements [12][13][14][15].
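The normalization just described is a one-line computation; the sketch below (with made-up C_T values, not data from this study) shows how the 2^−ΔCT copy-number ratio is obtained from triplicate NNMT measurements and the five reference genes.

# Copy-number ratio via delta-CT against the mean of five reference genes.
# All C_T values below are invented for illustration only.
import numpy as np

ct_reference = {"B2M": 20.1, "GAPDH": 18.7, "HMBS": 26.3, "HPRT1": 24.9, "SDHA": 23.5}
ct_nnmt_triplicate = [27.8, 27.6, 27.9]

ct_nnmt = np.mean(ct_nnmt_triplicate)
delta_ct = ct_nnmt - np.mean(list(ct_reference.values()))   # NNMT C_T minus reference average
copy_number_ratio = 2.0 ** (-delta_ct)

print(f"dCT = {delta_ct:.2f}, copy-number ratio = {copy_number_ratio:.4f}")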
In addition, NNMT mRNA was higher in recurrent tumors than in non-recurrent tumors (3.93 vs 1.56; median copy number ratio, P = 0.21), especially in stage III & IV tumors (7.26 vs 0.95; median copy number ratio, P = 0.056), although the differences were not statistically significant (data not shown). Relationship between tumor NNMT mRNA level and clinicopathologic features To better understand the significance of NNMT expression in HCC, we correlated the mRNA expression level with the major clinicopathologic features. The statistically most significant cutoff value of NNMT mRNA level discriminating between patients with a good prognosis and patients with a poor prognosis was used. As shown in Table 1, NNMT expression was significantly associated with tumor stage (P = 0.010) in the 120 HCCs. However, no correlation was observed between NNMT mRNA level and other clinicopathologic parameters (age, gender, virus, liver cirrhosis, tumor size, Edmondson grade, and AFP level) (P > 0.05). Impact of tumor NNMT mRNA levels on OS and DFS During the follow-up observation period of up to 92 months, locoregional recurrence or distant metastases occurred in 72 patients (60%) and death was confirmed in 35 patients (29%). To assess the prognostic significance of NNMT expression, we analyzed overall survival (OS) and disease-free survival (DFS) rates using the Kaplan-Meier method. At the 5-year follow-up, approximately 79% of the patients with low NNMT expression (< 4.40; copy number ratio) survived, whereas 60% of the patients with high NNMT expression (≥ 4.40; copy number ratio) survived (Figure 2A). Similarly, at the 5-year follow-up, approximately 45% of the patients with low NNMT expression were disease-free, whereas 22% with high NNMT expression were disease-free (Figure 2B). The log-rank test showed that patients who expressed higher NNMT mRNA levels tended to have a shorter OS time (P = 0.053) and a significantly shorter DFS time (P = 0.016). A univariate Cox regression analysis was used to identify important prognostic factors of OS and DFS. High Edmondson grade (grade I vs II, P = 0.020; grade I vs III-IV, P = 0.019), high AFP level (P = 0.0070), large tumor size (P = 0.00012), and high tumor stage (stage I vs II, P = 0.0068; stage I vs III-IV, P = 2.2 × 10^-5) were identified as important risk factors for OS (Table 2), whereas high NNMT mRNA level (P = 0.018) and high tumor stage (stage I vs III-IV, P = 0.0049) were identified as important risk factors for DFS (Table 3). In a multivariate Cox analysis, both NNMT expression (P = 0.0096) and tumor stage III & IV (P = 0.0017) were found to be significant prognostic factors for DFS (Table 4). Discussion The metabolism of drugs, toxic chemicals, and hormones is important in the fields of pharmacology and endocrinology given its implication in many pathophysiological processes, such as cancer and resistance to chemotherapy [21]. One of the key enzymes involved in biotransformation and drug metabolism is NNMT, which catalyzes the N-methylation of nicotinamide, pyridines, and other structural analogues [22,23]. NNMT is predominantly expressed in the liver, where its activity varies with a bimodal frequency distribution, thus raising the possibility that a genetic polymorphism might play a role in regulating the enzyme activity [23]. Lower expression is observed in other organs such as the kidney, lungs, placenta, heart, and brain.
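The survival comparisons described above (Kaplan-Meier curves, the log-rank test, and the Cox models of Tables 2-4) follow a standard workflow. The study itself used R; the minimal Python sketch below, using the lifelines package on synthetic toy data, is only meant to show the shape of that workflow, not to reproduce the study's numbers.

# Kaplan-Meier / log-rank / Cox sketch with synthetic data (not the study's data).
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months":    [12, 50, 33, 70, 8, 60, 24, 90, 15, 45, 30, 55],  # follow-up time
    "relapse":   [1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1],             # event indicator (DFS endpoint)
    "high_nnmt": [1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0],             # copy-number ratio >= 4.40
    "stage34":   [1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1],             # tumor stage III & IV
})

high, low = df[df.high_nnmt == 1], df[df.high_nnmt == 0]
kmf = KaplanMeierFitter()
kmf.fit(high.months, high.relapse, label="high NNMT")               # one Kaplan-Meier curve
lr = logrank_test(high.months, low.months,
                  event_observed_A=high.relapse, event_observed_B=low.relapse)
print("log-rank P =", lr.p_value)

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="relapse")             # multivariable Cox model
cph.print_summary()                                                 # toy-sized fit, API illustration only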
Although several studies indicated differential expression of NNMT in HCC [12][13][14][15], the role of NNMT in the molecular pathogenesis of HCC has yet to be elucidated. This study focused on NNMT as a potential molecular marker responsible for determining clinicopathologic features and the prognosis of HCC. Utilizing a large number of HCC specimens, the quantitative real-time PCR assay showed that the expression of NNMT is markedly reduced in HCCs compared to non-cancerous surrounding tissues, consistent with other studies [12][13][14][15]. Stratification of HCC specimens based on NNMT gene expression levels showed that NNMT expression was significantly correlated with tumor stage (P = 0.010). More importantly, the log-rank test showed that patients who expressed higher NNMT mRNA levels tended to have a shorter OS time (P = 0.053) and a significantly shorter DFS time (P = 0.016). Both NNMT expression (P = 0.0096) and high tumor stage (P = 0.0017) were found to be significant prognostic factors for DFS in a multivariate analysis. It is not clear why NNMT expression level was a significant prognostic factor for DFS but not for OS. We believe that the limited follow-up time was not the main cause of the lack of correlation between NNMT and OS, because the events (death or relapse) were rare after the median follow-up time of 50 months in our cohort. Our analysis of NNMT expression in correlation with the clinicopathologic features and prognosis of HCC yielded the novel finding that NNMT mRNA levels could be used as a prognostic factor for DFS. Figure 1 Box and whiskers plot for NNMT mRNA levels in non-cancerous liver (NT) and HCC (T) determined by real-time RT-PCR. The box is marked by the first and third quartile with the median marked by a thick line. The whiskers extend to the most extreme data point which is no more than 1.5 times the interquartile range from the box. The mechanism for reduced expression of NNMT and its relation to HCC progression is not clear. Several metallothionein genes involved in detoxification and drug metabolism are downregulated in HCC, especially in tumors with high Edmondson grades, reflecting de-differentiation of cancer cells [12]. Thus, it is possible that the liver-specific function of NNMT is lost during the progression of HCC. On the other hand, a recent in vitro study found that NNMT was necessary for cancer cell migration in bladder cancer cell lines [24], pointing to a possible involvement in tumor invasion. In the 120 HCCs observed in this study, NNMT mRNA was higher in recurrent tumors than in non-recurrent tumors, especially in stage III & IV tumors, although the differences were not statistically significant. Thus, there is a possibility that increased NNMT expression is related to cell mobility and tumor invasiveness in high-stage HCC. Interestingly, the NNMT expression level was decreased in stage II tumors compared to stage I tumors, while stage III & IV tumors showed a similar NNMT level as stage I tumors. This could be due to tumor de-differentiation preceding tumor invasion. However, we cannot rule out other regulatory mechanisms independent of tumor de-differentiation and invasion. In tumors, abnormal expression of NNMT has been reported in glioblastoma [25], stomach cancer [26,27], papillary thyroid cancer [28,29], colon cancer [30], and renal carcinoma [31,32].
NNMT was identified as a novel serum marker for human colorectal cancers, although this protein is not thought to be secreted [30]. Interestingly, the upregulation of NNMT was found to be inversely correlated with tumor size in renal clear cell carcinoma, suggesting that the enzyme may be significant in an initial phase of malignant conversion [32]. Increased expression of NNMT in non-tumor cells was reported in a few situations: the cerebellum of patients with Parkinson's disease [33,34], human hepatoma cells (Huh7) with expression of the hepatitis C core protein [35], and the liver of mice transplanted with tumors [36,37]. In these situations, the mechanism for deregulated NNMT expression remains unclear. Recently, the NNMT promoter was cloned and studied in papillary thyroid cancer cell lines, where it was shown to be activated by hepatocyte nuclear factor-1β [29]. Subsequently, it was found that the NNMT promoter region also contains the consensus sequences for signal transducers and activators of transcription (STAT) binding elements and nuclear factor-interleukin (IL) 6 binding elements [38]. Accordingly, the hepatoma cell line Hep-G2, which expressed low levels of NNMT, increased NNMT expression several fold upon stimulation by IL-6. The stimulation by IL-6 was largely abolished with the expression of dominant-negative STAT3 [38]. Activation of STAT3 alone caused a four-fold higher induction of NNMT promoter activity in the transformed Hep-G2 cells. Thus, NNMT expression could be regulated by IL-6 and STAT3 in a subclass of HCC. The expression of NNMT analyzed in relation to the expression of related regulatory molecules could improve the predictive power for HCC prognosis. To our knowledge, this is the first report of NNMT as a prognostic factor of DFS in HCC. The findings herein indicate that NNMT is an attractive target for therapeutic regulation because it is involved in drug metabolism and could alter the efficacy of standard chemotherapeutic drugs. Additional research in larger populations of HCC patients is warranted. Conclusion We found that NNMT was associated with the tumor stage and that higher NNMT mRNA levels in HCC were significantly associated with shorter DFS time. It is very important to develop new target molecules and to establish novel chemotherapy strategies in malignancies such as HCC, which shows frequent relapse and high mortality despite various treatment modalities. The broad substrate specificity of NNMT suggests that it could alter the efficacy and/or adverse effects of standard doses of chemotherapeutic drugs. Therefore, NNMT merits further study for its role as a prognostic factor of OS and DFS with a larger cohort of HCC patients. Moreover, NNMT itself could be a target for chemotherapeutic agents. Establishing the molecular interactions of NNMT with diverse molecular pathogenic factors in HCC will enable new studies and development of effective therapeutic regimens. Figure 2 Kaplan-Meier curves for OS and DFS of patients with high and low NNMT mRNA levels after surgery. A, patients with high NNMT mRNA levels (≥ 4.40; copy number ratio) tended to have a shorter OS time (P = 0.053). Broken lines, patients with low NNMT mRNA levels (n = 72); thin lines, patients with high NNMT mRNA levels (n = 48). B, patients with high NNMT mRNA levels had a significantly shorter DFS time (P = 0.016).
Broken lines, patients with low NNMT mRNA levels (n = 72); thin lines, patients with high NNMT mRNA levels (n = 48).
2017-06-29T20:05:42.764Z
2009-02-16T00:00:00.000
{ "year": 2009, "sha1": "e86d673a4668a824ac81c18892b4559f0bf79650", "oa_license": "CCBY", "oa_url": "https://jeccr.biomedcentral.com/track/pdf/10.1186/1756-9966-28-20", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ab55df2c34181ab60db4f6f47747021025fd6786", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
118004168
pes2o/s2orc
v3-fos-license
A Conventional Physics Explanation for the Anomalous Acceleration of Pioneer 10/11 Anderson, et al., find the measured trajectories of Pioneer 10 and 11 spacecraft deviate from the trajectories computed from known forces acting on them. This unmodelled acceleration can be accounted for by non-isotropic radiation of spacecraft heat. Various forms of non-isotropic radiation were proposed by Katz, Murphy, and Scheffer, but Anderson, et al. felt that none of these could explain the observed effect. This paper calculates the known effects in more detail and considers new sources of radiation, all based on spacecraft construction. These effects are then modelled over the duration of the experiment. The model provides a reasonable fit to the acceleration from its appearance at a heliocentric distance of 5 AU to the last measurement at 71 AU, but overpredicts by 9% the decrease in acceleration between intervals I and III of the Pioneer 10 observations. (For comparison, the two different measurements of the effect (SIGMA and CHASMP) themselves differ by 4% in interval III.) In any case, by accounting for the bulk of the acceleration, the proposed mechanism makes it much more likely that the entire effect can be explained without the need for new physics. I. INTRODUCTION In [1], Anderson et al. compare the measured trajectory of several spacecraft against the theoretical trajectory computed from known forces. The find a small but significant discrepancy, referred to as the unmodelled or anomalous acceleration. It has an approximate magnitude of 8 × 10 −8 cm s −2 directed approximately towards the Sun. Needless to say, any acceleration of any object that cannot be explained by conventional physics is of considerable interest. Explanations for this acceleration fall into two general categories -either new physics is needed or some conventional force has been overlooked. One of the most likely candidates for the anomalous acceleration is non-isotropic radiation of spacecraft heat. This is an appealing explanation since the spacecraft dissipates about 2000 watts total; if only 58 watts of this total power was directed away from the sun it could account for the acceleration. Several possible mechanisms have been debated in the literature, but none are totally satisfactory. In this paper we re-examine each proposed mechanism, explicitly including their time dependence. We propose several additional mechanisms -asymmetric RHU heat, misdirected feed radiation, and mis-modelled solar reflectivity. Finally, we compare the acceleration induced by the proposed mechanisms with the measured data, and get reasonable agreement over the whole data span. * Electronic address: lou@cadence.com II. THE ANOMALOUS ACCELERATION As the Pioneer spacecraft receded from the sun, solar forces decreased and only gravitational forces, and an occasional maneuver, affected the trajectory of the spacecraft. Anderson, et al. noticed that a small additional acceleration needed to be added to the known forces to make the measured data and computations match. This is the anomalous acceleration, which started to become noticeable about 5 AU from the sun, and was roughly the same for Pioneer 10 and 11. The onset is shown in Figure 1. Further constraints come from the ongoing study of Pioneer 10, where there are fewer confounding effects and the data span is long enough to provide significant constraints due to the radioactive decay of the heat sources. Figure 2, reproduced from [2], shows the measured acceleration 1987 to 1998. 
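A quick check of the 58-watt figure quoted above (using the 241 kg spacecraft mass given later in section V): a directed radiated power P produces a recoil acceleration a = P/(c m) = 58 W / (3.0 × 10^8 m s^-1 × 241 kg) ≈ 8.0 × 10^-10 m s^-2 = 8.0 × 10^-8 cm s^-2, matching the magnitude of the anomaly.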
(Although they have different horizontal axes, Figure 2 largely follows Figure 1 chronologically. Pioneer 10 was at 40 AU in 1987.) The authors divide the 1987-1998 Pioneer 10 history into three intervals. Interval I is January 1987 to July 1990, interval II from July 1990 to July 1992, and interval III is from July 1992 to June 1998. The authors make this distinction by looking at the spin rate of the craft -in intervals I and III it was decreasing smoothly, but in interval II it decreased quickly and irregularly. They therefore consider the data from interval II to be less reliable than intervals I and III, since whatever affected the spin (probably gas leaks) may also have affected the acceleration. More recent analyses have refined these results somewhat, though the main conclusions remain unchanged. Table I shows the most recent results from [3], which fits a constant, independent acceleration in each interval. Accelerations are in units of 10 −8 cm s −2 . SIGMA and CHASMP are two different and largely independent trajectory modelling programs; the difference between the programs is our best estimate of the real uncertainties since it is far greater than the formal errors. This data, taken at face value, shows that 57 directed watts can account for the acceleration in 1998, and that a 3% decrease was observed between interval I and interval III. III. PREVIOUS WORK Many paper [4] and web [5] descriptions of the Pioneer spacecraft are available. In this section we summarize the existing literature on the hypothesis that non-isotropic radiation is responsible for the unmodelled acceleration. Murphy [6] (and a related proposal by Scheffer [7]) suggests that the anomalous acceleration seen in the Pioneer 10/11 spacecraft can be, "explained, at least in part, by non-isotropic radiative cooling of the spacecraft." Katz [8] proposes that at least part of the acceleration is generated by radiation from the RTGs reflecting off the back of the antenna. Slusher (as credited by Anderson) proposed that the forward and backward surfaces of the RTGs may emit non-equally. Anderson, et al. argue in reply [3,9,10] that none of these proposed sources adequately account for the acceleration. IV. DISCUSSION We consider asymmetrical radiation from 4 sourcesthe RTG heat (direct radiation and reflection off the antenna), the electrical power dissipated by the spacecraft, the radioisotope heater units (RHUs) on the spacecraft, and radiation from the feed that misses the antenna. We also consider one modelling error, a mis-estimation of the reflectivity of the antenna to solar radiation. The available power from all these sources changes in time. In the following discussion, let d be the date, in years. The sunward side of the spacecraft is the back, and the antisunward side, in the direction of motion, is the front [11]. We calculate thrust in units of watts of directed (antisunward) radiation. A. Radiation of spacecraft power First, consider thermal radiation from the body of the spacecraft. A thought experiment shows that the electrical power dissipated in the spacecraft must result in thrust. The simplest model consists of the main compartment as a 60 watt isotropic radiator, and the back of the antenna a mirror. The antenna subtends 120 degrees as seen from the instrument compartment, so if the emitted radiation is isotropic, the antenna intercepts 1/4 of the total radiation, and reflects it away from the sun. 
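The 1/4 interception figure follows from the solid angle of a cone of half-angle 60° (the 120° subtended by the antenna): Ω/4π = (1 − cos 60°)/2 = 1/4.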
Since the main compartment is centered behind the antenna, and since the sides, if anything, are worse radiators than the front, we conclude that at least 25% of spacecraft electrical power must be converted to thrust. A more detailed analysis shows the radiation is even more anisotropic than these arguments would suggest. Assuming a uniform internal temperature and closed louvers, the power emitted from each surface is proportional to the area times the "effective" emissivity of the surface [12]. The sides and the rear of the compartment are covered with multi-layer insulation(MLI) [4], with an effective emissivity of 0.007 to 0.01 [13]. The lowest emissivity material on the front of the spacecraft is the surface of the louvers, with a emissivity of 0.04 [4]. Since the sides and the front have comparable surface areas, then about 80% of the total power will be radiated though front (though hard to characterize heat leaks could reduce this value). Frontal radiation would be expected to be about 66% efficient, assuming Lambertian emission. Defining ǫ BUS as the fraction of main compartment heat that is converted to thrust, we then expect ǫ BUS to range between 0.25 (blockage arguments) to 0.52 (differential emissivity). From [3], the total electrical power is modelled E(d) = (68 + 2.6 (1998.5 − d)) watts and the thrust (assuming an 8 watt radio beam) is Feed pattern of the radio beam An ideal radio feed antenna would illuminate its dish uniformly, with no wasted energy missing the dish. However, the feed is physically small and cannot create such a sharp edged distribution, so some radiation always spills over the edge. This radiation is converted to thrust with an efficiency of 1.7 since it directly subtracts from the sun directed power and adds anti-sun power at a roughly 45 degree angle to the spin axis. This produces thrust where ǫ F EED is the fraction of RF power that misses the antenna. Since dish area is wasted if not fully illuminated, an optimum feed (for transmission) will result in ǫ F EED ≈ 0.1. C. Radiation from the RHUs From diagram 3.8-1 in [4], 10 1-watt (in 1972) radioisotope heater units are mounted to to external components (thrusters and the sun sensor) to keep them sufficiently warm. The diagram is not very specific, but the units to which they are mounted are primarily behind the main dish. Radiation from these components will contribute thrust, which we model as where ǫ RHU is the proportion of RHU heat converted to thrust. Reasonable values for ǫ RHU might range from 0.0 to 0.5, with the latter corresponding to components behind the dish radiating uniformly. D. Radiation from the RTGs The RTGs might contribute to the acceleration by radiating more to the front of the spacecraft than the rear, and/or by having their heat reflected asymmetrically from the spacecraft. The RTGs radiate all the thermal power that is not turned into electricity, so RT G HEAT (d) = (2580 watts)2 −(d−1972)/88 − E(d) In [3], direct radiation asymmetry is estimated to contribute to thrust with an efficiency of at most ±0.003. RTG reflection by the antenna was proposed by Katz, but argued against by Anderson, primarily on the grounds that the RTGs are on-axis as seen by the antenna. We re-examine this argument here. From figure 3.1-2 of [4], we see that the centerline of the RTGs is behind the center of the antenna. Measurements from this diagram indicate this distance is about 23.8 cm. Figure 3.1-3 of [4] shows the far end of the RTGs is 120.5 inches (or 3.06 meters) from the centerline. 
From this geometrical data we can estimate the area blocked by the antenna from each RTG [14]. Numerical integration of these areas, assuming Lambertian emission by the RTGs, shows about 0.6% of the near RTG radiation and 0.4% of the far RTG radiation fall upon the dish. This energy is turned into thrust by two effects. First, the antenna shadows radiation which would otherwise go forward. An angle in the middle of the antenna is about 17 degrees forward; this corresponds to an efficiency of 0.3 (the true efficiency is probably higher since the edge is both at a greater angle and more brightly illuminated). Next, the energy that hits the antenna must go somewhere. Some will be absorbed and re-radiated; some will bounce into space, and some will bounce and hit the instrument compartment, and be reflected or re-radiated from there. A detailed accounting seems difficult, but an overall efficiency of 0.7-0.9 seems reasonable (0.3 for shadowing and 0.4-0.6 for reflection and re-emission). We model the total thrust from RTG heat as RTG(d) = ε_RTG RTG_HEAT(d), where ε_RTG is the proportion of RTG heat converted to thrust. Combining the effects of this section, we expect ε_RTG to range from 0.004 to 0.012. E. Antenna solar reflectivity The trajectory analysis programs fit the reflectivity of the spacecraft to solar radiation, K, as a force that falls off as 1/r², where r is the heliocentric distance. This fit can hide an otherwise unmodelled acceleration. Over a short time period, during which r varies little, any constant radial acceleration can be absorbed into K. Over a longer period of time, the fitting procedure will mask any component of anomalous acceleration that varies as 1/r² and is less than the acceleration corresponding to the allowed variation in K. In particular, the acceleration proposed in this paper will be partially masked since it decreases with time and hence has a 1/r² component. The fitted solar reflectivity constant also provides a natural explanation for the onset of the anomalous acceleration. Consider the case where the acceleration (from any cause) exists for all r. When r is small, the fitting programs absorb the extra acceleration by adjusting the value of K. As r increases, the power available from this source decreases, and eventually K runs into the limits allowed in the fit. (Physically reasonable values perhaps range from 1.5 to about 1.9; they are certainly greater than 1.0 and less than 2.0.) Once the limit of adjustment for K is reached, it becomes constant and can no longer mask the acceleration, which appears as shown in figure 1. It might be possible to see additional signs of this process in archival data - it would show up as a decrease in the fitted value of K as the spacecraft receded from the sun. In this paper, we model the effect of any error in K by introducing a fictitious force, whose value is simply the solar force on the spacecraft times the error in K. We assume the distance from the sun, measured in AU, increases linearly from 20 AU in 1980 to 78.5 AU in 2001, i.e. r(d) = 20 + (78.5 − 20)(d − 1980)/21. The resulting thrust, in watts, is the solar radiation intercepted by the spacecraft multiplied by K_SOLAR, where f_⊙ = 1367 W m^-2 AU² is the "solar radiation constant" at 1 AU and K_SOLAR is the amount by which the solar reflection constant is underestimated. V. COMPARISON WITH EXPERIMENT To compare the hypothesis with experiment, we sum the individual sources, then convert to acceleration by dividing by c, the speed of light, and m, the spacecraft mass (here 241 kg), giving an acceleration of [RHU(d) + RTG(d) + RADIO(d) + BUS(d) − SOLAR(d)] / (c m). We then compare with the plots from [2,3].
The proposed explanation has 5 adjustable parameters. In theory all are separable since they decay at different rates; in practice the data are not good enough to separate them and many fits are plausible. One reasonable fit over the entire data span has the following coefficients: ǫ RHU = 0.5, ǫ RT G = 0.0108, ǫ F EED = 0.1, ǫ BUS = 0.35, and K SOLAR = 0.3. This fit to the data is shown in Figures 1 and 2. The agreement seems reasonable in both regimes, and the proposed model provides a better fit to the early data than the constant acceleration of [3], even assuming reflectivity mismodelling to account for the onset of the acceleration. The fit from 1987 to 1998 also looks acceptable, as shown in Figure 2. Finally, we compare with the most recent results [3] that fit a constant acceleration in each interval of the later Pioneer 10 data. The proposed model gives an average thrust of 57.8 watts in interval I, and 51.0 watts in interval III. We can normalize the result to get the correct overall average, or the right acceleration in interval I, but in either case we would expect to see an 11.8% decrease from interval I to III, where only a 3% decrease is observed. The two different measurements of the effect (SIGMA and CHASMP) themselves differ by 4% in interval III. If we treat this difference as a statistical result (a procedure of dubious merit, but the best we can do) then the 9% discrepancy is 2.25 standard deviations out. This makes it unlikely at about the 2% level that this hypothesis alone accounts for all the measured result. We can get a better fit (1.75 sigma) to the Pioneer 10 data by assigning different efficiencies to instrument heat and main compartment heat, at the cost of an extra parameter and the need to consider instrument power dissipation in detail [14]. VI. CONCLUSIONS AND FUTURE WORK There is surely an unmodelled effect on the Pioneer spacecraft, based upon its thermal characteristics. Rough estimates show it can account for the magnitude of the unmodelled acceleration to within the errors, but overpredicts the rate of change. In any case, the proposed explanation, by accounting for the bulk of the effect, makes it more likely that conventional physics can account for the entire unmodelled acceleration. Conventional explanations for the remaining discrepancy include other unmodelled effects such as gas leaks, inaccuracies in the simple thermal model, or the effects of a complex fitting procedure applied to noisy data. This explanation also explains some other puzzles: the values of acceleration of Pioneer 10 and 11 would be expected to be similar, but not identical, as observed. The acceleration would not have a strong effect on the spin; most of the radiation will generate little torque. Other spacecraft, built along the same general principles, would be expected to show a similar effect, but planets and other large bodies would not, as is observed. More detailed modeling, using the Pioneer materials, construction details, and history, might confirm or refute the proposed hypothesis, and additional tracking could be useful as well. However, such improvements are limited since accurate thermal modelling is difficult [3] and the spacecraft was not designed for this purpose. 
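To make the time dependence of the fit concrete, the sketch below assembles the model of sections IV-V with the coefficients quoted above. Because several displayed equations did not survive extraction, the exact forms of the BUS, RHU, feed (RADIO) and SOLAR terms, as well as the antenna area (taken here as a 2.74 m dish, about 5.9 m^2), are reconstructions consistent with the surrounding text rather than formulas copied from the paper.

# Thermal-recoil model of the Pioneer 10/11 unmodelled acceleration (sketch).
# ASSUMED (not explicit in the extracted text): the BUS, RHU, feed and SOLAR forms
# and the ~5.9 m^2 antenna area.
C, MASS = 2.998e8, 241.0          # speed of light (m/s), spacecraft mass (kg)
F_SUN, AREA = 1367.0, 5.9         # solar constant at 1 AU (W/m^2), assumed antenna area (m^2)
eps_rhu, eps_rtg, eps_feed, eps_bus, k_solar = 0.5, 0.0108, 0.1, 0.35, 0.3

def decay(d):                     # Pu-238 decay factor with the 88-year half-life used in the text
    return 2.0 ** (-(d - 1972.0) / 88.0)

def electrical_power(d):          # E(d) = 68 + 2.6*(1998.5 - d) watts
    return 68.0 + 2.6 * (1998.5 - d)

def r_au(d):                      # heliocentric distance: 20 AU in 1980 -> 78.5 AU in 2001, linear
    return 20.0 + (78.5 - 20.0) * (d - 1980.0) / 21.0

def directed_watts(d):
    bus   = eps_bus * (electrical_power(d) - 8.0)            # compartment heat, 8 W radio beam removed
    rhu   = eps_rhu * 10.0 * decay(d)                        # ten 1-watt (1972) heater units
    rtg   = eps_rtg * (2580.0 * decay(d) - electrical_power(d))
    feed  = 1.7 * eps_feed * 8.0                             # radio-beam spill-over past the dish
    solar = k_solar * F_SUN * AREA / r_au(d) ** 2            # mis-modelled solar reflection
    return bus + rhu + rtg + feed - solar

def acceleration(d):              # in units of 1e-8 cm/s^2
    return directed_watts(d) / (C * MASS) * 1.0e10

for year in (1987.0, 1992.0, 1998.0):
    print(year, round(directed_watts(year), 1), "W,", round(acceleration(year), 2), "x 1e-8 cm/s^2")

With these inputs the directed power comes out around 59 W in 1987, falling to roughly 48 W by 1998, i.e. the same ballpark as the interval averages quoted above.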
Longer term, other proposed experiments such as LISA [15] are designed specifically to reduce non-gravitational systematics (by a factor of about 10^5) and allow frequent and accurate tracking (a differential distance measurement, each second, accurate to 10^-9 cm). If the anomalous acceleration exists at all heliocentric distances (as argued in section IV E), then it should be detectable in just a few seconds of LISA data. On the other hand, if no unmodelled acceleration is detected in these more precise experiments, then almost surely the anomalous acceleration of Pioneer 10/11 is caused by overlooked prosaic sources such as those proposed here. VII. ACKNOWLEDGEMENTS I'd like to thank Edward Murphy and Jonathan Katz for comments, suggestions, and helpful documents; Larry Lasher and Dave Lozier answered questions about Pioneer. John Anderson suggested adding the statistical likelihood calculations.
2019-04-14T02:21:15.731Z
2001-08-22T00:00:00.000
{ "year": 2001, "sha1": "286c6b2901e684e113bf3b1a765dcc326e011bab", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "286c6b2901e684e113bf3b1a765dcc326e011bab", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
257057614
pes2o/s2orc
v3-fos-license
Alhagi maurorum extract modulates quorum sensing genes and biofilm formation in Proteus mirabilis Proteus mirabilis (P. mirabilis) is a frequent cause of catheter-associated urinary tract infections. This study aims to investigate the anti-infective effect of Alhagi maurorum extract (AME), the traditional medicinal plant in the middle east, on the biofilm-forming P. mirabilis isolates. Hydroalcoholic extract and oil of A. maurorum were characterized by HPLC and GC–MS. The antiproliferative, anti-biofilm, and bactericidal activity of AME at various concentrations were assessed by turbidity, crystal violet binding, and agar well diffusion assays, respectively. The AME’s effect on adhesion and quorum sensing (QS) were investigated by in vitro adhesion assay on cell culture and agar overlay assay using Janthinobacterium lividum (ATCC 12472) as a biosensor strain. In addition, the expression level of selected genes involved in QS and biofilm regulation were determined by quantitative Real-Time PCR. Furthermore, the bladder phantom model was created to evaluate the assays and investigate the catheter’s calcium deposition. The most effective chemical compounds found in AME were tamarixetin, quercetin, and trans-anethole. Although AME did not inhibit swarming motility, it reduced biofilm production and exerted a concentration-dependent anti-adhesive and anti-QS activity against P. mirabilis. AME also downregulated the expression level of selected genes involved in biofilm formation and QS. This study showed that AME as a natural compound reduced biofilm formation of P. mirabilis by targeting virulence factor genes, quorum sensing, and other strategies that include preventing the adhesion of P. mirabilis to the cells. The results suggest that A. maurorum extract might have the potential to be considered for preventing UTIs caused by P. mirabilis. Scientific Reports | (2022) 12:13992 | https://doi.org/10.1038/s41598-022-18362-x www.nature.com/scientificreports/ in rheumatic pains, bilharzias, liver disorders, and gastrointestinal discomfort disease treatment. Furthermore, A. maurorum has a potential effect on treating UTI s and acts as a powerful diuretic and antilithiastic 10 . Although the possible mechanisms of some phytochemicals assessed for P. mirabilis were studied 4 , the anti-biofilm activity and the molecular mechanisms caused by A. maurorum extract (AME) are unclear. This study aimed to evaluate the effect of A. maurorum extract in biofilm degradation and QS genes expression of P. mirabilis isolated from the urinary catheters. High-performance liquid chromatography (HPLC). The dried extract was standardized using quercetin and tamarixetin as the bioactive marker for the standardization of the extract. Quercetin and tamarixetin peaks of extract appeared at a retention time of 8.730, 9.588 min, respectively. Using a calibration curve, the extract was standardized to contain 19 μg/100 mg of quercetin and 55 μg/100 mg of tamarixetin (Fig. 2). Qualitative QS inhibition assay. J. lividum synthesizes the violet pigment violacein as a result of QS. Loss of purple pigmentation of J. lividum in the vicinity of the plant extracts indicated QS inhibition by the plant extract, which was seen in 62.5, 125, 250, and 500 μg/mL (Fig. 4). Cell viability assay. The cytotoxicity of the AME was examined in HeLa cells. Cells treated with the AME at different concentrations (62.5-1000 μg/mL) survived as well as the control cells (P > 0.05), indicating that a high dose of AME did not affect HeLa survival (Fig. 
7). Adhesion assay. The quantitative binding of P. mirabilis was investigated on the HeLa cell line by enumeration after plating on TSA. AME at different concentrations (125-1000 μg/mL) decreased the adherence of P. mirabilis to the HeLa cell line in a concentration-dependent manner. Results showed that at a concentration of 0.125 mg/mL of AME, P. mirabilis showed a 40% reduction in adhesion to HeLa cells (Fig. 8). However, at the higher extract concentrations (0.5 and 1 mg/mL), no significant reduction in the adhesion of P. mirabilis to HeLa cells was seen compared to the control (P > 0.05). Bladder phantom model. To precisely evaluate the impact of AME on crystalline biofilm formation, models of late-stage infection were deactivated after 18 h, and calcium levels on catheter sections were quantified. As demonstrated in Fig. 9, AME significantly reduced the levels of encrustation at a concentration of 0.125 mg/mL. The urine pH was measured after treatment, and there was no significant difference in urine pH after treatment with the extract in comparison with the control (P > 0.05). Effect of AME on gene expression. We used a qRT-PCR assay to examine the effect of AME at an optimal concentration of 125 μg/mL on the expression levels of the adhesion and quorum sensing genes. Results showed that the mrpA, pmfA, luxS, rsmA, and rsbA genes were all significantly downregulated, with expression levels reduced approximately by 2^−3.9, 2^−5.6, 2^−1.6, 2^−4.5, and 2^−1.4-fold, respectively. Among the examined time intervals (4, 16, and 48 h), a significant reduction in the expression of these genes was seen after 16 h of treatment (P < 0.05) (Fig. 10). Discussion Antibiotic resistance in bacterial biofilms has piqued researchers' interest in looking for additional anti-biofilm drugs and alternative therapeutics. Plants have long been thought to be a rich source of phytochemicals, which are bioactive compounds. Medicinal plants are a good substitute for commonly used antimicrobial drugs 11. Among their various applications, phytochemicals have attracted particular interest for their antibiofilm activity, which has been attributed to the inhibition of virulence factors, including microbial adherence, quorum sensing, urease activity, and exopolysaccharide matrix production 4,12. The phytochemical analysis of AME by GC-MS and HPLC revealed the presence of trans-anethole (p-methoxy propenyl benzene), tamarixetin, and quercetin. Trans-anethole (tA), a significant component of many essential oils, is an organic compound and a by-product of terpene synthesis 13. Kwiatkowski et al. reported the significant antibacterial activity of tA against S. aureus. They showed that tA increased the inhibition zone of the bacterial lawn 2-3 times and reduced the biofilm formation of S. aureus by 60-80% 20. (Figure caption: Impact of AME at an optimal concentration on crystalline biofilm formation on the catheter. ***P < 0.001.) We further investigated the swarming motility of P. mirabilis ATCC7002 phenotypically and observed clearly visible swarming in both AME-treated bacteria and the control. In contrast, Aygul et al. showed that quercetin (an active component of AME) inhibited the swarming motility of P. mirabilis. They suggested that the inhibitory effect of quercetin on P. mirabilis swarming possibly acts by regulating the expression level of polyamine enzymes, which trigger swarming differentiation, or of active pump proteins 21.
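A note on reading the fold-change values reported above: numbers of the form 2^−x are relative expression ratios from the 2^−ΔΔCt method, and one common (assumed, not the authors' own arithmetic) way to express them is as a fold-decrease or percentage reduction, as in this small sketch using the luxS value quoted above.

```python
# Interpreting a relative-expression value of the form 2**(-x) from qRT-PCR.
# Example: the luxS value 2**(-1.6) quoted above (illustrative reading only).
rel_expr = 2 ** (-1.6)                 # expression in treated vs. untreated, via 2^-ΔΔCt
fold_down = 1 / rel_expr               # ≈ 3.0-fold lower expression after AME treatment
pct_reduction = (1 - rel_expr) * 100   # ≈ 67% reduction
print(f"relative expression {rel_expr:.2f}, {fold_down:.1f}-fold down, {pct_reduction:.0f}% reduction")
```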
Virulence factors such as swarming, biofilm formation, and the presence of an efflux pump system are involved in the pathogenesis of P. mirabilis in UTIs 4. We showed that AME inhibits QS in a dose-dependent manner. The anti-QS activity of different plant extracts has been investigated and, surprisingly, a wide range of natural products and traditional medicinal herbs showed significant anti-QS ability against Gram-positive and Gram-negative bacteria 22,23. Another important virulence factor is biofilm formation. Several reports have assessed the antibiofilm activity of phytochemicals [24][25][26]. We evaluated the effect of AME on biofilm formation by crystal violet staining with the microtiter plate method. AME achieved up to 76% inhibition of biofilm formation at a concentration of 0.125 mg/mL. It is possible that the phytochemical compounds inhibit bacterial growth through different pathways, including weakening the virulence of the bacteria without showing bactericidal activity. The inhibition of quorum sensing (QS) and of initial attachment to cells may be related to the observed biofilm inhibition and decrease in adhesion to cells; for instance, Šimunović et al. showed that natural extracts such as oregano, nettle, winter savory, roseroot, yarrow, and rosemary could reduce the motility and adhesion of C. jejuni via modulation of the LuxS (QS) system 27. However, the molecular mechanisms of the Alhagi hydroalcoholic crude extract have not been studied yet. To analyze the mechanism of biofilm inhibition, the differences in expression levels of selected genes involved in biofilm formation and QS were evaluated by RT-qPCR. Gene expression analysis showed a 2.6-20.7-fold downregulation of genes that affect the virulence properties of P. mirabilis, such as motility, biofilm formation, and QS activity. The data obtained from the genotypic analysis confirmed the phenotypic results and showed that AME could interact with biofilm and QS regulators in a dose- and time-dependent manner. Our data were in agreement with those obtained from the qualitative and quantitative QS inhibition assays performed with J. lividum. In agreement with our results, several reports have illustrated the inhibitory effects of natural compounds on QS, biofilm formation, and luxS expression [28][29][30]. Two genes in P. mirabilis, rsmA and rsbA, regulate swarming and virulence factor expression 31. Our results demonstrated that treatment downregulates these genes, which might be expected to lead to swarming inhibition. An optimized antiadhesive compound should interact with the adhesins of the pathogen, leading to significant inhibition of the docking process between bacteria and eukaryotic cells 32. Accordingly, AME reduced the adhesion of P. mirabilis in our constructed bladder phantom model and consequently affected the calcium deposition on the catheter. There are limited data on the molecular basis of the ability of AME to prevent UTIs. In conclusion, AME, from a plant widely used in folk medicine, might strongly regulate QS and biofilm formation of P. mirabilis and could decrease the amount of calcium deposited on the catheter. Moreover, the fact that the concentrations used showed no cytotoxicity suggests that this extract has the potential to be considered for further studies, including on the prevention of UTIs caused by P. mirabilis. This study showed that AME, as a natural compound, reduced the biofilm formation of P.
mirabilis by targeting virulence factor genes, quorum sensing, and other strategies that include preventing the adhesion of P. mirabilis to the cells. The results suggest that A. maurorum extract might have the potential to be considered for preventing UTIs caused by P. mirabilis. Materials and methods Preparation of Alhagi crude extract. The whole part of the A. maurorum plant was collected during the flowering stage in July 2020 from the desert areas around Isfahan province (Gaz, Isfahan, Iran). The plant samples were authenticated by a specialist. The material was identified by J.B and M.G. A voucher specimen of the material is retained in the archives of the Department of Pharmacognosy, Isfahan Pharmaceutical Sciences Research Center under the designation 38,330 (FUMH). Ten grams of freshly powdered plant material were extracted with 100 mL of 50% ethanol for 15 min (3 × 5 min) under ice-cooling by rotor-stator extractor (Ultraturrax®) at maximum rotor speed. The extraction step was repeated 3 times. Then, the suspension was centrifuged at 5.000 × g for 15 min, and the clear supernatant was dried by a rotary vacuum evaporator to yield 2.0 g of dry extract (herbal material: extract ratio = 5:1). The A. maurorum extract (AME) was stored at − 20 °C in sealed containers under a vacuum 12 . Essential oil (EO) isolation. The powdered A. maurorum (100 g) was subjected to hydro distillation for 4 h using the Clevenger apparatus (Clevenger, 1928). Then, the EOs were dehydrated by olive oil and stored in tightly sealed glass vials at − 20 °C for further analysis. Gas chromatography-mass spectrometry (GC-MS) analysis. The gas chromatograph was equipped with a programmable split/spitless injector, a capillary column, and a programmable oven. A sample volume of 2 μL was injected at 271 °C, in spitless mode, in a baffle Siltek-deactivated liner (2 mm × 2.75 mm × 120 mm) provided by Thermo Fisher Scientific. Samples were analyzed via gas chromatography (Agilent USB-393752) equipped with an FID detector and capillary column 33 . High-performance liquid chromatographic (HPLC). Active phytochemical compounds were determined in the aqueous extract of the leaves by HPLC. A 100 mg of dried extract was hydrolyzed in HCl: Tetrahydrofuran (2.5 M) for 1 h. Flavonoid analytes were extracted into a water-soluble solvent (HCL (2 N) and diethyl ether), followed by partitioning of the analyte molecules in an organic solvent in the presence of a salt mixture (salting-out effect). The binary mobile phase consisted of solvent A (water: H 3 PO 4 10 mM; 99:1; v/v) and solvent B (acetonitrile). NUCLEOSIL® 100-5 RP-18 (Thermo scientific column, 150 mm × 4.6 mm) was used to separate phenolic compounds with isocratic elution: 75% A to 25% B at a flow rate of 1.2 ml/min, the time rum was over 10 min. A UV detector detected the phenolic acids and flavonoids at 200-500 nm wavelength. A standard calibration curve in the range of 0.005 to 0.1 mg/ml was prepared for quantitative analysis using different concentrations of standards (0.005, 0.025, 0.01, 0.1 mg/ml). The chromatographic peaks were identified by comparing the retention time of analytics with that of the reference compounds. The relationship between the concentration and peak area of the standard was measured using the minimum square method (R 2 value). Determination of cell viability (MTT assay). Cytotoxic assays were done in the HeLa cell line (ATCC CCL-2) obtained from the National Cell Bank of Iran, Pasteur Institute of Iran (Tehran, I.R. Iran). 
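Referring back to the HPLC quantification described above, the sketch below fits a least-squares calibration line through the standard concentrations and back-calculates an unknown from its peak area. The peak-area values and the sample area are invented placeholders for illustration, not the authors' data.

```python
import numpy as np

# Standard concentrations used for the calibration curve (mg/mL), as listed in the Methods;
# the corresponding peak areas below are hypothetical placeholders.
conc = np.array([0.005, 0.01, 0.025, 0.1])
area = np.array([11.8, 24.5, 60.2, 242.0])

slope, intercept = np.polyfit(conc, area, 1)           # least-squares ("minimum square") fit
pred = slope * conc + intercept
r2 = 1.0 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)

sample_area = 46.0                                      # hypothetical extract peak area
sample_conc = (sample_area - intercept) / slope         # back-calculated concentration (mg/mL)
print(f"R^2 = {r2:.4f}; estimated concentration = {sample_conc:.4f} mg/mL")
```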
HeLa cells (0.5 × 10 4 cells/ well) were seeded in 96 well-microtiter plates in the presence of Dulbecco's Modified Eagle Medium (DMEM, Gibco, USA), supplemented with 5% FBS (Gibco, USA), and incubated for 12 h in a humidified atmosphere with 5% CO 2 at 37 °C. AME was solubilized in water to give a stock solution with a 2 mg/ mL final concentration. Serial ten-fold dilutions of AME in DMEM were prepared to reach 62.5-1000 μg/mL concentrations. Then, 100 µl of each dilution was added to each well. Hela cells with the growth medium were used as control. After incubation for 24 h, the viability of the cells were assessed by MTT assay as described previously 34 . Microbial isolation and identification. In this study, 40 P. mirabilis were isolated from the catheters collected from intensive care unit (ICU) patients of various hospitals in Isfahan and confirmed with conventional biochemical and genetic tests 35 . The P. mirabilis ATCC7002 was used as a standard control. Antimicrobial activity of AME, growth measurement and MIC of AME and the bioactive compounds of AME (quercetin, tamarixetin, and trans-anethole). The antimicrobial activity of AME was initially tested against P. mirabilis strain (ATCC7002) by the agar well diffusion method 36 . A freshly prepared culture of P. mirabilis was adjusted to OD 620 of 0.2 and suspended in sterile PBS. 100 μL of the bacterial suspension were swabbed on the Muller-Hinton plate and spread homogeneously. Then, 6 mm of the wells were cut into the agar plate, followed by adding 50 µL of AME dissolved in PBS at different concentrations (62.5-2000 μg/ mL). Plates were incubated at 37℃ for 24 h. The inhibition zones around the tested wells were measured to detect the AME range of effect against P. mirabilis. The disc antibiotic model of ofloxacin (5 μg/mL) was put on an agar surface as a positive control, and PBS was added in well served as a negative control. www.nature.com/scientificreports/ For growth measurement, the overnight culture of P. mirabilis 7002 (10 8 CFU/ml) was inoculated into 10 mL of Luria Bertani Broth, and the OD 620 value was adjusted to 0.1. Then, 50 µL of the culture was transferred into each well of a 96-well polystyrene microtiter plate that contained 100 µL of LB broth. Subsequently, AME at different final concentrations (125-1000 μg/mL) was added to the wells, and the cultures were incubated at 37 °C for 24 h while shaking (180 rpm). Gentamicin (100 μg/mL) and liquid medium served as positive and negative controls, respectively. The bacterial growth was monitored at 30 min intervals, and the OD 620 nm was recorded by a microplate reader (Infinite F50, Tecan) 37 . The test was done in triplicate for each concentration. MIC values of crude extract and its essential oils were determined using the microdilution broth method described by Wiegand et al. 28 . Adhesion assay. HeLa cells (0.5 × 10 5 cells/ well) were seeded in 24-well plates with/without different extract concentrations (125-1000 μg/mL) and infected with 10 6 CFU/mL of P. mirabilis and incubated at 37 °C under 5% CO 2 for two hours. The wells were washed three times with PBS to remove non-adherent bacteria. To detect adherent bacteria, cell cultures were treated with 500 μl 0.025% Triton X-100 for 5 min at 37 °C in 5% CO2 to detach and lyse the cell monolayer. After that, the cell lysates were diluted in ten serial dilutions. Bacterial colonies were counted after the cell lysates were inoculated on Trypticase Soy Agar (TSA) and incubated at 37 °C for 24 h. 
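The colony counts obtained from the ten-fold serial dilutions described above are typically converted to CFU/mL and then compared with the untreated control; a minimal sketch of that bookkeeping is given below. The plate counts and plated volume are invented placeholders, not the authors' data.

```python
# Convert a plate count into CFU/mL and express adhesion relative to the untreated control.
def cfu_per_ml(colonies, dilution_factor, plated_volume_ml=0.1):
    return colonies * dilution_factor / plated_volume_ml

control_cfu = cfu_per_ml(colonies=180, dilution_factor=10**4)   # untreated wells (hypothetical)
treated_cfu = cfu_per_ml(colonies=108, dilution_factor=10**4)   # AME-treated wells (hypothetical)

adhesion_pct = 100 * treated_cfu / control_cfu                  # adherent bacteria vs. control
reduction_pct = 100 - adhesion_pct                              # cf. the ~40% reduction at 0.125 mg/mL
print(f"adhesion = {adhesion_pct:.0f}% of control ({reduction_pct:.0f}% reduction)")
```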
The number of bacterial colonies in treated plates was compared to the control 29 . Swarming motility assay on agar. Fifty microliters of the series of AME (125-1000 μg/mL), quercetin (1 mg/mL), tamarixetin (1 mg/mL), trans-anethole (1 mg/mL) were mixed with 10 ml of molten Mueller-Hinton agar medium and poured immediately over the surface of the plate as an overlay. The plate was point-inoculated with an overnight culture of P. mirabilis (ATCC7002) once the overlaid agar had solidified and incubated at 37 °C for 3 days. The extent of swarming was determined by measuring the area of the colony 30 . The test was done in triplicate for each concentration. Static biofilm assay. In this study, twelve MDR P. mirabilis strains were isolated from CAUTIs of patients attending reference AL-Zahra hospital (Isfahan, Iran) and identified, as described previously 35 . The P. mirabilis clinical isolates were assessed for their biofilm activity in a microtiter plate according to the previously described method 35 . The clinical isolates which had strong biofilm formation were chosen for further investigation. To study the extract's antibiofilm activity, 100 µl (OD 620 = 0.1) of each isolate culture were plated into a 96-well polystyrene microtiter plate and incubated for 72 h at 37 °C. Then, the media were discarded, and the biofilms were washed with PBS (pH 7.2). The biofilms were supplemented with 100 µl of the AME (125-1000 μg/mL), quercetin (62.5-1000 μg/mL), tamarixetin (62.5 μg/mL-1 mg/mL), trans-anethole (62.5 μg/mL-2 mg/mL), individually and incubated for 18 h at 37 °C. Then, the media were removed, and the wells were fixed with 96% ethanol, followed by staining with 0.1% crystal violet for 15 min. The wells were consequently washed 5 times with H 2 O, solubilized in acetone 33% and ethanol 80% (1:1). The amount of biomass was quantified by measuring the OD 620 using an ELISA-microtiter plate reader (Infinite F50, Tecan). Each treatment was done in triplicates. As a control, 100 µl of nutrient broth was added to the original biofilm of the isolated P. mirabilis. The percentage of biofilm reduction is calculated with this formula: (control untreated OD 590 nm-the mean of three replicants test OD 590 nm/control untreated OD 590 nm) × 100 38 . All of the OD of tests were normalized by subtracting the OD 590 of stained treated and untreated (bacteria only) from the OD 590 of stained control wells containing bacteria-free medium only. Qualitative screening of anti-QS activity. We used pigmented biosensor strain of Janthinobacterium lividum (ATCC 12472) as a reporter to study the anti-QS potential of the four crude A. maurorum extracts 39 . Agar overlay assay was done using 5 ml of molten soft Luria-Bertani (LB) agar (0.3% agar, 45℃), and 50 μL of the freshly prepared culture of the J. lividum (OD 620 = 0.7) was then added before plating the supernatants on the media. The agar-culture solution was immediately poured over the surface of pre-warmed LB agar plates. Then, 20μL of the AME (125-4000 μg/mL) was pipetted on sterile paper discs and let to dry. The discs were placed on the solidified agar. The plates were incubated overnight at 30 °C. Antibacterial activity was revealed through a zone of clearance at the center, and QS inhibition was observed around a colorless, opaque zone with intact bacteria. DMSO was used as a control 40 . This assay was performed in triplicate. Quantitative anti-QS assay. 
Quantitative evaluation of QS inhibitory activity of the AME was carried out based on their ability to inhibit the production of purple pigment violacein by J. lividum ATCC 12472. The strain was cultured aerobically in LB at 30 °C supplemented with the optimal concentrations determined by a qualitative anti-QS test (125-1000 μg/mL). Eugenol (0.625 mg/mL; Sigma, St. Louis, MO, USA) was used as QSIpositive control. One milliliter of an overnight culture of the J. lividum was centrifuged (13,000 rpm, 10 min) to precipitate the insoluble violacein, and the pellet was evenly resuspended in 1 mL of DMSO. The solution was centrifuged (13,000 rpm, 10 min) to remove the cells, and the violacein was quantified at OD 620 nm using a UV spectrophotometer (UV-1800, Shimadzu, Kyoto, Japan). The percentage of violacein inhibition was calculated by the following formula: Percentage of violacein inhibition = (control OD 620 nm − test OD 620 nm/control OD 620 nm) × 100 40 . Bladder phantom model for treating the biofilm with crude extract. In vitro bladder models, originally described by Stickler et al. 41 www.nature.com/scientificreports/ tion balloons were inflated with 10 ml of sterile water. To form a sterile and closed drainage system, catheters were attached to a drainage bag. P. mirabilis 7002 suspensions (10 10 CFU, representing late-stage infection) were inoculated directly into the residual bladder urine, and flow was suspended for 1 h to permit cells to establish within the system. At 45 min after bacterial inoculation, test models were treated with an optimal concentration of AME in a volume of 1 ml; the flow was restored 15 min later. The amount of deposited calcium on the primary 2 cm of the catheter was measured and compared with the control. pH was also measured at the start and end of experiments by sampling the medium in the bladder 42 . Quantification of crystalline biofilm formation on catheter sections. To measure the levels of crystalline biofilm formation and catheter encrustation in control and extract-treated models, the amount of calcium present on catheter sections removed from bladder models run for a set time (18 h) was quantified by flame photometry 42 . Quantitative real-time PCR analysis. The quantitative real-time PCR (qRT-PCR) assay was carried out to study the effect of AME on the expression of QS and adhesion genes (mrpA, pmfA, luxS, rsmA, and rsbA) of P. mirabilis. An overnight inoculated pooled urine with P. mirabilis7002 was transferred to fresh urine, treated with an optimal concentration of AME, and incubated for different time intervals (4, 16, and 48 h) at 37 °C. Then, cells were washed with sterile PBS (pH 7.2) three times and collected after 10 min centrifugation at 4 °C. Total RNA was extracted from bacterial cells using an RNA extraction kit (Jena Bioscience, Germany) following the manufacturer's instructions. Reverse transcription PCR was conducted, and cDNA was synthesized according to the Jena bioscience kit (Germany) protocol. A qRT-PCR was performed on an ABI system (Applied Biosystems StepOne PlusTM, USA). Each 20 µL reaction contained 2 × Master Mix (SYBR® Green Ampliqon, Denmark), diluted cDNA (5 ng/μL), primers (10 pM of each forward and reverse primers), and RNase-free ddH 2 O. The thermocycling conditions were as follows: denaturation for 10 min at 95 °C, followed by 40 cycles of denaturation for 15 s at 95 °C, annealing, and extension at 54 °C for 60 s. 16 s rRNA was used as an internal control. 
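The two normalized percentages quoted in the static biofilm assay and the quantitative anti-QS assay above follow the same pattern (treated OD relative to an untreated control); the sketch below applies both formulas to invented OD readings, and the blank subtraction is my reading of the normalization step described in the biofilm assay.

```python
import numpy as np

# Hypothetical OD readings; blank = wells containing bacteria-free medium only.
blank, control = 0.06, 0.95
treated = np.array([0.28, 0.30, 0.26])            # triplicate AME-treated biofilm wells (crystal violet)

biofilm_reduction = 100 * ((control - blank) - (treated.mean() - blank)) / (control - blank)

# Violacein inhibition uses the same control-normalized form on the DMSO-resuspended pigment.
viol_control, viol_test = 0.82, 0.31              # hypothetical OD620 of extracted violacein
violacein_inhibition = 100 * (viol_control - viol_test) / viol_control

print(f"biofilm reduction ~ {biofilm_reduction:.0f}%, violacein inhibition ~ {violacein_inhibition:.0f}%")
```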
The primers used in this study were designed with the online tool Primer3web (version 4.0.0) and are listed in Table 2. All samples were run in triplicate. The relative expression of the target genes was calculated by the conventional 2^−ΔΔCt method 43. Statistical analysis. Statistical analysis was performed with the SPSS software package (version 16, IBM Corporation, Armonk, NY, USA). All results are presented as mean ± standard deviation (SD). One-way ANOVA with a post-hoc Tukey test or a two-tailed paired t-test was used to evaluate statistical significance between samples. Statistical significance was set at p < 0.05.
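For completeness, a minimal sketch of the 2^−ΔΔCt calculation referenced above, with 16S rRNA as the reference gene. The triplicate Ct values are invented for illustration and are not the authors' measurements.

```python
import numpy as np

# Hypothetical triplicate Ct values for one target gene (e.g. luxS) and the 16S rRNA reference,
# in AME-treated and untreated control cultures.
ct = {
    "target_treated": [26.4, 26.6, 26.5], "ref_treated": [14.1, 14.0, 14.2],
    "target_control": [24.9, 25.0, 24.8], "ref_control": [14.0, 14.1, 14.0],
}

d_ct_treated = np.mean(ct["target_treated"]) - np.mean(ct["ref_treated"])
d_ct_control = np.mean(ct["target_control"]) - np.mean(ct["ref_control"])
dd_ct = d_ct_treated - d_ct_control            # ΔΔCt
fold_change = 2.0 ** (-dd_ct)                  # relative expression; < 1 means downregulation
print(f"ΔΔCt = {dd_ct:.2f}, relative expression = {fold_change:.2f}")
```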
2022-08-19T06:17:52.372Z
2022-08-17T00:00:00.000
{ "year": 2022, "sha1": "ee73b5d5063b185f70d9d043e39c0c8dc56a2ab5", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-022-18362-x.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "9fdcbe0c6323afefc4056be20a951e4c93585d49", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
218719358
pes2o/s2orc
v3-fos-license
Dynamical phase transitions in dissipative quantum dynamics with quantum optical realization We study dynamical phase transitions (DPT) in the driven and damped Dicke model, realizable for example by a driven atomic ensemble collectively coupled to a damped cavity mode. These DPTs are characterized by non-analyticities of certain observables, primarily the overlap of time evolved and initial state. Even though the dynamics is dissipative, this phenomenon occurs for a wide range of parameters and no fine-tuning is required. Focusing on the state of the 'atoms' in the limit of a bad cavity, we are able to asymptotically evaluate an exact path integral representation of the relevant overlaps. The DPTs then arise by minimization of a certain action function, which is related to the large deviation theory of a classical stochastic process. From a more general viewpoint, in the considered system, non-analyticities emerge generically in a Fock space representation of the state. Finally, we present a scheme which allows a measurement of the DPT in a cavity-QED setup. Introduction Phase transitions in equilibrium are the prime examples where non-analytical behaviour of physical observables occurs. Upon changing a parameter in the considered equilibrium system, such as temperature, the state of the system undergoes a non-analytical change resulting in cusps or jumps of observables. The understanding of equilibrium classical and quantum phase transitions is far developed and a tremendous amount of theoretical and experimental work has been dedicated to the field. Recently, however, motivated partly by experimental advances, the focus of many researchers has shifted to the study of non-equilibrium physics. Nowadays the dynamics of quantum many-body systems can be measured in real time in platforms such as cold atomic gases and trapped ions [1,2]. Naturally, the question arises whether non-analyticities of physical observables can occur also in these settings. One particular example of such behavior are dynamical phase transitions (DPT) in the sense that an observable changes non-smoothly at a critical time after a quench, that is a sudden parameter change [3]. We will focus on this notion of dynamical phase transitions in this letter. While a full understanding of the phenomenon is still missing, several important results were obtained for unitary quantum many-body evolution in systems traditionally studied in the condensed matter community [4][5][6][7][8][9][10][11], including experimental realizations with cold atom and trapped ion experiments [12,13]. Since in many experiments the physical systems are not isolated but subject to dissipation, it is important to consider also many-body systems evolving non-unitarily. For simple Fermionic models it was shown that, while finite temperature generally smooths out non-analyticities, they may persist in the presence of dissipation [14][15][16][17]. In fact, DPTs can even be found in classical dissipative systems, like solutions of the KPZ equation [18][19][20]. In this letter we study a driven and damped version of the Dicke model, a well known quantum optical many-body system which can be experimentally realized [21][22][23][24]. We show that this model can feature DPTs without requirement of parameter fine tuning. With theoretical tools of quantum optics, we are able to find approximations for the full state of the system allowing us to gain a complete understanding of the dynamical transitions in the model. 
Let us first set the stage by revising the basic ideas of DPTs. For unitary dynamics, the Loschmidt echo is the absolute value of the quantum mechanical overlap of the time-evolved state and the initial state [5,6]. In the thermodynamic limit of infinite system size N → ∞ this object may become non-analytic as a function of time. For open quantum systems we need a generalization of the Loschmidt echo for mixed states. This can be straightforwardly defined as the Uhlmann-fidelity [25][26][27] of final and initial state, which is however cumbersome to treat analytically [14][15][16]28]. Here, we like to consider a much simpler observable which will be used as definition of the Loschmidt echo in this letter. Many authors stick to the Uhlmann-fidelity measure as definition of the Loschmidt echo because of its interpretation as a distance measure. However, the fidelity is much harder to access in experiments than the probabilistic quantity proposed here. Since overlaps generally scale exponentially with system size, one considers the rate function which is well behaved in the thermodynamical limit. It can be seen as analogous to the free energy in statistical physics, with system size playing the role of inverse temperature. Dynamical transitions in the driven Dicke model We analyze DPTs in a driven version of the well known Dicke model. The Dicke model is an iconic model in quantum optics that has been studied in great detail both theoretically and in experiments [21]. It consists of N two level systems, referred to as the atoms, interacting collectively with a single mode of a light field inside a cavity. If in addition the atoms are driven by an external classical field, the Hamiltonian of the model reads [29] where we choose a frame rotating at the frequency of the drive, and counter-rotating terms are neglected. Here, a is the cavity photon annihilation operator and J = 1 2 N i=1 σ i is the collective angular momentum operator of the two level atoms. ∆ 0 and ∆ 1 define the detuning of the cavity mode and the two level systems respectively. For simplicity we consider a resonant drive ∆ 1 = 0. In addition to the unitary dynamics generated by the Hamiltonian (3), photons leak out of the cavity. This can be modeled by adding the usual dissipator of GKSL form [30,31], so that the master equation for the state of atoms and cavity is and γ denotes the cavity loss rate. A brief description of the dynamics of this model in mean-field theory can be found in the supplement. We first provide some numerical evidence that DPTs indeed appear in the model, by focusing on the Loschmidt echo of the atomic state ρ A = tr C ρ. As initial state we choose an empty cavity and all atoms in the ground state. The rate function we want to consider is In Fig 1, we show this function for moderate system sizes, obtained from numerical integration of the master equation. For the choosen parameters, dynamical transitions occur. As the system size increases, the rate function develops typical kinks at critical times where the overlap with the initial state is small, i.e. the rate function has a local maximum. Due to the dissipative nature of the dynamics, the state will spread in Hilbert-space over time, leading to a damping of the peaks. Crucially, even though the system is dissipative, this emergence of cusps is generic and does not require fine tuning of parameters. We note here that a straightforward numerical determination of Loschmidt echos is a computationally hard task. 
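The direct numerical route described above can be sketched with QuTiP for a small atom number. The Hamiltonian below is the standard driven Dicke form in the frame rotating at the drive frequency (cavity detuning Δ0, resonant drive of strength ω, co-rotating coupling g/√N); the exact normalization of the coupling, the definition of the mixed-state overlap as tr[ρ_A(0) ρ_A(t)], and the choice γ = ω are assumptions on my part rather than quotations of the paper's equations, and the parameter values are only illustrative.

```python
import numpy as np
import qutip as qt

N = 8                                   # number of atoms (kept small; overlaps shrink exponentially in N)
j = N / 2
n_cav = 12                              # cavity Fock-space truncation
omega, gamma, Delta0 = 1.0, 1.0, 0.1    # drive strength, cavity loss, cavity detuning (illustrative)
g = np.sqrt(25 / 72 * omega * gamma)    # g^2 = 25/72 * omega * gamma, as quoted for Fig. 1

a  = qt.tensor(qt.destroy(n_cav), qt.qeye(int(2 * j + 1)))
Jx = qt.tensor(qt.qeye(n_cav), qt.jmat(j, 'x'))
Jm = qt.tensor(qt.qeye(n_cav), qt.jmat(j, '-'))

# Driven Dicke Hamiltonian in the rotating frame, counter-rotating terms dropped (assumed form).
H = Delta0 * a.dag() * a + omega * Jx + (g / np.sqrt(N)) * (a.dag() * Jm + a * Jm.dag())

# Initial state: empty cavity, all atoms in the ground state |j, m = -j>
# (in QuTiP's spin basis the last basis index corresponds to m = -j).
rho0 = qt.tensor(qt.fock_dm(n_cav, 0), qt.fock_dm(int(2 * j + 1), int(2 * j)))
times = np.linspace(0.0, 10.0, 201)
sol = qt.mesolve(H, rho0, times, c_ops=[np.sqrt(gamma) * a])

rhoA0 = rho0.ptrace(1)                  # reduced atomic state at t = 0
rate = [-np.log((rhoA0 * s.ptrace(1)).tr().real) / N for s in sol.states]
```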
Even though due to the permutation symmetry in the present model the effective Hilbert-space dimension is polynomial in system size, quantum state overlaps generally exhibit exponential scaling so that exponentially more precision is required for larger N . In addition, the numerical approach does not give an insight to the mechanism leading to the emergence of the non-analyticities. (5) of the driven Dicke model for parameters g 2 = 25/72ωγ, ∆ 0 = 0.1ω. As the system size is increased, the rate function develops kinks at critical times. Exact results in the bad cavity limit In order to simplify the model allowing for an exact treatment, we adiabatically eliminate the cavity by assuming a large cavity loss rate γ. More precicely, we consider the limit γ → ∞ while keeping λ = ωγ/(2g 2 ) constant. The GKSL master equation for the atomic state ρ A that we want to consider reads This is, up to the dephasing term in the second line, the correct master equation describing the atomic state in bad cavity limit of the driven dissipative Dicke model. The added dephasing is merely a technical trick to avoid subtle complications. In fact, it can be argued that the presence of this term does not lead to a change of the Loschmidt rate function in the limit of infinite system size [32]. The steady state phase diagram of (6) resembles that of the cooperative resonance fluorescence model [33][34][35][36][37][38]. For λ = ωγ/(2g 2 ) < 1 there is a single symmetric steady state. At λ = 1 a second order symmetry breaking phase transition occurs. The model then exhibits oscillations which persist on a time scale of the order of the system size. As a result of these oscullations, DPTs occur in this phase. Proceeding with technical steps, we utilize results from Ref [33]. Therein it is shown that no entanglement between the atoms is produced by the dynamics generated with this master equation, and the state can be mapped to a classical stochastic process of coherent states. With this exact mapping, we are able to find an exact expression for the state in the limit of large system sizes, without relying on semi-classical approximations [39]. In particular, the P -function of the state is given by where φ and θ are spherical coordinates constituting the phase space of a spin [32]. The action S and the nonexponential prefactor F follow from the steepest descent evaluation of the path integral propagator for the Pfunction. Since the P -function obeys a Fokker-Planck equation, this corresponds to the weak noise theory of a classical stochastic process, and S is the action of the path integral introduced by Martin, Siggia, Rose and others (MSRJD) [40][41][42][43]. With the diagonal Prepresentation of the state in terms of coherent states, the Loschmidt echo is given by where |φ, θ is a spin coherent state and dΩ = dφdθ sin θ is the phase space measure for the spin. For typical initial conditions, in particular for all pure initial states, the overlap term scales exponentially with the system size, such that W (φ, θ) = − 1 N ln φ, θ|ρ A (0)|φ, θ is independent of N . The integral (8) can now be performed in steepest descent approximation, by expanding the exponent of the integrand around its minimum values up to second order. This exponent, we name it K, consists of the sum of two contributions, the MSRJD action S from (7) and the contribution from the overlap K(φ, θ, t) = S(φ, θ, t) + W (φ, θ) . 
(9) The steepest descent approximation is completed by performing a Gaussian integration, which yields Here, β is a label for the local minima of K, and K is the Hessian matrix of K. The rate function in the limit of infinite system size is then determined by the absolute minimum of the exponent K. A DPT occurs at a critical time when the value of K at two minima coincides. In order to compute the Loschmidt echo with the steepest descent method, the main task is to determine the action S. We provide a detailed description of this in the supplement. Fig 2 shows the Loschmidt rate function computed with the steepest descent method for λ = 1.2 and starting with the atomic ground state. (9) is displayed along the symmetry line θ = π/2 at different times, for the quench scenario in Fig 2. The asymptotic Loschmidt echo is determined by the absolute minimum of K, which switches at the critical time. The dashed line shows the extremal MSRJD action S, which has only a single minimum at all times. For large system sizes, we find excellent agreement with a direct integration of the master equation. As expected, the asymptotic function has a kink at a critical time, which can be seen as a first order transition, with K acting as the potential function. In Fig 3 this potential function is displayed along the cut θ = π/2 where the minima occur. We also display the extremal MSRJD action S. This object always has a single minimum S = 0 which follows the 'mean field' trajectory -this is the most relevant contribution to the state. The potential K, however, features two local minima, because in addition to the action it also includes the contribution W from the quantum overlap of trajectory and initial state. Swapping of the global minimum leads to a kink in the asymptotic Loschmidt rate function. For finite system size there exists a critical region of times in proximity to the critical time at which the Loschmidt echo is influenced by both minima. Then no non-analyticities occur and the kink is smoothed out. It also becomes clear that the non-analyticities are stable with respect to changes in the model parameters and that these merely determine the exact critical time. Non-analyticities in Fock state overlaps Nonanalyticities can generally arise in observables which are determined by the tails of the quantum distribution, such as overlaps. We do not see a reason to consider specifically only the Loschmidt echo as the overlap of interest. Recently, without refering to DPTs, a few works have been published which discuss cusp caustics occuring in Fock-space representations of quantum states following quenches [44][45][46]. The Fock-space amplitudes feature typical wave catastrophe patterns, regularized for finite system sizes by the discrete quantum theory. In Ref [46] even the stability of these catastrophes with respect to dephasing is discussed. With our knowledge of the full quantum state, we are able to compute all Fock-state overlaps. For our permutation symmetric spin model, by Fock-states we mean the symmetric Dicke states |m with J z |m = m |m , which are in fact symmetrized atomic states with fixed excitation number. The angular momentum quantum number ranges from m = − N 2 , ..., N 2 . We focus on diagonal elements of the density matrix in this basis. Since all overlaps exhibit exponential scaling in system size, it is again convenient to rescale them logarithmically In the case of model (6), we can compute these overlaps using the steps of the the previous paragraph. 
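To make the minimum-swapping mechanism concrete, here is a toy illustration (not the actual K(φ, θ, t) of the model): a one-dimensional exponent with two competing local minima whose depths cross at some time produces a kink in the N → ∞ rate function, while the finite-N phase-space integral rounds the kink off, exactly as described above.

```python
import numpy as np

# Toy exponent with two branches; branch "a" is favoured early, branch "b" late.
def K(x, t):
    branch_a = 0.5 * (x - 1.0) ** 2 + 0.3 * t
    branch_b = 0.5 * (x + 1.0) ** 2 + 1.0 - 0.2 * t
    return np.minimum(branch_a, branch_b)

xs = np.linspace(-4, 4, 4001)
ts = np.linspace(0.0, 5.0, 501)

# N -> infinity: rate(t) = min_x K(x, t). The global minimum swaps branches at 0.3 t = 1 - 0.2 t,
# i.e. t = 2, so the slope jumps from +0.3 to -0.2 and the rate function has a kink there.
rate_inf = np.array([K(xs, t).min() for t in ts])

# Finite N: -(1/N) ln \int dx e^{-N K(x,t)}; near t = 2 both minima contribute and the kink is smoothed.
def rate_finite(t, N):
    return -np.log(np.trapz(np.exp(-N * K(xs, t)), xs)) / N

rate_N20 = np.array([rate_finite(t, 20) for t in ts])
```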
Fig 4 shows the resulting rate functions for the same quench scenario as in Fig 2. The non-analyticity in the Loschmidt-echo, which is just r N/2 , continues for smaller m values at later critical times, defining a transition line. Since the dynamics is oscillatory, at later times new transition lines occur. Diffusion spreads the state in Hilbert-space so that peaks at later times are less pronounced. This picture makes clear that the non-anlyticities are generic in the model and the notion of a DPT or a critical time only exists on the level of a given observable. We display these elements logarithmically corresponding to the rate functions (11). Note that small values mean large overlap. Non-analyticities (cusps) are highlighted by the dashed lines. Measurement scheme Even though the model can be realized in current experimental platforms [22-24, 47, 48], straightforward measuremt of overlaps for finite system size does not predict non-analyticities in the thermodynamical limit. However, because of the exponential scaling, measurement of overlaps is restricted to small systems. Thus to find evidence for a DPT, further theoretical input is required [12,13]. To this aim we can utilize our knowledge from the analytical investigations. Loosely speaking, evidence for the transition can be found if in an experiment, one is able to probe the contributions to the Loschmidt echo of both minima from Eq (9) separately. In the driven Dicke model, this can be achieved in an elegant way by combining the measurement of the Loschmidt echo of the atoms with a homodyne measurement of the light field in the cavity. Since the light field contains information about the atomic state, the homodyne measurement outcome can predict the 'location' of this state in phase space. In more detail, one has to distinguish whether the cavity quadrature is in the 'left' or 'right' half of phase space, which corresponds to a generalized measurement given by the POVM E + + E − = 1 with Here, |α is a coherent state of the cavity field and α is the complex coherent state label. Realizing this in practice is as simple as discriminating the measurement outcomes by whether the homodyne measurement gives a positive or negative value of the quadrature a + a † . The (14). They cross at the critical time t ≈ 4ω indicating the dynamical transition. (unnormalized) reduced state of the atoms conditioned on the outcome of measurement (12) is given by and the full reduced state is recovered upon collecting all outcomes ρ A = ρ A+ +ρ A− . This way the Loschmidt echo can be written as Note that L ± can be obtained experimentally by measurement of the Loschmidt echo after measurement of the light field. Crucially, both contributions are overlaps which exhibit exponential scaling in system size. Therefore we know that in the thermodynamical limit only the minimum of both curves contributes to the corresponding rate function. If the two curves cross at a critical time, we have found evidence for an emerging non-analyticity. Fig 5 displays the rate functions corresponding to the conditioned states which we obtained by numerical integration with just N = 12 atoms. One can see that the two curves cross at the critical time where the dynamical transition is expected. Thus, even though the finite-size Loschmidt echo for the full state is smooth, the non-analyticity can be measured with minimal theoretical input. Discussion and Conclusions In this letter we studied dynamical phase transitions in the driven and damped Dicke model. 
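Since the explicit expressions for the POVM elements and the conditioned rate functions did not survive extraction, the following is a hedged reconstruction consistent with the description above (coherent-state labels in the left or right half of phase space, and the thermodynamic-limit rate function picking out the smaller of the two conditioned contributions); it is a sketch of the stated construction, not a verbatim quotation of Eqs. (12)-(14).

```latex
E_{\pm} = \frac{1}{\pi}\int_{\operatorname{Re}\alpha \gtrless 0} d^{2}\alpha\, |\alpha\rangle\langle\alpha| ,
\qquad E_{+} + E_{-} = \mathbb{1},
\\[4pt]
\rho_{A\pm}(t) = \operatorname{tr}_{C}\!\left[(E_{\pm}\otimes\mathbb{1}_{A})\,\rho(t)\right],
\qquad \rho_{A}(t) = \rho_{A+}(t) + \rho_{A-}(t),
\\[4pt]
L(t) = L_{+}(t) + L_{-}(t),
\qquad L_{\pm}(t) = \operatorname{tr}\!\left[\rho_{A}(0)\,\rho_{A\pm}(t)\right],
\\[4pt]
r_{\pm}(t) = -\tfrac{1}{N}\ln L_{\pm}(t),
\qquad r(t) \xrightarrow{\,N\to\infty\,} \min\{r_{+}(t),\, r_{-}(t)\}.
```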
These transitions are characterized by kinks in the Loschmidt rate function at critical times, and occur for a wide range of parameters. Focusing on the bad cavity limit, we were able to determine the Loschmidt echo in an exact way, by mapping the dynamics to a classical stochastic process. Quantum overlaps can then be expressed as classical phase space integrals. This property of the model allows to obtain a complete and exact description of the DPTs. The mechanism leading to non-analyticities is the same as in large deviation theory of classical dissipative systems [18][19][20]. Quantum overlaps are determined by minimization of a Landau-like potential. As in the Landau theory of first order phase transitions, at a critical time, this minimum swaps position leading to a kink. Previous studies have analyzed this mechanism only in classical systems. Our findings show that it occurs naturally in a simple model from quantum optics. From a general point of view, we find that all overlaps of the time evolved quantum state with Fock states are determined by large deviations of the quantum distribution in phase space, and can develop cusps asymptotically as the system size is increased. The model can be realized in current experimental platforms and we have presented a simple way to measure the rate function and the critical time with systems consisting of few atoms only. Because this scheme relies on measurements of the environment of the atoms, we crucially exploit that the system is not closed. From a theoretical point of view, the thorough description of dynamical phase transitions in the driven Dicke model can be a starting point to find quantum optical models in which dynamical transitions occur that accompany symmetry breaking. This would allow the study of scaling and universality near the critical time. It is a pleasure to thank Markus Heyl and Kimmo Luoma for discussions and advice. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. V.L. acknowledges support from the International Max Planck Research School (IMPRS) of MPIPKS Dresden.
2020-05-21T01:01:37.131Z
2020-05-20T00:00:00.000
{ "year": 2020, "sha1": "11ecad97de96e0c8908e6b103a4046621b371474", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2005.10013", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "11ecad97de96e0c8908e6b103a4046621b371474", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
55091905
pes2o/s2orc
v3-fos-license
Biomedical Knowledge Engineering Using a Computational Grid Bioengineering is an applied engineering discipline with the aims to develop specific methods and technologies for a better understanding of biological phenomena and health solutions to face the problems regarding the sciences of life. It is based on fields such as biology, electronic engineering, information technology (I.T.), mechanics and chemistry (MIT, 1999). Methods of Bioengineering concern: the modeling of the physiological systems , the description of electric phenomena or magnetic ones ,the processing of data, the designing of medical equipments and materials or tissues, the study of organisms and the analysis of the link structure property typical of biomaterials or biomechanical structures. Technologies of Bioengineering include: biomedical and biotechnological instruments (from the elementary components to the most complex hospital systems), prosthesis, robots for biomedical uses, artificial intelligent system, sanitary management systems, information systems, medical informatics, telemedicine (J. E. Bekelman et al, 2003). Biomedicine has recently had an innovative impulse through applications of computer science in Bioengineering field. Medical Informatics or better the Bioinformatics technology is characterized by the development of automatic applications in the biological sector whose central element is the information. There are several reasons to apply the “computer science” in many fields, such as the biomedical one. Advantages as the turn-around time and precision are among the basically improving factors for a job. For example the identification of the functions of genes has taken advantage from the application of an automatically system of analysis of database containing the result of many experiments of microarray getting information on the human genes involved in pathologies (C. Müller et al, 2009). With a such approach regions with specifically activities have been identified inside the DNA regions, different regions exist in the genome, some stretches are the actual genes, others regulates the functions of the former ones. Other research have been made through computational techniques on the Functional Genomics, Biopolymers and Proteomics, Biobank e Cell Factory (M. Liebman et al, 2008). This chapter explores a particularly promising area of systems development technological based on the concept of knowledge. The knowledge is useful learning result obtained by an information processing activity. The Knowledge Engineering, regards the integration of the knowledge in computer systems in order to solve the difficult problems which typically require a high level of human specialization. (M. C. 
Linn, 1993) Introduction Bioengineering is an applied engineering discipline with the aims to develop specific methods and technologies for a better understanding of biological phenomena and health solutions to face the problems regarding the sciences of life.It is based on fields such as biology, electronic engineering, information technology (I.T.), mechanics and chemistry (MIT, 1999).Methods of Bioengineering concern: the modeling of the physiological systems , the description of electric phenomena or magnetic ones ,the processing of data, the designing of medical equipments and materials or tissues, the study of organisms and the analysis of the link structure property typical of biomaterials or biomechanical structures.Technologies of Bioengineering include: biomedical and biotechnological instruments (from the elementary components to the most complex hospital systems), prosthesis, robots for biomedical uses, artificial intelligent system, sanitary management systems, information systems, medical informatics, telemedicine (J.E. Bekelman et al, 2003).Biomedicine has recently had an innovative impulse through applications of computer science in Bioengineering field.Medical Informatics or better the Bioinformatics technology is characterized by the development of automatic applications in the biological sector whose central element is the information.There are several reasons to apply the "computer science" in many fields, such as the biomedical one.Advantages as the turn-around time and precision are among the basically improving factors for a job.For example the identification of the functions of genes has taken advantage from the application of an automatically system of analysis of database containing the result of many experiments of microarray getting information on the human genes involved in pathologies (C. Müller et al, 2009).With a such approach regions with specifically activities have been identified inside the DNA regions, different regions exist in the genome, some stretches are the actual genes, others regulates the functions of the former ones.Other research have been made through computational techniques on the Functional Genomics, Biopolymers and Proteomics, Biobank e Cell Factory (M.Liebman et al, 2008).This chapter explores a particularly promising area of systems development technological based on the concept of knowledge.The knowledge is useful learning result obtained by an information processing activity.The Knowledge Engineering, regards the integration of the knowledge in computer systems in order to solve the difficult problems which typically require a high level of human specialization.(M.C. 
Linn, 1993) Whereas standalone computer system have had an important impact in Biomedicine, the computer networks are nowadays a technology to investigate new opportunities of innovation.The capacity of the networks to link so many information allows both to improve the already existing applications and introduce new ones; Internet and the Web are two well know examples.Information based processes involved in the research to discovery new knowledge take advantage from the new paradigms of distributed computing systems.This chapter is focuses on the design aspects of the knowledge-based computer systems applied to the biomedicine field.The mission is to support the specialist or researcher to solve problems with greater awareness and precision.At the purpose, a framework to specify a computational model will be presented.As an example, an application of the method to the diagnostic process will be discussed to specify a knowledge-based decision support system.The solution here proposed is not only to create a knowledge base by the human expert (or by a pool of experts) but support it using automatic knowledge discovery process and resources enhancing data, information and collaboration in order to produce new expert knowledge over time.Interoperability, resource sharing, security and collaborative computing will emerge and a computational model based on grid computing will be taken into account in order to discuss an advanced biomedical application.In particular in the next section it will be presented a framework for the Knowledge Engineering based on a problem solving strategy.In section 3 the biomedical diagnostic process will be analyzed using the knowledge framework.In particular the problem, the solution and knowledge resources will be carried out.In section 4 the design activity of the diagnostic process is presented.Results in terms of system specifications will be shown in terms of Decision Support System architecture, Knowledge Discovery and Grid Enable Knowledge Application.A finally discussion will be presented in the last section. Method for the knowledge Modeling is a building activity inspired by the problem solving for real problems which not have a unique exact solution.The Knowledge Engineering (K-Engineering) deals with the computer-system applications which are computational solutions of more complex problems which usually ask for a high levels of human skill.In this case the human knowledge must be encoded and embedded in knowledge based applications of computer systems.The K-Engineer build up a knowledge model useful for an algorithmic description by a structured approach or method.Three macro phases can be distinguished in the modeling process of knowledge: 1. Knowledge Identification (K-Identification); 2. Knowledge Specification (K-Specification); 3. Knowledge Refinement (K-Refinement).These phases can be cyclical and times retroaction rings are necessary.For instance, the simulation in the third phase can cause changes in the knowledge model.(A.Th.Schreiber, B. J. Weilinga, 1992).Each phases is composed by specific activities and for each activities the K-Engineering literature proposes different techniques.The Fig. 1 shows the modeling of the knowledge based on the problem solving strategy.The proposed framework is applied at different levels of abstraction from high to low level mechanisms (top-down method). 
What must be represented: At the epistemological level identify what should represent aspects of knowledge that is necessary to consider the application to be addressed.In particular, what are its classes, patterns, what are the inferential processes involved and the quality of relevant knowledge. Which is the problem: To identify the problem to be solved is important to address the investigation about the relevant knowledge.It will be very important in the next modeling phases. How the problem can be solved: It indicates strategies for solving a given problem based on patterns bounded in the application domain. How to represent: Modeling derives from the subjective interpretation of the knowledge engineer with regards to the problem to be faced; a mistake is always possible and therefore the knowledge model must be made in a revisable way.Tools and processes for the knowledge management have been consolidated, this management can be expressed in several ways: rules, procedures, laws, mathematical formulae, structural descriptions. Table1. Knowledge Identification guidelines. The interviewee must have some characteristics related with his life or his belonging to a certain social group, the number of the people interviewed must however be consistent, so it is possible to obtain every possible information on the phenomenon.The conversation between the two parts is not comparable to a normal conversation because the roles are not balanced: the interviewer drives and controls the interview respecting the freedom of the interviewer in expressing his opinions.According to the different degree of flexibility is possible to distinguish among: 1. structured interview 2. semi-structured interview 3. not structured interview.Usually the structured interview is used to investigate a wide phenomenon, the interview is carried out by a questionnaire supplied to a large sample of people; in this case the hypothesis must be well structured a priori.The structured interview can be used in a standard way but at the same time the limited knowledge of the phenomenon does not allow the use of the multiple choice questionnaire.As the number of the interviewees decreases a semi-structured or a un-structured interview can be taken in to account. Elicitation analysis The elicitation analysis is an effective method used by the knowledge engineer to individuate the implicit mental patterns of the users, and categorizing them.The knowledge engineer carry out this phase analyzing documentations about the domain under the investigation and match the information according to some well know mental model of him.Knowing the mental patterns and the implicit categorizations makes possible the organization of the information so that it is more simple to use them, improving, in that way, the quality of the product.Through the elicitation analysis it is possible to identify the classification criterion used by the users and to identify the content and the labels of categories they used.Possible differences in the categorization among various groups of interviewed can be seen and controlled. 
A draft of the conceptual model This activity establish a first formal representation of knowledge acquired up to now composed by elements and their relationships.The representation is used to check the correctness by the user.It is a formal scheme on which the K-specification phase will runs.The knowledge is represented using an high level description called conceptual model.This model is called conceptual because it is the result of a survey carried out by the literature and domain experts for the transfer of concepts considered useful in the field of study is concerned.Fundamental indications about "what is" and "how to build" the conceptual model are shown in Table 2. Some formalisms are proposed in the literature: the semantic networks are used to represent the knowledge with a graph structure; the frames are data structures which allow group, like inside a frame, the information about an entity; an object representation allows to join procedural aspects with declarative aspects, in a single formalism; and so on. What it is not: • it is not a basic of knowledge on paper/calculator Knowledge specification The goal of this stage is to get a complete specification of the knowledge.The following activities need to be carried out to build such a specification. Inference Now let is write about the inferential structures which make possible to know things starting from a codified knowledge.It is interesting, even for the inferential structures, to identify different types of structures, in order to focalize, during the construction of the conceptual pattern, the structures which are actually used in the specific application.The most general form is the one which turns into rules.However it is interesting to consider even more specific structures, as these ones help the identification of particular necessities inside an application.(Waterman [Wa86]).The main characteristics of an inference are the ability of specifying the knowledge, the ability of reasoning, the ability of interacting with the external world, the ability of explaining own behavior and the ability of learning.The inference architecture can be organized in object level and meta level.Each of them can be seen as an individual system, with an appropriate language of representation.The aim of the level object is to carry on reasoning in the domain of the system application, whereas the aim of the meta-level is to control the behavior of the object level. 
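As a concrete, deliberately toy illustration of the rule-based, object-level inference discussed above, the sketch below forward-chains a handful of rules over a set of observed findings until no new conclusions can be derived. The rules and findings are invented examples, not clinical knowledge.

```python
# Minimal forward-chaining inference over "if all conditions hold then conclude" rules.
rules = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "dyspnea"}, "suspected_pneumonia"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    derived = True
    while derived:                       # keep firing rules until a fixed point is reached
        derived = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                derived = True
    return facts

print(forward_chain({"fever", "cough", "dyspnea"}, rules))
# -> includes 'respiratory_infection' and 'suspected_pneumonia'
```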
Task analysis The aim of the task analysis is to identify the "main task" by the analysis of the users involvement in order to understand how to they execute their work, identifying types and levels: • How is the work carried on when more people are involved (workflow analysis); • A single man work during a day , a week or a mouth (job analysis); • Which tasks are executed by all the people who could use to product (task list); • The order every uses of execution of the tasks (tasks sequences); • Which steps and decision the user chooses to accomplish a task (procedural analysis); • How to explode a wide task into more subtasks (hierarchies of task).The Task analysis offers the possibility to view the needs, display the improvement areas and simplify the evaluation.It can be carried out according to: (Mager,1975;Gagnè,1970) • rational analysis -Inside the theories of the Knowledge it is a procedure which divides a task into simpler abilities, up to reaching the activities that can be executed by every process the task is assigned to.The result of this procedure is a typical hierarchy of activities with a correspondent hierarchy of execution aims.• empirical analysis -Inside the Knowledge Engineering it indicates a procedure which splits up the activity or task into executive process, strategies and meta-cognitive operations which the subject accomplishes during the execution of that task.The result is a sequence, not always ordered, of operations aiming at the realization of the task.This is an activity about K-Specification.It works on the output of the K-Identification (see Table 1), that specify the resolution strategy of the problem.At the purpose the task analysis is carried out by the following specific steps: Problem specification; Activity analysis; Task modeling and Reaching of a solution. 
In the Problem Specification phase the problem must be identified, specifying at the conceptual level one or more activities for its realization; these activities are analyzed in the following step. In the Activity Analysis a task is identified by grouping the activities that must be executed to achieve the aim of the task. Different task hierarchies exist in which the activities can be divided into subtasks. Working on the task hierarchy means both specializing every task and studying the task execution on the basis of priorities and temporal lines. Task Modeling builds a model that precisely describes the relationships among tasks. A model is a logical description of the activities that must be executed to achieve the users' goals. Model-based design aims to specify and analyze interactive software applications at a semantic level rather than at an implementation level. Methods for modeling the tasks are: • standard: analysis of how the tasks should be carried out; • descriptive: analysis of the activities and tasks just as they are actually carried out. Task models can be considered from the following points of view: • System task model. It describes how the current system implementation requires the tasks to be executed. • Envisioned task model. It describes how the users should interact with the system according to the designer. • User task model. It describes how the tasks must be done in order to reach the objectives according to the users. Usability problems can arise when a discrepancy occurs between the user task model and the system model. The last step in the task analysis (reaching of a solution) is devoted to specifying the identified tasks, which are the conceptual building blocks of this analysis. Table 3 shows a formalism to specify the aim of the task, the technique used for its realization and the result produced by the task execution. Moreover, a procedural description of the task must be produced using tools based on conceptual building blocks. Goal: aim of the task. Techniques: task description and its implementation. Result: output of the task. Table 3. A Task Description formalism. Knowledge refinement The aim of the Knowledge Refinement is to validate the knowledge model by using it, as far as possible, in a simulation process, and to complete the knowledge base by inserting a more or less complete set of knowledge instances. Analyzing the biomedical diagnostic process: building a model for a knowledge-based system The case study presented here refers to the diagnostic process, a knowledge-rich process prevalent in the biomedical field, which leads from the symptoms to the diagnosis of pathologies. Knowledge identification The identification of knowledge in biomedicine has been applied here as described in Table 4, using the framework proposed in the previous section. Table 4. The K-Identification for the diagnostic process. Elicitation analysis In order to create the first reference model of the diagnostic process, the K-Identification starts with the elicitation study. Since the final aim is the development of a conceptual model, it is useful to consider a comparable process in which a living organism is seen as a perfectly working computer. If a computer problem arises, the operating system signals it to the user. To activate such a process, a warning is necessary, i.e.
an error message or a report of faulty behaviour. At this point a good computer technician starts a diagnostic process based on the warning to identify the problem, in other words the error. The diagnosis of an organism is similar to the described scenario: the occurrence of a pathology is pointed out to the organism through the signal of one or more symptoms (as already described for computer errors). The medical diagnostic process is exercised by the specialist (in analogy with the computer technician), who studies the origin of the symptom and its cause, and hence the disease. A problem can be due to both endogenous and exogenous causes, and it provokes an alteration that would not normally happen (for instance, an alteration in the level of insulin produced by the pancreas); this alteration causes a change in the mechanisms associated with that element (for example, the mechanism by which insulin makes glucose enter the cells for the production of vital energy changes, so that an accumulation of glucose in the blood circulation is observed: subjects affected by diabetes). What has been learned by applying the elicitation analysis is summarized in Table 5, which pairs the two scenarios (e.g., "the computer shows the error" corresponds to "the patient has a throat ache"). Table 5. Some elicitation analysis results for the Diagnostic Process. Interview Although the elicitation study supplies important elements to build a conceptual model of the knowledge, the contribution of the expert is also of great importance. For this purpose the structured interview is applied here to extract the knowledge and the mental processes of the experts while they work; in other words, the interview aims to describe the experience of the specialists in terms of structured knowledge. From the interviews, the Clinical Case emerges as a summary composed of all the available information about the patient, a list of symptoms and a set of objective and instrumental examination results. Table 6 summarizes the results obtained by interview and elicitation analysis, in terms of problem and solution. Which is the problem: the diagnoses are not always easy to identify from the set of possible solutions; sometimes several investigations must be performed before an optimal solution is found. How it can be solved: a valuable support to the specialist in making a decision is highly desirable; an automatic computer system based on biomedical knowledge should be able to support the specialist with updated information and with decisions based on a computational formal approach. Table 6. Interview and Elicitation Analysis Results for the Diagnostic Process. The solution proposed here is not only to create a knowledge base directly from the human expert (or from a pool of experts), but especially to increase that knowledge through advanced computational analysis tools capable of enhancing data, information and collaboration, so as to produce expert knowledge over time. Semantic network The semantic network representation formalism is applied here to complete the knowledge identification phase and to provide a conceptual model as input to the next phase. See Fig. 2; a minimal coded sketch of such a network is given below.
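As a minimal sketch of how such a network can be encoded (the node and relation names are illustrative assumptions, anticipating the description of Fig. 2 that follows), the knowledge can be stored as labelled triples and queried by following the arcs:

```python
# Sketch of a semantic network as labelled (subject, relation, object) triples.
# Node and relation names are illustrative assumptions.

triples = [
    ("Pathology", "alters", "Biological entity"),
    ("Biological entity", "has", "Incorrect value"),
    ("Incorrect value", "detected_by", "Instrumental examination"),
    ("Symptom", "observed_in", "Patient"),
    ("Sign", "detected_by", "Physical examination"),
    ("Sign", "annotated_in", "Clinical case"),
]

def neighbours(node):
    """Return the outgoing arcs of a node."""
    return [(rel, obj) for subj, rel, obj in triples if subj == node]

def reachable(start):
    """Follow arcs transitively, e.g. from a pathology to the examinations that reveal it."""
    seen, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        for rel, obj in neighbours(node):
            if obj not in seen:
                seen.add(obj)
                frontier.append(obj)
    return seen

print(reachable("Pathology"))
# e.g. {'Biological entity', 'Incorrect value', 'Instrumental examination'}
```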
Nodes are objects, concepts and states, while arcs represent the relationships between nodes, as obtained from the knowledge identification phase. For example, the figure highlights how a pathology leads to an alteration of the value of a biological entity and how this variation is detected by an instrumental examination. New relationships can be made explicit, not only the evident ones but also those deduced from the parent-child inheritance. Fig. 2 also suggests that the signs detected during the physical examination can be directly annotated in the clinical case. Knowledge specification The second stage of the presented framework is to construct a specification of the K-model. In order to specify the K-model of the biomedical diagnostic process, an approach based on inference and task knowledge has been used. Inference Inference rules are used to capture the way in which a specialist reasons, according to the inference logical scheme. The inference rules so defined will be embedded into the Decision Support System in order to develop a knowledge-based specialist system. The summary of all these inferences helps to define the activities that the system must perform, either through a formal instrument describing the inferences or through the classes studied in the representation formalism. Fig. 3 represents the logical schema of the classes describing some inference rules, such as "the observation of a symptom strictly depends on the patient who is suffering from it". The dependence of the system activities on the inference rules can be translated either into mandatory steps or into a series of guidelines to be followed in designing the activities of the system. Many information systems translate it into an "algorithm", called the inferential engine, which can complete the design of the activities automatically. In some cases a prototype of the inferential engine is built; in the implementation phase it will then be possible to choose between developing a suitable piece of software or keeping the engine at a conceptual level. Fig. 4 shows a conceptual description of the biomedical diagnostic inferential engine, composed of the following elements: • Interpreter: it decides the rule to be applied (meta level); • Scheduler: it decides the execution order of the rules (object level); • Job memory: it contains the list of the operations carried out (object level). Task analysis To design an efficient and technologically advanced decision support system, both the correctness of the applied methodology and the rightness of the information must be considered. A well-working system is based not only on accurately selected and organized data but also on a model epistemologically adherent to everyday medical practice. The task design serves this purpose. Using a top-down approach, the analysis and the task design start from the macro-activity, or main task, designed for the solution of the problem of supporting the specialist's decision. Fig. 5 shows the main task of the solution, composed of both Central System and Central Db components; different databases are referenced by the External and Internal Db components.
The expected result is a form that returns a list of pathologies with their relative probabilities, starting from a list of symptoms entered by the specialist. The solution is an automatic computer system based on biomedical knowledge, able to support the specialist with updated information and with decisions based on a computational formal approach. The execution of a task implies a sequence of steps, each of them contributing to the achievement of the purpose. Task analysis can be carried out either with the descriptive approach, which describes the system organization in an analytical way, or with the applicative approach, which produces single and simple elements so that they can be studied. The task analysis starts from the study of the activities composing the main activity, in order to define the actual phases, which will then be translated into system tasks. Fig. 6 shows a scheme useful both to understand the succession of the activities and to determine the hierarchies (e.g., the activities "Symptoms formulation" and "Entering symptoms"). The result of the design of a single task is shown in Table 7. Task Model Knowledge Item Worksheet: NAME: Insert the symptom; POSSESSED BY: User terminal; USED IN: Main form; DOMAIN: It can be found both in the central system and in a remote terminal; PEOPLE: Users. Table 7. An example of Knowledge Specification for the Task Model in the Diagnostic Process. Knowledge refinement: analysis of information sources and data types In this application the Knowledge Refinement phase is carried out to complete the knowledge modeling. To this purpose, an analysis of the additional sources of information, as suggested by the task analysis, is presented here. Biomedical knowledge is mainly organized to solve problems in the formulation of diagnoses and therapies. A problem that the specialist has to solve can be classified according to the complexity of the diagnostic process: in the simpler case, the useful knowledge that leads directly to the identification of a disease is available; complex problems involve situations in which the knowledge about the disease is still in progress, so that not all the reasoning has been produced and some of it may be hidden in a large quantity of data. In these cases different schools of thought lead to different solutions, and the specialist then makes a decision based on heuristic knowledge. From the information point of view, the scientific literature, medical documents, expert consultations and forums can be cited as the main sources of information. In particular, digital information can be organized in databases, glossaries and ontologies. Table 8 shows the different sources of knowledge with their data types. Archives of known and solved cases: structured data organized into databases. Treatises of research and scientific publications: unstructured data in textual form, with some metadata. Social networks and forums: heuristic data. Table 8. Information Sources of Knowledge for the Diagnostic Process.
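Returning for a moment to the main task specified above (the form that returns pathologies with relative probabilities from the entered symptoms), a minimal sketch of such a ranking could look as follows; the symptom-pathology associations, weights and the naive normalization are illustrative assumptions only, not real clinical data.

```python
# Sketch of the "list of pathologies with relative probabilities" form.
# Associations and weights are illustrative assumptions, not real clinical data.

# weight = how strongly a symptom supports a pathology (0..1)
associations = {
    "pharyngitis": {"throat ache": 0.9, "fever": 0.6},
    "influenza":   {"fever": 0.8, "headache": 0.7, "throat ache": 0.4},
    "diabetes":    {"high glucose": 0.95, "fatigue": 0.5},
}

def rank(symptoms):
    """Score each pathology by the evidence of the entered symptoms and normalize."""
    scores = {}
    for pathology, weights in associations.items():
        scores[pathology] = sum(weights.get(s, 0.0) for s in symptoms)
    total = sum(scores.values()) or 1.0
    return sorted(((p, s / total) for p, s in scores.items()),
                  key=lambda item: item[1], reverse=True)

for pathology, prob in rank(["throat ache", "fever"]):
    print(f"{pathology}: {prob:.2f}")
```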
In information technology, the term database indicates archives structured so as to allow the access to and the management of the data (their insertion, deletion and update) by means of dedicated software applications. The database is a set of data divided by topic in a logical order, with the topics further divided into categories. A database is much more than a simple list or table: it offers the possibility to manage the data, allowing their retrieval, sorting, analysis, summarization and the creation of reports in a few minutes. A more thorough exploitation of the data can be carried out with more advanced technologies such as data mining. Research activities provide large bodies of knowledge, and several organizations work to organize these assets and the access to them (e.g. NLM, BMC). They are available on different portals and catalogued by technical-scientific discipline, geographical area and, sometimes, strategic programme. Thanks to this cataloguing it is possible to use Information Retrieval techniques, which allow a system to find scientific treatises automatically. Nevertheless, the information structures discussed so far do not allow a direct extraction of knowledge from the data or from the documents; for this purpose text mining technology can be used. Another very interesting source of knowledge is the one coming from the socialization of experiences in descriptive and persistent stores: forums, social networks, web pages, FAQs and any other place used for exchanging ideas and opinions. These tools have the ability to make the tacit knowledge of individuals explicit through socialization. In these virtual places, users describe their experiences and make themselves available for comparison with others, increasing their level of knowledge and acquiring new skills. Social knowledge is a flexible approach that supports the specialist in making decisions. Designing the knowledge based system Decision Support Systems are considered here to address the design of the solution supporting the specialist in the diagnostic process. Decision Support System A system whose aim is to support the user in making a decision is called a Decision Support System (DSS), that is, a system which makes tools available in order to increase the efficiency of the decisional process (G. DeSanctis, R.B. Gallupe, 1987). Fig. 7 shows a DSS high-level architecture, and Fig. 8 shows the structure of the decision-making process over the different forms of knowledge possessed by the specialist. In particular, it describes the steps taken by the decision maker, from the simplest decisions based on the data held, through further information that will be acquired, then the explicit domain knowledge and finally the heuristic knowledge. A "decision" is a definitive choice among incompatible options. Clinical decisions are always temporary and can be modified as the clinical case evolves. In many cases such a decision must be taken among different options, and it is often followed by further diagnostic or therapeutic decisions, which have the same level of difficulty as the first one. Fig. 9 shows a clinical decision tree: a schematic representation of the logical and temporal structure of a clinical situation in which it is necessary to take one or more decisions.
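A minimal sketch of such a decision tree is shown below (the node structure, probabilities and utilities are illustrative assumptions, not taken from the chapter): each decision node compares the expected utility of its options, and each chance node weights its outcomes by probability.

```python
# Sketch of a clinical decision tree with expected-utility evaluation.
# Probabilities and utilities are illustrative assumptions.

def chance(outcomes):
    """outcomes: list of (probability, subtree-or-value) pairs."""
    return {"type": "chance", "outcomes": outcomes}

def decision(options):
    """options: dict mapping an action to a subtree or a terminal value."""
    return {"type": "decision", "options": options}

def evaluate(node):
    if isinstance(node, (int, float)):          # terminal utility
        return node, None
    if node["type"] == "chance":
        value = sum(p * evaluate(sub)[0] for p, sub in node["outcomes"])
        return value, None
    # decision node: pick the option with the highest expected utility
    best = max(node["options"].items(), key=lambda kv: evaluate(kv[1])[0])
    return evaluate(best[1])[0], best[0]

tree = decision({
    "treat immediately": chance([(0.7, 0.9), (0.3, 0.4)]),
    "order further test": chance([(0.9, 0.8), (0.1, 0.2)]),
})

value, action = evaluate(tree)
print(f"recommended action: {action} (expected utility {value:.2f})")
```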
Based on the clinical decision tree model, different clinical strategies can be described by specifying the action to be taken in every possible situation. The analytical decisional structures so produced have many advantages over the intuitive way of making clinical decisions. Among these advantages is the possibility to concentrate on a single aspect of the problem while always keeping the whole into account. Another advantage is that the analysis of the decisions obliges the specialist to consider the relation between the acquired information and the subsequent decision. An example of analytical decisional structure is shown in Fig. 10. System structure Fig. 11 shows the knowledge-based system architecture. The K-Engineer collects, analyzes and formalizes the knowledge, producing a knowledge model, and is trained by the Domain Expert; the User (the biomedicine specialist) uses the system and reports what he would like and whether he is satisfied; the K-Maintainer provides the resources and uses conceptual tools for updating the knowledge system. The DSS examined here has some functional elements that use conceptual instruments and resources, as described in Table 9. The Knowledge Core Layer contains all the procedures necessary to solve the problem of the user; to build a knowledge-based system, precise procedures must be used in order to identify the relationships between apparently unrelated pieces of information. The Web Miner is the element for knowledge discovery from the web: it applies procedures similar to Data Mining to extract information from the resources present on the web. The underlying computational paradigm is Web Mining, which derives from Data Mining and deals with the discovery of resources on the web; it analyzes web pages and extracts from them the resources of the biomedical literature through queries. The Text Miner has two goals: the transformation of the documents into a representation suitable for the learning algorithm (the so-called preprocessing phase) and the grammatical analysis of the unstructured text to extract structured knowledge. In the preprocessing phase the Text Miner extracts the meta-information contained in the scientific texts and uses it to catalogue them in the Publication Catalogue, where the documents wait for the analysis to be performed according to the text mining algorithm. The grammatical analysis starts after a call by the K-Maintainer, which gives the Text Miner the parameters necessary to execute the analysis, contained in an XML document called Analysis Engine. The structured information is stored inside the Knowledge Repository. The Data Miner extracts knowledge from the data obtained by the Text Miner through automatic methods, and elaborates it according to the procedure selected by the K-Maintainer; it is interfaced with the Publication Catalogue and makes use of the Data Mining paradigm, which provides the instruments to locate, extract and transform the data and the instruments for their management. The Inferential Engine is made of an interpreter, which decides the useful rules to request from the Knowledge Base, and a scheduler, which organizes the rules to be developed and their execution order. The goal of the engine is to extract the rules for solving the problem by activating and recognizing them, examining the rules in the knowledge base in order to select the right one. It is organized into two parts: a working memory (or blackboard), where the general plan is stored, and a workflow engine of what to do, together with a description of the
solutions conjectured up to now. The inferential engine also contains the consistency enforcer: a module which, once a hypothesis has been conjectured, looks for evidence to reinforce or eliminate it. Thus, if an inference is a single deduction that the system can obtain from the premises (for example a single calculation), the inferential engine is the instrument that determines complex chains of inferences. The DataWarehouse contains the information extracted by the text mining operations, structured like the semantic network model and suitable for the analysis algorithms of the Data Miner; it also contains the structured information from any external databases that the K-Maintainer wants to integrate into the system. The Web Resources component is the database that contains the information extracted from the resources present on the web. The Publication Catalogue is the repository of the scientific literature found. Grid-enabled knowledge application In the field of computer systems for biomedicine, the case studies come with a long list of electronic documents about their health status, e.g. laboratory exams, diagnostic images, medical records, several specialist texts or even the hospital discharge letter. Such data and information, being sector based, cannot be integrated or read by different system tools even if digitalized, sometimes also because of the lack of appropriate communication infrastructures among the different systems. Even when they can be rebuilt or transferred to a similar computer platform, these documents cannot be reached by all the researchers or specialists working on the same case study, simply because the documentation is kept in several autonomous computer systems. One of the major obstacles to their integration is the lack of a unique standard, together with the different kinds of information products in the health world and the technological gap. Research on, and agreement about, a standard is even more important for the re-elaboration of the information acquired from the various information resources, balancing various elements (such as local resources, local networks, the Internet) and employing various communication technologies, protocols and switching and filtering systems. Besides these drawbacks, the techniques analyzed in the previous sections, such as text mining or statistical algorithms, when extended to a great amount of resources require a huge computational power, a large bandwidth and an internal organization that also makes interoperability possible. The solution proposed so far can be really useful only if an advanced computing technology is able to overcome the network and organizational problems that often make it difficult to share data and information. Fig. 12 refers to a scenario of distributed resources, all potentially very useful for the continuous improvement of expert knowledge, and indicates a solution based on grid computing. A worthy approach to using and sharing the resources, including computing power, mass storage and a large bandwidth, is a network of information instruments called a grid, whose main characteristic is its dynamism, i.e.
the ability to re-configure itself at any moment, incorporating new nodes and tolerating the loss of others, and capable of adapting to any required work whenever necessary. All these operations are performed automatically, in a way completely transparent to the user, who perceives the grid as a unique, stable, unchangeable, effective and always accessible system. Grids (or computational grids) are advanced services that allow users to share computational resources, storage space, scientific data and instruments among different computers. They are therefore a valid instrument to automatically store the accomplished research and spread the knowledge in the domain; grid technology indeed facilitates data downloading, arranges documents and periodically updates the searches. To define a grid architecture valid for the activities involved in the knowledge model, it is necessary to take into account that a grid is not only an architectural model providing computational resources for applications with a great need of processing, but also an infrastructure that can connect and unify heterogeneous resources distributed on a local or geographical scale. Interoperability becomes the main problem to solve; indeed, it is necessary to allow the interaction among different platforms, languages and research areas. In an interconnected zone, interoperability implies common authentication protocols; therefore the grid to be implemented needs, first of all, an architecture of protocols. In this way it is possible to define a basic system through which users and nodes can communicate. A grid system manages its own resources to obtain different service qualities, such as response time, availability, security and performance, in order to satisfy the users' requests. The quality of service (QoS) is a highly important parameter, but it is not the only one. An information system that has to solve highly complex problems, such as the management of biomedical knowledge, can take further advantages from grids. Scalability is one of these advantages: a grid can comprise millions of resources, implying a potential degradation of the services; for this reason the grid-enabled application should be able to deal with the connection latency over geographical distances. Dynamicity and adaptability are the criteria that determine the reliability: in a grid a breakdown is the rule, not an exception, since the great number of resources increases the chances of breakdown.
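As a minimal sketch of this "breakdown is the rule" attitude (node names, the failure model and the retry policy are illustrative assumptions), a grid-enabled application can resubmit an analysis job to another node when one fails:

```python
# Sketch of fault-tolerant job dispatch on unreliable grid nodes.
# Node names, the failure model and the retry policy are illustrative assumptions.

import random

class NodeFailure(Exception):
    pass

def run_on_node(node, job):
    """Simulated remote execution: each node may fail with some probability."""
    if random.random() < 0.3:
        raise NodeFailure(f"{node} went down")
    return f"{job} completed on {node}"

def submit(job, nodes, max_attempts=5):
    """Resubmit the job to the next node in the pool until it succeeds."""
    for attempt in range(max_attempts):
        node = nodes[attempt % len(nodes)]
        try:
            return run_on_node(node, job)
        except NodeFailure as err:
            print(f"attempt {attempt + 1}: {err}, retrying elsewhere")
    raise RuntimeError(f"{job} failed on all attempts")

print(submit("text-mining batch #42", ["node-a", "node-b", "node-c"]))
```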
Results and discussions This chapter deals with a particular class of biomedical engineering systems, namely computer-based Decision Support Systems. Different DSS applications have been proposed in the literature to date; they highlight the system performance in the biomedical field without examining the methodological aspects on which the investigation was based. Since these systems are of considerable complexity, their development and adoption are a problem in the field of biomedical engineering. In this chapter, in response to the problem just mentioned, a framework for the analysis and design of these systems is discussed; it is based on a knowledge engineering methodology, and an application to a diagnostic process in the biomedical field is also presented. Methods for problem solving vary from ad hoc to highly structured. Ad hoc approaches are often used to solve simple problems in a linear way, whereas highly structured methodologies for studying complex problems become more formal, and it is usual to pre-specify all the system requirements. In this chapter a highly structured problem-solving life cycle, based on the schema shown in Fig. 1, has been used to carry out the analysis and design phases: the analysis to gain an understanding of the application domain with its problem and the related conceptual solution, and the design to specify a system that meets these requirements. Although this chapter focuses on the analysis and design stages of the problem-solving life cycle, there is a further system-building phase that focuses on the actual construction of the system through an executive project. Fig. 13 summarizes the results of the problem-solving life cycle both for the analysis and for the design. The designed system is made up of a number of components, each carrying out part of the system function. Components are important because they help to handle the complexity of a system and thus improve its understanding. Components communicate with one another by passing messages, and a good system is made up of highly independent components with minimal flows between them. The components are identified according to the encapsulation strategy, which hides the internal working of procedural elements so as to protect the other parts of the system from the changes that would occur if this working were faulty, or if one decided to implement it in a different way.
From the technological point of view, the functional components of the decision support system aim to widen the capabilities of the human being. For example, the automatic retrieval of the scientific literature is important for the performance of the system: the web offers a great number of scientific documents, which slows down manual searching. Moreover, the use of automatic tools is an advantage for searching texts, reading them and extracting knowledge, which resembles the study made by an expert but with reduced times, large quantities of examined texts and lasting storage. However, when dividing the tasks into simple activities, it is necessary to respect the main rules of biomedical knowledge in order to develop a functional knowledge-based automatic system: algorithms and data will be effective only if the study of the knowledge domain has been clearly carried out. The design decisions have specified the functional blocks and the interactions needed to solve the problem. All of them can be regarded in terms of computational resources such as algorithms, data and computational power; as a consequence, the design phase must also specify a computing model and a resource organization able to satisfy the user requirements and the system performance. A grid-computing-based model has been discussed in order to support the development of a knowledge-based system with a high quality of service, based on interoperability, resource sharing, security and collaborative computing. Other computational aspects left open during the analysis, such as the representational formats, the computational methods used to compute inferences, the dynamic data storage and the communication media, could also be taken into account. Finally, it must be considered that the study presented in this chapter shows a methodological solution, or framework, focused on knowledge for developing knowledge-based systems, very useful in biomedical engineering, and that the application presented here is a case study to examine the way in which the framework works. Conclusions Different Decision Support System applications have been proposed in the literature to date, highlighting the system performance in the biomedical field without examining the methodological aspects on which the investigation was based. Since these systems are of considerable complexity, their development and adoption are a problem in the field of biomedical engineering. In this chapter a tool for biomedical engineering has been presented, based on a Knowledge Engineering framework specialized in developing decision support computer systems. The framework has been applied successfully to the biomedical diagnostic process. The framework application explores technologies for knowledge applied to processes and resources, enhancing data, information, collaboration and computing models in order to produce new expert knowledge over time. The study presented in this chapter has shown that in the digital age new knowledge can also be produced automatically, and that the knowledge produced by human processes can be effectively supported by specific digital resources (or knowledge resources). To this end, it has been shown that the application of distributed computing to these knowledge-based problems requires structural characteristics such as interoperability, resource sharing, security and collaborative computing; to this aim the application presented here has adopted the grid computing approach.
Fig. 3. Logical Schema based on Classes to represent the Inference component of the K-model.
Fig. 4. A Conceptual Description of the Biomedical Diagnostic Inferential Engine.
Fig. 5. A representation of the Main Task based on Functional Blocks and Components.
Fig. 6. Task Analysis and Design Results: hierarchies of tasks and task composition by sequences of Activities using building-block based tools.
Fig. 7. A Scheme of a Computer-Based DSS reference architecture.
Fig. 8. Hierarchical Relationship among Information, Knowledge and Decision.
Fig. 13. Analysis and Design for the Biomedical Diagnostic Process.
[Fig. 2 node labels: Instrumental Examination, Symptoms, Physical Examination, Pathology, Medical History, Signs, Biological Entity, Patient Info, Clinical Case, Incorrect Value.]
[Fig. 8 levels: Executive Information Level, Management Information Level, Knowledge Support Level, Inferential Support Level, Task Support Level; contents: Basic Data, Information, Explicit Knowledge, Tacit Knowledge, Decision.]
Table 9. List of functional elements.
2018-11-14T14:55:45.700Z
2011-01-08T00:00:00.000
{ "year": 2011, "sha1": "3da71fc3b12a5ce104e6074a6d605d72f560e9b9", "oa_license": "CCBYNCSA", "oa_url": "https://www.intechopen.com/citation-pdf-url/12928", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "6989ca70f062413fbdc19a8b1b63da184cf535c6", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
254106558
pes2o/s2orc
v3-fos-license
Horizon quantum mechanics of rotating black holes The horizon quantum mechanics is an approach that was previously introduced in order to analyze the gravitational radius of spherically symmetric systems and compute the probability that a given quantum state is a black hole. In this work, we first extend the formalism to general space-times with asymptotic (ADM) mass and angular momentum. We then apply the extended horizon quantum mechanics to a harmonic model of rotating corpuscular black holes. We find that simple configurations of this model naturally suppress the appearance of the inner horizon and seem to disfavor extremal (macroscopic) geometries. Introduction Astrophysical compact objects are known to be usually rotating, and one correspondingly expects most black holes formed by the gravitational collapse of such sources be of the Kerr type. The formalism dubbed horizon quantum mechanics (HQM) [1][2][3][4][5][6][7][8], was initially proposed with the purpose of describing the gravitational radius of spherically symmetric compact sources and determining the existence of a horizon in a quantum mechanical fashion. It therefore appears as a natural continuation in this research direction to extend the HQM to rotating sources. Unfortunately, this is not at all a conceptually trivial task. In a classical spherically symmetric system, the gravitational radius is uniquely defined in terms of the (quasi-)local Misner-Sharp mass and it uniquely determines the location of the trapping surfaces where the null geodesic expansion vanishes. The latter surfaces are proper horizons in a timeindependent configuration, which is the case we shall always a e-mail: casadio@bo.infn.it b e-mail: A.Giugno@physik.uni-muenchen.de c e-mail: andrea.giusti@bo.infn.it d e-mail: octavian.micu@spacescience.ro consider here. It is therefore rather straightforward to uplift this description of the causal structure of space-time to the quantum level by simply imposing the relation between the gravitational radius and the Misner-Sharp mass as an operatorial constraint to be satisfied by the physical states of the system [3]. In a non-spherical space-time, such as the one generated by an axially symmetric rotating source, although there are candidates for the quasi-local mass function that should replace the Misner-Sharp mass [9], the locations of trapping surfaces, and horizons, remain to be determined separately. We shall therefore consider a different path and simply uplift to a quantum condition the classical relation of the two horizon radii with the mass and angular momentum of the source obtained from the Kerr metric. This extended HQM is clearly more heuristic than the one employed for the spherically symmetric systems, but we note that it is indeed fully consistent with the expected asymptotic structure of axially symmetric space-times. Beside the formal developments, we shall also apply the extended HQM to specific states with non-vanishing angular momentum of the harmonic black hole model introduced in Ref. [10]. 1 This model can be considered as a working realization of the corpuscular black holes proposed by Dvali and Gomez [12][13][14][15][16][17][18][19], and it turns out to be simple enough, so as to allow one to determine explicitly the probability that the chosen states are indeed black holes. Furthermore, we will investigate the existence of the inner horizon and likelihood of extremal configurations for these states. The paper is organized as follows: at the beginning of Sect. 
2, we briefly summarize the HQM and recall some of the main results obtained for static spherically symmetric sources; the extension of the existing formalism to the case of stationary axisymmetric sources, which are both localized in space and subject to a motion of pure rotation, is presented in Sect. 2.2; a short survey of the harmonic model for corpuscular black holes is given in Sect. 3, where we then discuss some elementary applications of the HQM to rotating black holes whose quantum state contains a large number of (toy) gravitons; finally, in Sect. 4, we conclude with remarks and hints for future research. Horizon quantum mechanics We start from reviewing the basics of the (global) HQM for static spherically symmetric sources [1][2][3][4][5][6][7][8], and then extend this formalism to rotating systems by means of the Kerr relation for the horizon radii in terms of the asymptotic mass and angular momentum of the space-time. In particular, we shall rely on the results for the "global" case of Ref. [3] and follow closely the notation therein. Spherically symmetric systems The general spherically symmetric metric g μν can be written as 2 where r is the areal coordinate and x i = (x 1 , x 2 ) are coordinates on surfaces of constant angles θ and φ. The location of a trapping surface is then determined by the equation where ∇ i r is perpendicular to surfaces of constant area A = 4 π r 2 . If we set x 1 = t and x 2 = r , and if we denote the static matter density by ρ = ρ(r ), the Einstein field equations tell us that where the Misner-Sharp mass is given by as if the space inside the sphere were flat. A trapping surface then exists if there are values of r such that the gravitational radius r H = 2 p m/m p ≥ r . If this relation holds in the vacuum outside the region where the source is located, r H becomes the usual Schwarzschild radius associated with the total Arnowitt-Deser-Misner (ADM) [20] mass M = m(∞), (2.5) and the above argument gives a mathematical foundation to Thorne's hoop conjecture [21]. This description clearly becomes questionable for sources of the Planck size or lighter, for which quantum effects may not be neglected. The Heisenberg principle introduces an uncertainty in the spatial localization of the order of the Compton-de Broglie length, λ M p m p /M, and we could argue that R H only makes sense if R H λ M , that is, M m p . The HQM was precisely proposed in order to describe cases in which one expects quantum uncertainties are not negligible. For this purpose, we assume the existence of two observables, the quantum Hamiltonian corresponding to the total energy M of the system, 3 where the sum is over the Hamiltonian eigenmodes, and the gravitational radius with eigenstateŝ General states for our system can correspondingly be described by linear combinations of the form but only those for which the relation (2.5) between the Hamiltonian and gravitational radius holds are viewed as physical. In particular, we impose (2.5) after quantization, as the weak Gupta-Bleuler constraint The solution is clearly given by which means that Hamiltonian eigenmodes and gravitational radius eigenmodes can only appear suitably paired in a physical state. The interpretation of this result is simply that the gravitational radius is not an independent degree of freedom in our treatment, precisely because of the constraint (2.5). 
4 By tracing out the gravitational radius part, we recover the spectral decomposition of the source wave function, (2.11) in which we used the (generalized) orthonormality of the gravitational radius eigenmodes [3]. Note that Eq. (2.10) now ensures that the result of this operation of integrating out the gravitational radius is still a pure quantum state. Conversely, by integrating out the energy eigenstates, we will obtain the horizon wave function (HWF) [1][2][3] where m p R Hα /2 p = E(R Hα ) is fixed by the constraint (2.5). If the index α is continuous (again, see Ref. [3] for some important remarks), the probability density that we detect a gravitational radius of size R H associated with the quantum state | ψ S is given by P H (R H ) = 4 π R 2 H |ψ H (R H )| 2 , and we can define the conditional probability density that the source lies inside its own gravitational radius R H as (2.14) where P S (r < R H ) = 4 π R H 0 |ψ S (r )| 2 r 2 dr . 5 Finally, the probability that the system in the state | ψ S is a black hole will be obtained by integrating (2.14) over all possible values of R H , namely Note that now the gravitational radius is necessarily "fuzzy" and characterized by an uncertainty R H = R 2 H − R H 2 . This quantum description for the total ADM mass M and global gravitational radius R H will be next extended to rotating sources by appealing to the asymptotic charges of axially symmetric space-times. We would like to recall that in Ref. [3] a local construction was also introduced based on the quasi-local mass (2.4), which allows one to describe quantum mechanically any trapping surfaces. However, that local analysis cannot be extended to rotating sources without a better understanding of the relation between quasi-local charges and the corresponding casual structure [9]. Rotating sources: Kerr horizons Our aim is now to extend the HQM to rotating sources, for which there is no general consensus about the proper quasilocal mass function to employ, and how to determine the causal structure from it. For this reason, we shall explicitly consider relations that hold in space-times of the Kerr family, generated by stationary axisymmetric sources which are both localized in space and subject to a motion of pure rotation in the chosen reference frame. We assume the existence of a complete set of commuting operators { H , J 2 , J z } acting on a Hilbert space H connected with the quantum nature of the source. We also consider only the discrete part of the energy spectrum [3], and denote with α = {a, j, m} the set of quantum numbers parametrizing the spectral decomposition of the source, that is, where the sum formally represents the spectral decomposition in terms of the common eigenmodes of the operators From the previous discussion, one can also easily infer that j ∈ N 0 /2, m ∈ Z/2, with |m| ≤ j, and a ∈ I, where I is a discrete set of labels that can be either finite of infinite. Let us first note that Eq. (2.16) stems from the idea that the space-time should reflect the symmetries of the source. Therefore, our first assumption is that the source should obviously have an angular momentum in order to describe a rotating black hole. Now, for a stationary asymptotically flat space-time, we can still define the ADM mass M and, following Ref. 
[3] as outlined in the previous subsection, we can replace this classical quantity with the expectation value of our Hamiltonian, 7 (2.20) In general relativity, we can also define a conserved classical charge arising from the axial symmetry by means of the Komar integral. This will be the total angular momentum J of the Kerr space-time. However, in our description of the quantum source, we have two distinct notions of angular momentum, i.e. the total angular momentum 21) and the component of the angular momentum along the axis of symmetry Since, at least classically, we can always rotate our reference frame so that the axis of symmetry is along the z axis, it is reasonable to considerĴ 2 as the quantum extension of the classical angular momentum for a Kerr black hole, (2.23) In the following, we will further assume that Ĵ z is maximum in our quantum states, so that the proper (semi-)classical limit is recovered, that is, For the Kerr space-time we have two horizons given by provided J 2 < M 4 . Let us then introduce two operatorsR (±) and, for the sake of brevity, write their eigenstates aŝ The generic state for our system can now be described by a triply entangled state given by but Eq. (2.25) tells us that in order to be able to define the analog of the condition (2.9) for the rotating case, we have to assume some mathematical restrictions on the operator counterparts of M and J . First of all, the term J 2 /M 2 tells us that we should assumeĤ to be an invertible self-adjoint operator, so that For this purpose, it is useful to recall a corollary of the spectral theorem: Let be a self-adjoint positive semi-definite operator. Then has a positive semi-definite square rootŜ, that is,Ŝ is self-adjoint, positive semi-definite, and If is positive definite, thenŜ is positive definite. It follows that the operatorĤ 2 −Ĵ 2 (Ĥ −1 ) 2 should be, at least, a positive semi-definite operator. On defining the operatorŝ we see that the physical states of the system are those simultaneously satisfying These two conditions reduce to (2.34) By tracing out the geometric parts, we should recover the matter state, that is, which implies Now, by integrating out the matter state, together with one of the two geometric parts, we can compute the wave function corresponding to each horizon, 37) It is also important to stress that the Hamiltonian constraints imply a strong relation between the two horizons, indeed we Corpuscular harmonic black holes In the corpuscular model proposed by Dvali and Gomez [12][13][14][15][16][17][18][19], black holes are macroscopic quantum objects made of gravitons with a very large occupation number N in the ground state, effectively forming Bose-Einstein condensates. As also derived in Ref. [22] from a post-Newtonian analysis of the coherent state of gravitons generated by a matter source, the virtual gravitons forming the black hole of radius R H are "marginally bound" by their Newtonian potential energy U , that is, where μ is the graviton effective mass related to their quantum mechanical size via the Compton/de Broglie wavelength λ μ p m p /μ, and λ μ R H . A first rough approximation for the potential energy U N is obtained by considering a square well for r < λ μ , where is the Heaviside step function and the coupling constant α = 2 p /λ 2 μ = μ 2 /m 2 p . The energy balance (3.1) then leads to N α = 1 and, with λ μ R H , so that 4) A better approximation for the potential energy was employed in Ref. 
[10], which takes the harmonic form (3.6) yields the well-known eigenfunctions where N is a normalization constant, 1 F 1 the Kummer confluent hypergeometric function of the first kind and Y lm (θ, φ) are the usual spherical harmonics. The corresponding eigenvalues are given by where n is the radial quantum number. It is important to remark that the quantum numbers l and m here must not be confused with the total angular momentum numbers j and m of Sect. 2.2, as the latter are the sum of the former. At the same time, the "energy" eigenvalues E nl must not be confused with the ADM energy E aj of that section, here equal to N μ by construction. If we denote with n 0 and l 0 the quantum numbers of the highest "energy" state, and include the graviton effective mass μ in the constant V 0 (0), the condition (3.1) becomes E n 0 l 0 0, or which yields ω d 2 2h μ 2 n 0 + l 0 + 3 2 . (3.10) We now further assume that d λ μ R (+) H and use the Compton relation for μ, so that the above relation fully determines The potential can be finally written as and the eigenvalues as which of course holds only for n ≤ n 0 and l ≤ l 0 . Let us remark that the fact the above "energy" is negative for the allowed values of n and j is indeed in agreement with the post-Newtonian analysis of the "maximal packing condition" for the virtual gravitons in the black hole [22]. 8 In the following, we shall consider a few specific states in order to show the kind of results one can obtain from the general HQM formalism of Sect. 2.2 applied to harmonic models of spinning black holes. Rotating black holes We shall now consider some specific configurations of harmonic black holes with angular momentum and apply the extended HQM described in the previous section. We first remark that the quantum state of N identical gravitons will be a N -particle state, i.e. a vector of the N -particle Fock space F = H ⊗N , where H is a suitable 1-particle Hilbert space. However, both the Hamiltonian of the systemĤ and the gravitational radiusR H are global observables and act as N -body operators on F. Single eigenstates The simplest configuration corresponds to all toy gravitons in the same mode, and the quantum state of the system is therefore given by where | g represents the wave function of a single component. In particular, this | is a Hamiltonian eigenstate, for which the total ADM energy is simply given by and each graviton is taken in one of the modes (3.7). For the sake of simplicity, we shall set n = n 0 = 0, l = l 0 = 2 and m = ±2, that is where the normalization constant N = 4/( √ 15 π 1/4 λ 7/2 μ ). The total angular momentum is thus given by 17) where N + ≥ N − = N − N + is the number of spin up constituents (with m = +2). We also introduced the constant where the approximate expression holds for N 1. Note that L 2 = 0 for n + ≡ N + /N = 1/2 (the non-rotating case with N + = N − ) and grows to a maximum L 2 1+O(1/N ) for the maximally rotating case n + = 1 (or N + = N ). Since we are considering an eigenstate of both the Hamil-tonianĤ and the total angular momentumĴ 2 , the wave functions (2.37) for the two horizons will reduce to single eigenstates of the respective gravitational radii as well. 
In particular, replacing the above values into (2.30) and (2.31) yields The classical condition for the existence of these horizons is that the square root be real, which implies (3.20) The above bound vanishes for N + = N − = N /2, as expected for a spherical black hole, and is maximum for N + = N , in which case it yields (3.21) again for N 1. Since we are modeling black holes, it is particularly interesting to study in detail the consequences of assuming that all the constituents of our system lie inside the outer horizon. In other words, we next require that the Compton length of gravitons, λ μ = p m p /μ, is such that the modes (3.16) are mostly found inside the outer horizon radius R (+) H . In order to impose this condition, we compute the singleparticle probability density where we used |ψ 02+2 | 2 = |ψ 02−2 | 2 . From Fig. 1, we then see that this probability is peaked well inside R (+) H for λ μ = R (+) H /4, whereas λ μ = R (+) H /2 is already borderline and λ μ = R (+) H is clearly unacceptable. We find it in general convenient to introduce the variable (3.23) which should be at least 1 according to the above estimate, so that Eq. (3.19) reads which we can solve for x = 2 γ m 2 p /(N μ 2 ), that is, with the condition x ≡ (L/γ ) x ≤ 1 to ensure the existence of the square root. The only positive solution is given by for which the existence condition reads ( − 1) 2 ≥ 0 and is identically satisfied. The effective mass is then given by As a function of N /2 ≤ N + ≤ N , the above squared mass interpolates almost linearly between μ 2 0 = γ m 2 p /N for N + = N − = N /2 (so that L 2 = 0) andμ 2 (1 + γ 2 ) m 2 p /(γ N ) for the maximally rotating case N + = N 1 (for which L 2 1). The Compton length reads the ADM mass is (3.29) and the angular momentum for all values of L ≥ 0. This seems to suggest that N constituents of effective mass μ ∼ m p / √ N cannot exceed the classical bound for black holes, or that naked singularities cannot be associated with such multi-particle states. However, a naked singularity has no horizon and we lose the condition (3.1) from which the effective mass μ is determined. If naked singularities can still be realized in the quantum realm, they must be described in a qualitatively different way from the present one. 9 Let us now plug the effective mass (3.27) into Eq. (3.19), (3.31) One has L 2 = γ 2 for (3.32) Since 1/2 ≤ n + ≤ 1, the critical value n c becomes relevant only for γ N 1. For N 1 and γ 1, the horizon radii are thus given by and 2 γ λ μ R (+) H , as we required. The above horizon structure for 1/2 ≤ n + ≤ 1 is displayed for γ = 2 and N = 100 in Fig. 2, where we also recall that λ μ = R (+) H /4. It is particularly interesting to note that the extremal Kerr geometry can only be realized in our model if γ is sufficiently small. In fact, (3.34) For γ = 1 and N = 100, the horizon structure is displayed in Fig. 3, where we see that the two horizons meet at L 2 1, that is, the configuration with n + 1 in which (almost) all constituents are aligned. Note also that, technically, for N 1 and γ small, there would be a finite range n c < n + ≤ 1 in which the expressions of the two horizon radii switch. However, this result is clearly more dubious as one would be dealing with a truly quantum black hole made of a few constituents just loosely confined. Such configurations could play a role in the formation of black holes, or in the Finally, let us apply the HQM and compute the probability (2.15) that the system discussed above is indeed a black hole. 
We first note that, since we are considering eigenstates of the gravitational radii, the wave function (2.37) for the outer horizon will just contribute a Dirac delta peaked on the outer expectation value (3.19) to the general expression (2.14), that is, where r ≡ (r, θ, φ), the joint probability density in position space is simply given by where we used Eq. (3.22). It immediately follows that where we recall γ was defined in Eq. (3.23), and depends on N and n + . The single-particle (N = 1) black hole probability P (+) (γ ) is represented by the solid line in Fig. 4, from which it is clear that it practically saturates to 1 for γ 2. The same graph shows that the minimum value of γ for which P BH (n + , N ) = [P (+) (γ )] N approaches 1 increases with N (albeit very slowly). For instance, if we define γ c as the value at which P BH (n + , N ) 0.99, we obtain the values of γ c plotted in Fig. 5. It is also interesting to note that, for γ = 1, which we saw can realize the extremal Kerr geometry, we and the system is most likely not a black hole for N 1, in agreement with the probability density shown in Fig. 1. One might indeed argue this probability is always too small for a (semi)classical black hole, and that the extremal Kerr configuration is therefore more difficult to achieve. Analogously, we can compute the probability P IH that the inner horizon is realized. Instead of Eq. (3.35), we now have which analogously leads to It is then fairly obvious that, for any fixed value of γ , P IH (n + , N ) ≤ P BH (n + , N ) and that equality is reached at the extremal geometry with R (−) H R (+) H . Moreover, from 0 ≤ L 2 ≤ 1 and Eq. (3.33), we find R (−) H R (+) H /γ 2 , so that for γ = 2, the probability P IH (0.04) N is totally negligible for N 1. This suggest that the inner horizon can remain extremely unlikely even in configurations that should represent large (semi-)classical black holes. Superpositions The next step is investigating general superpositions of the states considered above, where i |a i | 2 = 1 and so that M i = N i μ i and J i = (2 N i+ − N i ) j i ≡ N i (2 n i+ − 1) j i . One can repeat the same analysis as the one performed for the single-mode case, except that the two HWFs will now be superpositions of ADM values as well. In practice, this means that Eqs. (3.35) and (3.42) are now replaced by (3.47) and the expectation values of the horizon radii are correspondingly given by As usual, we obtain the probability that the system is a black hole by considering the outer horizon, for which where (3.51) The explicit calculation of the above probability immediately becomes very cumbersome. For the purpose of exemplifying the kind of results one should expect, let us just consider a state where the two modes in superposition are given by N constituents with quantum numbers n 1 = 0, l 1 = 2 and m = ±2 in the state (3.16), here denoted with | g 1 ; the same number N of gravitons with quantum numbers n 2 = 1, l 2 = 2 and m = ±2 in the state where we further assumed that all constituents have the same Compton/de Broglie wavelength λ μ . It then follows that The probability (3.49) can be computed explicitly and is shown in Fig. 6 for N = 100, with a = b = 1. 
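As a quick numerical illustration of the single-mode result of the previous subsection (a sketch under the stated assumptions, not code from the paper: it takes the n = 0, l = 2 radial density proportional to r^6 exp(-r^2/lambda_mu^2) with the normalization quoted in the text, and R_H^(+) ~ 2 gamma lambda_mu), one can check how P_BH = [P^(+)(gamma)]^N behaves with gamma and N:

```python
# Numerical sketch (not from the paper): single-mode black hole probability
# P_BH(gamma, N) = [P(+)(gamma)]^N, assuming the n=0, l=2 harmonic mode with
# radial density ~ r^6 exp(-r^2/lambda^2) and R_H(+) ~ 2*gamma*lambda.

import math

def p_plus(gamma, steps=4000):
    """P(+)(gamma) = (16/(15*sqrt(pi))) * integral_0^{2*gamma} x^6 e^{-x^2} dx (Simpson rule)."""
    a, b = 0.0, 2.0 * gamma
    h = (b - a) / steps
    f = lambda x: x**6 * math.exp(-x * x)
    s = f(a) + f(b)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return (16.0 / (15.0 * math.sqrt(math.pi))) * (h / 3.0) * s

for gamma in (1.0, 1.5, 2.0):
    p1 = p_plus(gamma)
    print(f"gamma = {gamma}:  P(+) = {p1:.5f},  P_BH(N=100) = {p1**100:.3e}")
# gamma = 1 gives P(+) ~ 0.67, so P_BH is negligible for N = 100,
# while gamma ~ 2 gives P_BH close to 1, in line with the trend of Figs. 4 and 5.
```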
Besides the specific shape of the curves in Fig. 6, the overall result appears in line with what we found in the previous subsection for a Hamiltonian eigenstate: the system is most certainly a black hole provided the Compton/de Broglie length is sufficiently shorter than the possible outer horizon radius (that is, for sufficiently large γ 1 and γ 2 ). Conclusions After a brief review of the original HQM for static spherically symmetric sources, we have generalized this formalism in order to provide a proper framework for the study of quantum properties of the causal structure generated by rotating sources. We remark once more that, unlike the spherically symmetric case [1][2][3], this extension is not based on (quasi-)local quantities, but rather on the asymptotic mass and angular momentum of the Kerr class of space-times. As long as we have no access to local measurements on black hole space-times, this limitation should not be too constraining. In order to test the capabilities of the so extended HQM, one needs a specific (workable) quantum model of rotating black holes. For this purpose, we have considered the harmonic model for corpuscular black holes [10], which is simple enough to allow for analytic investigations. Working in this framework, we have been able to design specific configurations of harmonic black holes with angular momentum and confirm that they are indeed black holes according to the HQM. Some other, somewhat unexpected, results also appeared. For instance, whereas it is reasonable that the probability of realizing the inner horizon be smaller than the analogous probability for the outer horizon, it is intriguing that the former can indeed be negligible for cases when the latter is close to one. It is similarly intriguing that (macroscopic) extremal configurations do not seem very easy to achieve with harmonic states. The results presented in this work are overall suggestive of interesting future developments and demand considering more realistic models for self-gravitating sources and black holes. For example, it would be quite natural to apply the HQM to regular configurations of the kinds reviewed in Refs. [23][24][25].
2022-12-01T15:40:12.717Z
2017-05-01T00:00:00.000
{ "year": 2017, "sha1": "91065a07f66c16dfe5c5525358aa031bdcf83afd", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-017-4882-x.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "91065a07f66c16dfe5c5525358aa031bdcf83afd", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
134249234
pes2o/s2orc
v3-fos-license
Assessment of shallow groundwater on the bank of the ISTN lake through lakebank filtration based on aquifer properties, pH, total dissolved solids (TDS), and microbiological analysis A lakebank filtration assessment was carried out on the shallow groundwater surrounding the ISTN lake to evaluate of the shallow groundwater resources in the area. The objective of this research is to describe the shallow groundwater characteristics based on aquifer properties, pH, TDS and microbiological analysis. This research was conducted by making boreholes and observation holes at the bank of the ISTN area for 3 points in a single line perpendicular to the Lakebank together with 3 points in a single line perpendicular to the canalbank for doing the experiments and taking samples for pH, TDS, and microbiology analysis. Based on aquifer properties using boring and pumping test results, the aquifer layer with a thikness around 4 m show the normal storage coeficients between 0.00026 and 0.0316. From the pH, TDS, and microbiological analysis for sampling taken from boring 2.1, 2.2., and 2.3 with the distance around 10, 20, 30 m from the lake boundary were found in range of fresh water with zero patogent microbial population but the pH of some samples was lower than the pH of drinking water requirement in which that should be improved by using simple treatment before consumption. Introduction Lakes are a surface water resource that has a critical function for human life and other living things. Utilization of lake water to support human life, if not accompanied by wise action in its management, will cause damage to the water resources [1]. Lakes have a variety of purposes, such as water reservoirs, as a catchment area, flood control, water availability, a place to keep fish and as a place of recreation [2]. A lake is a body of water over the surface of the soil that is formed naturally and artificially, whose water comes from groundwater and surface water, is relatively small in size, and belongs to the open and dynamic freshwater ecosystem as a potential form of the protected area. The function of the lake can be viewed ecologically based on the water system of the surrounding area, and the water catchment area. Under certain conditions, a lake can recharge the water into shallow groundwater, be a power plant, be a deep groundwater basin and sea intrusion defender, be a raw water source, provide irrigation, flood control, and economic functions, and, play a role as recreation, fishery, etc. [3]. As a water reservoir, it has a specific capacity and can change due to natural activity and human activity. A decrease in the quality of water will decrease the usefulness, productivity, and capacity of water resources, which will ultimately reduce the wealth of natural resources. To maintain water quality in its natural condition, it is necessary to manage and control water pollution wisely. For example, river and in-situ pollution can result from high sediment contents derived from erosion, agriculture, mining, construction, land clearing and other activities, organic waste from humans, animals, and plants including the rate of increase of chemical compounds derived from industrial activities that dispose of their waste to the lake waters. These are the impacts of rising human populations, poverty and industrialization [1]. Institut National Science and Technology has a land area of about 12 ha, located on Jl. Moch Kahfi II, Srengseng Sawah, Jagakarsa, South Jakarta City, Special Capital Region of Jakarta. 
Most of the campus area of the National Institute of Science and Technology (ISTN) consists of open green spaces, so it is very well used as a watershed conservation area, such as an artificial reservoir. The campus of ISTN Jakarta serves as a catchment area in South Jakarta, accommodating rainwater and an abundance of wastewater from the surrounding area, and as a source of biodiversity. Injecting waste or other materials into it can lead to a decrease in water quality, even beyond the limits of water's ability to recover naturally, resulting in water pollution. Decreased water quality will affect the life of the water organisms in it so that it can lower the productivity of these waters [4]. Therefore, the focus of this research paper is the utilization of ISTN situ pillar to be used as a source of clean water, and lakebank filtration a tool used to filter the water. Although 70% of the earth is water, only a small amount of water is fresh water. The main freshwater source is surface water such as rivers, lakes, and sites available for abstraction. Water is a resource that is replenished through the process of deposition in the hydrologic cycle, yet water scarcity has become an increasingly alarming issue in recent years due to the acceleration of world population growth that demands large amounts of water for food consumption and production. In this case, the lack of water impacts the lack of cheap supplies, clean water, and drinking water. The world's population is expected to reach 8.3 billion, which is expected to increase energy and food global demand by 50% and increase demand for fresh drinking water by 30%; resources are already in short supply for one-third of the world population [5]. Water scarcity is further exacerbated by anthropogenic water pollution; an estimated 3,575 million deaths each year are caused by water-related diseases. A drastic increase in exploration of groundwater has also led to the depletion of groundwater in many places such as Asia and Africa due to groundwater aboard the ability to recharge [6]. Thus, the method of lakebank filtration (LBF) is expected to overcome this. LBF, a variant of conventional groundwater retrieval methods, draws water from a shallow aquifer recharged with the help of nearby lake water that creeps through the lake before mixing with shallow aquifer water. LBF is a cheap and sustainable yet efficient process of natural water treatment in mass producing large amounts of treated water. LBF filters out contaminants by natural attenuation processes such as filtration, absorption, acid-base reactions, oxidation, reduction, hydrolysis, biochemical reactions [7]. Dash et al. [8] also found that LBF is efficient in removing turbidity, bacterial coliform bacteria and over 70% of organic substances. LBF water treatment is generally found to be low in contamination compared to standard river water. Compared to pumping directly from river water, LBF is more effective for improving water quality by reducing color, coliform bacteria, UV-absorbance, and organic contaminants as well as halving water treatment time. However, to adequately remove cyanobacteria, the well distances from the lakebank should be sufficient to allow adequate time for biodegradation on account of less proportion of lake water uptake in LBF water mixtures, and thus, a balance of cyanobacteria removal and river absorption is required. 
As the LBF process depends on natural geographic conditions, there is a need to know the circumstances and settings that are suitable for optimizing LBF applications in different places. Based on the implementation of LBF and RBF by making boreholes perpendicular to the lake and river, it was found that a distance of 20 meters from the bank provides the best water quality from bacteria contamination and the water can be considered as A classification [5]. The objective of the research was to evaluate the water quality from the experiment boreholes to be utilized as a source of clean drinking water. Materials and methods This study examines the use of lake bank filtration (LBF), which will be applied on the ISTN lake and the ISTN front channel. The research area is located in the Campus Area of the National Institute of Science and Technology Jagakarsa, South Jakarta. The area of ISTN is the largest water reservoir in South Jakarta, which also accommodates rainwater, so the area of South Jakarta is rarely flooded. The location of the site is shown in Fig. 1 Research methodology The research method was done by preparing 6 main holes and 6 observation holes (3 boreholes and 3 observation holes at the lake and 3 boreholes and 3 observation holes at the canal) to define the geological layers of location sites and conduct the pumping test experiments to characterize the aquifer layers. Fig. 1 shows the borehole locations for both sites. Fig. 2 shows the breakdown of each borehole into a main hole and observation hole for both sites. For water quality evaluation, the water samples were taken from all six main holes and all six observation holes. At both the lake and canal sites, the first borehole was 10 meters from the bank, the second borehole was 20 meters from the bank, and the third borehole was 30 meters from the bank. Each borehole is in a straight line from the bank. Water samples were taken from each well after the borehole water was drained up to three times of the pipe volume for reducing the error and contamination of water samples. The well was drilled to a depth of more than 9 meters from the ground surface because bacteria Escherichia coli are not present at a depth of more than 9 meters. Cleaning as much as three times the volume of water in the wellbore was carried out each time before sampling to remove any impurities to obtain a representative sample of groundwater. The samples were bottled and sent to the cooling room in the laboratory to store them before being analyzed, except for microbiology analysis samples which were directly sent to the laboratory for analysis. Physicochemical properties were tested using a parameter (pH and TDS) probe during sampling (Hanna HI98130 pH, EC, TDS, Temp). Analysis of water samples was performed using conductivity analysis for TDS and bacterial (E. coli) for the microbiologic test, and the samples were immediately sent to Environmental Laboratory of Indonesia University (UI). To provide information on aquifer characteristics demand for shallow groundwater development, the pumping tests were carried out in different locations. Parameters of the aquifer such as drawdown and recovery transmissivity, specific capacity, aquifer thickness, and storage coefficient derived from pumping tests were evaluated and are described in this paper. This evaluation will provide the necessary hydrogeological information, which will provide an understanding of the aquifer potential so that the potential shallow groundwater zones for development can be located. 
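As a small illustration of the sampling protocol described above (purging three casing volumes of standing water before collecting a sample), the sketch below computes the purge volume for a well of given pipe diameter and water-column height. The 4-inch main-hole diameter is taken from the construction details reported later in the paper; the 6 m water column is a hypothetical value.

```python
import math

def purge_volume_liters(pipe_diameter_inch, water_column_m, n_volumes=3):
    """Volume of water to remove before sampling: n_volumes times the
    standing water volume in the well casing (illustrative values only)."""
    radius_m = (pipe_diameter_inch * 0.0254) / 2.0
    casing_volume_m3 = math.pi * radius_m ** 2 * water_column_m
    return n_volumes * casing_volume_m3 * 1000.0  # m^3 -> litres

# Hypothetical example: 4-inch main hole with a 6 m standing water column
print(f"purge volume ~= {purge_volume_liters(4, 6.0):.1f} L")
```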
The pumping test method is usually preferred for shallow groundwater development and for determining aquifer characteristics. In this study, pumping tests were carried out at three open wells in the study area, and the essential aquifer parameters, such as hydraulic conductivity and transmissivity (T), were calculated using the Cooper-Jacob and Chow methods [9]. The storage coefficient, optimum yield, recovery time, and aquifer thickness were calculated from the drawdown and recovery measurement data, well dimensions, water level, and discharge. Transmissivity provides an understanding of the shallow groundwater potential for shallow groundwater development. Results and discussion Twelve points were drilled in this study: three main holes and three observation holes around the ISTN lake, and three main holes and three observation holes at the canal site. 4-inch diameter PVC pipes were used at the main holes, and 2-inch diameter PVC pipes were used at the observation holes. The observation holes were located 1 meter from the main hole. The boreholes at both the lake and canal sites were located 10 meters, 20 meters, and 30 meters from the bank. The drilling depth was planned to be around 20 meters, or until the drill reached the sand layer. Lithologies of ISTN canal bank After the drilling process was completed, the lithologies of each main borehole can be seen in Fig. 3. (a) Transmissivity and Hydraulic conductivity Transmissivity and hydraulic conductivity are among the most critical hydrogeological data needed for managing groundwater resources. Transmissivity describes the general ability of an aquifer to transmit water over the entire saturated thickness, while hydraulic conductivity measures this ability per unit area. The hydrogeological conditions of the area were evaluated based on the pumping test results. Information about transmissivity and hydraulic conductivity (K) in the aquifer of the present study area is presented in Table 1. Storativity usually varies directly with aquifer thickness and depends on grain size, shape, and distribution of pores, compaction of the stratum, and time of discharge. In the study area, the results show that the storage coefficient varied from 0.00026 to 0.0316, within the normal range of 0.01 to 0.35 [10]. Transmissivity is defined as the rate at which water of a specific prevailing kinematic viscosity is transmitted through a unit width of the aquifer under a unit hydraulic gradient [9]. The transmissivity of a soil or rock also depends on a variety of physical factors, including porosity, particle size, and the distribution and arrangement of particles. Microbiology, pH and Total Dissolved Solids (TDS) Analysis Based on the microbiological (Escherichia coli) and TDS analysis of samples taken from borings 2.1, 2.2, and 2.3, located about 10 meters, 20 meters, and 30 meters from the lake boundary, the water had TDS in the freshwater range with a zero pathogenic microbial population. The samples fall under the Class A category. Based on the Escherichia coli test, the water can be consumed without treatment. These results are consistent with the investigations of shallow groundwater potential through riverbank filtration for water resources development [5].
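The Cooper-Jacob straight-line method cited above fits drawdown against the logarithm of time; its standard relations give T = 2.3Q/(4πΔs), with Δs the drawdown change per log cycle, and S = 2.25·T·t0/r² from the zero-drawdown time intercept t0. The sketch below applies these textbook formulas to a hypothetical drawdown record (the site's actual test data are not reproduced); with these placeholder numbers the storage coefficient lands within the 0.00026–0.0316 range reported for the study area.

```python
import numpy as np

def cooper_jacob(times_min, drawdowns_m, Q_m3_per_day, r_m):
    """Cooper-Jacob straight-line analysis of a constant-rate pumping test.
    Fits drawdown against log10(time) and returns transmissivity T and
    storage coefficient S (valid once u = r^2 S / (4 T t) is small)."""
    logt = np.log10(times_min)
    slope, intercept = np.polyfit(logt, drawdowns_m, 1)   # drawdown per log cycle
    T = 2.3 * Q_m3_per_day / (4.0 * np.pi * slope)        # m^2/day
    t0_min = 10 ** (-intercept / slope)                   # zero-drawdown intercept
    S = 2.25 * T * (t0_min / (24 * 60)) / r_m ** 2
    return T, S

# Hypothetical drawdown record from an observation hole 1 m from the pumped well
t = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)      # minutes
s = np.array([0.10, 0.16, 0.24, 0.30, 0.36, 0.44, 0.50])   # metres
T, S = cooper_jacob(t, s, Q_m3_per_day=20.0, r_m=1.0)
print(f"T ~= {T:.1f} m^2/day, S ~= {S:.4f}")
```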
The TDS results show that the water moves from the lake toward the surrounding bank, as indicated by the increase in TDS from borehole 2.1 to 2.3 and from borehole 1.3 to 1.1, respectively (Table 2); the TDS values obtained fall within the freshwater range based on the water quality standard of the Indonesian Ministry of Health [11]. Conclusions Well boring is an effective method in LBF studies for delineating the aquifer layer. Based on the TDS analysis, the general movement of shallow groundwater is from the ISTN lake toward the surrounding bank area. The results show that, in terms of microbiology and TDS, the shallow groundwater can be classified as acceptable, except that the pH of some samples should be improved by simple treatment before consumption. The financial support for this study was provided by the Ministry of Research, Technology and Higher Education of Indonesia (Kemenristekdikti).
2019-04-27T13:13:25.255Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "03991d2d74ced83791f780ad977e696126f2f091", "oa_license": "CCBY", "oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2019/25/matecconf_icancee2019_04011.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "a63b5edb343e370eca9979611fddde5dbf7acb7e", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
208641880
pes2o/s2orc
v3-fos-license
Modeling other minds: Bayesian inference explains human choices in group decision-making A Bayesian model suggests that when interacting with a group, humans simulate the “mind of the group” to choose an action. This PDF file includes: Supplementary Text Fig. S1. Distribution and change in belief parameters over multiple rounds. Fig. S2. Data generated by the POMDP model compared to experimental data. Supplementary Text Distribution of Belief Parameters in Each Round. Our statistical analysis showed that despite large prior values, the actions of others during the game played an important role in determining the policy of our POMDP model (and the policies of subjects). Figure S1a shows the average α t across all games and subjects in each round. More importantly, Figure S1b shows the distribution of the difference between α t and its initial value, i.e., (|α t −α 1 |), for each round. Similarly, Figures S1c and S1d demonstrate the evolution of β t over multiple rounds. As shown in these figures, the belief state parameters change quite drastically and this change increases as the game continues. Using the POMDP as a Generative Model To further investigate the ability of the POMDP framework to model our experimental data in the Volunteer's Dilemma task, we performed a posterior predictive check by using the POMDP as a generative model of data, i.e., actions were sampled from Beta(α 1 , β 1 ) and their probability changed according to the dynamics of the POMDP model (see equation 10 in Methods). Specifically, for each game, we sampled a θ = θ 1 from the initial belief state of the subject, i.e. Beta(α 1 , β 1 ), as the real initial state of the environment. In each round, contributions of others were generated based on the binomial distribution in equation 1 using the sampled θ of that round. The next θ were calculated based on α, β, and actions of that round as well as the decay rate, exactly as the POMDP model. In the resulting synthesized data, the general patterns of both success rate and contribution probability (with z obtained from the actual experimental data) for the fitted subjects matched the experimental data of the subjects closely ( Figures S2a-S2f). This result was robust to randomization -the same pattern was observed when the data was synthesized multiple times. Comparison with I-POMDP Model. Our framework models the effect of the subject's actions on others by increasing the average contribution rate of the group by each contribution. To model higher levels of theory of mind, one can utilize an interactive-POMDP (I-POMDP) which assumes that the subject responds to N − 1 policies generated by N − 1 POMDPs, each modeling another individual. Each POMDP models the game with a separate set of α 1 , β 1 , and γ parameters. The subject however does not know the parameters of the others' models (here α 1 , β 1 and γ). We tested a version of the I-POMDP model where the subject uses their own set of parameters for all members of the group (similar to our original POMDP model). We found that our original POMDP, where the subject reasoned directly about the parameters of the group state, outperformed this I-POMDP model which had a fitting accuracy 73% with SD = .12 (two-tailed paired t-test, t(28) = 4.91, p = 3.53 × 10 − 5, 95% CI difference =[0.06, 0.14]). The better performance of our original POMDP over the I-POMDP model could be at least partly due to the computer algorithm used to mimic human players. 
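For readers who want to reproduce the flavour of the posterior-predictive check described above, the sketch below simulates one game: a "true" group contribution rate θ is drawn from the initial belief Beta(α 1 , β 1 ), the others' contributions each round are binomial draws, and the belief counts are decayed and then updated with the observed actions. The decay-and-count update shown is an assumed stand-in for the paper's Eq. 10, and all parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_game(alpha1, beta1, decay, n_others=4, n_rounds=15):
    """Posterior-predictive style simulation of one public-goods game.
    The Beta(alpha, beta) belief over the group's contribution rate is
    updated each round from the observed contributions, with a decay
    (forgetting) factor applied to the old counts. The exact update in
    the paper is Eq. 10; the form below is an assumed stand-in."""
    alpha, beta = alpha1, beta1
    theta = rng.beta(alpha, beta)          # sampled "true" group state
    history = []
    for t in range(n_rounds):
        k = rng.binomial(n_others, theta)  # others' contributions this round
        # assumed update: decay old evidence, then add this round's counts
        alpha = decay * alpha + k
        beta = decay * beta + (n_others - k)
        theta = alpha / (alpha + beta)     # belief mean drives the next round
        history.append((t + 1, k, theta))
    return history

for rnd, k, theta in simulate_game(alpha1=6.0, beta1=3.0, decay=0.9)[:5]:
    print(f"round {rnd}: contributions={k}, belief mean theta={theta:.3f}")
```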
To examine this potential issue, give that the later rounds are potentially more affected by the dynamics of the game, we compared the difference in fitting accuracy between the original POMDP model and the I-POMDP model for the first 7 fitted rounds of the game versus the last 7 rounds (the first round excluded). The difference in the fits for the first and last 7 rounds was not significant (two-tailed paired t-test, t(28) = −0.58, p = 0.56, 95% CI difference =[−0.38, 0.21]). POMDP Model Capturing the Dynamics of Actions We also investigated games where all group members are optimal agents to see if our POMDP model is capable of capturing the dynamics of actions by optimal agents. We created a dataset where each subject (simulated by POMDP) played with 4 POMDP agents in each of 12 games. The parameter sets of these 4 POMDPs were drawn from parameter sets fit to experimental data. In other words, this dataset captured subjects playing with other human subjects. We compared the predicted success by the (simulated) subject to actual success in the game, similar to what we did with the experimental data. The average accuracy of this prediction was 66% (SD = .07). This accuracy was very robust across multiple runs of generated datasets. Figures S2g and S2h compare the actual success and the predicted success for the subject, similar to Figures 5e and 5d. Figure S2i shows that this match between the generated data and the model exists round by round. (a) A subject's contribution probability in each round (on average) when the actions are generated based on the hidden state of the POMDP model (synthesized data, white circles) compared to experimental data (black circles, same data as in figure 2c). (b) Same data as (a) but comparing synthesized versus experimental contribution probability for each k. (c) Same data as (a) and (b) but with the data points binned based on round of the game. (d) Comparison of probability of group success in each round (on average) for the synthesized POMDP data compared to experimental data (black circles, same data as in figure 2d). (e) Same data as (d) but comparing synthesized versus experimental for each k. (f) Same data as (d) and (e) but with the data points binned based on round of the game. (g) Average probability of success for each subject in a generated data set where the actions come from 4 random POMDPs whose parameters were fit to experimental data from 4 random subjects (black circles) compared to the subject-fitted POMDP model's prediction of the success (blue circles) (h) Same data as (g) but comparing the generated data with the subject-fitted POMDP model's prediction for each k. (i) Same data as (g) and (h) but with the data points binned based on round of the game.
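The model comparisons above rest on two-tailed paired t-tests over per-subject fitting accuracies. The snippet below shows that computation on synthetic accuracy vectors with the same group size (29 subjects, hence t(28)); the numbers are placeholders, not the experimental values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-subject fitting accuracies (29 subjects, as implied by t(28))
n_subjects = 29
acc_pomdp = rng.normal(0.81, 0.08, n_subjects)    # original POMDP
acc_ipomdp = rng.normal(0.73, 0.12, n_subjects)   # I-POMDP variant

# Two-tailed paired t-test on the per-subject differences
t_stat, p_val = stats.ttest_rel(acc_pomdp, acc_ipomdp)

# 95% CI of the mean difference, as reported alongside the test
diff = acc_pomdp - acc_ipomdp
ci = stats.t.interval(0.95, n_subjects - 1, loc=diff.mean(), scale=stats.sem(diff))
print(f"t({n_subjects - 1}) = {t_stat:.2f}, p = {p_val:.2g}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```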
2019-12-05T09:25:40.639Z
2019-11-01T00:00:00.000
{ "year": 2019, "sha1": "2379bb3a6024e748585e789ef0bdffac3efdc10b", "oa_license": "CCBYNC", "oa_url": "https://www.science.org/doi/pdf/10.1126/sciadv.aax8783?download=true", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "541ea59c9ca26d2cf44d7ae05e44acd9dcfba7f4", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
5633333
pes2o/s2orc
v3-fos-license
Lack of Any Relationship Between Circulating Autoantibodies and Interleukin–6 Levels in Egyptian Patients Infected with the Hepatitis C Virus Introduction: Elevated serum interleukin (IL) 6 has been reported in patients infected with the hepatitis C virus (HCV), but it remains debatable whether this influences the production of autoantibodies and the biochemical profile of HCV disease. Therefore, this current study was conducted to evaluate the relationship between IL-6 and circulating autoantibody levels in HCV positive patients. Methods: Levels of IL-6 in serum samples from 102 patients with HCV and 103 normal controls were determined by enzyme linked immunosorbent assay (ELISA). Autoantibodies were detected by immunofluorescence. Results: Levels of IL-6 were significantly higher (p=0.028) in patients infected with (HCV) compared with normal group. Autoantibodies were noted in in 43.1% of the patients; of these, 23.5% featured anti-nuclear antibodies (ANA+), 16.7% anti-smooth muscle antibodies (ASMA+), 7.8% anti-mitochondrial antibodies (AMA+), 17.6% anti-parietal cell antibodies (APCA+), 7.8% anti canalicular antibodies, and 2.9% anti reticulin antibodies (ARA+). No patients were found to be positive for anti-brush border antibodies (ABBA) or anti-ribosomal antibodies. (ARiA). No links with IL-6 levels were apparent. Conclusions: IL-6 levels are increased in patients infected with HCV disease and could influence the production of autoantibodies. However, this study did not provide evidence of a specific relationship between IL6 and circulating autoantibodies in such cases. Introduction Chronic infection with hepatitis C virus (HCV) is a life-threatening disease that causes progressive liver damage and different autoimmune manifestations (Bonkovsky et al., 2001;Kim et al., 2012). Autoantibodies are characterized by the loss of tolerance against self-antigen and activation of auto reactive lymphocyte and pathological damage of single or multiple organs (Dammacco et al., 2000). As a Secondary event Autoantibodies can be measured in different liver diseases associated with etiological factors as drugs and chemical induced autoimmunity, viral and microbial infection induced. (Christopher et al., 2012) In particular, IL-6 is a multifunctional potent, pleiotropic inflammatory cytokine that promotes the survival of plasma cells that secrete immunoglobulin or pathological autoantibodies. It is involved in the regulation of different cellular processes, including proliferation and differentiation and plays a functional essential role in acute phase response and in the control of the equation between pro-inflammatory and anti-inflammatory pathways (Shihara et al., 2000;Chihara et al., 2011). High circulating levels of IL-6 have been reported in Lack of Any Relationship Between Circulating Autoantibodies and Interleukin-6 Levels in Egyptian Patients Infected with the Hepatitis C Virus Mohamed Y Nasr 1 , Ammar S Ali Deeb 1 *, Gamal Badra 2 , Ibrahim H El Sayed 1 many clinical studies (Inflammatory, neoplastic diseases) and especially in several liver diseases (Martinez et al., 1993;Soresi M et al., 2006;Giannitrapani et al., 2011Giannitrapani et al., , 2013. Many specific and systemic autoantibodies are usually found in the serum of an infected patient with viral chronic hepatitis. 
Antinuclear Antibodies (ANA) are seen mostly in patients with Chronic systemic autoimmune disease as systemic lupus erythematosus (SLE), rheumatoid arthritis, and Sjogren's syndrome, also it may be detected in the serum of HCV infected patients, The propagation of (ANA) in HCV patients ranges between 6% and 22%, and they are usually found in the patient's serum at a low titer (Eva et al., 2005). Anti reticulin antibody (ARA) is seen in Crohn's disease, dermatitis herpetiformis, celiac disease and in low prevalence in chronic hepatitis C viruses (Eva et al, 2005 Anti-brush border antibodies (ABBA) is detectable in thyroiditis, scleroderma, and also were detected in chronic hepatitis c virus (Ezaki et al., 1992). HCV infection may lead also to the production of anti-parietal cell antibody (APCA) (Cassani et al., 1997). Every antibody is directed against a specific intracellular antigen emitted during (Apoptosis) cell death and presented to the immune system. Their pathogenic function and clinical significance still unclear (Eva et al., 2005;Campisi et al., 2016). There are only a limited number of studies had examined the relationship between circulating autoantibody and IL-6 levels in HCV patients. In this study, we aimed to evaluate the relationship of IL-6 and different circulating autoantibodies (ANA, AMA, ASMA, ARA, ABBA, Anticanalicular and Anti-ribosomal) in untreated Hepatitis C virus patients. Material and Methods One hundred and two consecutive Egyptian individuals; 74 males and 28 females aged from 19-69 years; with clinically and laboratory confirmed chronic HCV was included in the present study, other causes of chronic liver disease were ruled out. Patients were from Oncology Hospital, Shebein El-kom, Minufiya Governorate, Minufiya University, Egypt. One hundred and three unrelated healthy blood donors served as normal controls (donors are living in the same geographical area). Patients' medical history, complete blood count, liver and renal function tests include Serum (albumin, AST, ALT, bilirubin, craetinine, Thyroid-Stimulating Hormone (TSH), and Alpha-fetoprotein (AFP)). The study was previously approved by the Ethical Committee of The Institute of Genetic Engineering and Biotechnology Research, Written informed consent was obtained from all patients. Detection and differentiation of Circulating Autoantibodies by Indirect Fluorescence ANA detection was performed by Indirect Fluorescence (IIF) using HEp-2 cells (ANAFLUOR DiaSorin Kits Immunofluorescence assay; DiaSorin, Stillwater, Minnesota, USA). The cells were fixed on a microscope slide. IIF was performed according to the protocol suggested by the manufacturer. In brief, serum samples diluted 1:80 were incubated with the HEp-2 cell substrate for 30 minutes at room temperature. After washing with PBS-Tween, the slides were incubated for another 30minutes with goat anti-human IgG conjugated with fluorescein isothiocyanate and propidium iodide for counterstaining (ANAFLURO) to label precisely bound antibodies. After a second washing step and embedding, the slides were examined under a fluorescence microscope (Leica DM3000). The result of ANA test was considered positive when an apple-green fluorescence stain in the nuclei of the Hep-2 cells was observed. Circulated autoantibodies were detected using Fluro kits Immunofluorescence assay (DiaSorin, Stillwater, Minnesota, USA). 
Serum samples were diluted in 1:20 in phosphate buffered saline, and then applied to the tissue section: rat kidney, rat stomach, and a rat liver cryostat section which fixed on the microscope slides. The test was performed according to the protocol suggested by the manufacturer as described previously in ANA. The result was considered positive when an apple-green fluorescence stains was observed in specific tissue organelles as a following at AMA, when present stain the cytoplasm of the kidney distal tubules with a coarse granular fluorescence. ASMA will stain the muscularis mucosa and the muscularis externa of the stomach tissue as well as the muscle layer of the arterioles that may be present in any of the tissue sections. ARA will stain peritubular fibers, Bowman's capsule, vascular endothelium and perivascular fibers. ABBA will stain the internal feather edge of kidney proximal tubules. APCA stains only the stomach's gastric parietal cells and the parietal cell cytoplasm of the stomach tissue. Antiribosomal antibody, if present, stains the gastric chief cell cytoplasm. Anticanalicular antibody, stains the bile canaliculi which exist as minute channels between cells of the hepatic laminae branching laterally between cells. A Measurement of Serum IL-6 by Enzyme-linked Immunosorbent Assay (ELISA) Total concentrations of IL-6 in the serum samples were measured using a commercial ELISA kit (R and D System, Inc., Minneapolis, MN), according to the manufacturer's instructions. The intensity of the developed color was measured by reading optical absorbance at 450 nm using a microplate reader (SunriseTM, Tecan Group Ltd. Ma¨ nnedorf, Switzerland)) Results were expressed as pictogram of cytokine per milliliter plasma (pg/ml). Talaat et al., 2015. Statistical analysis Data was fed into the computer and analyzed using IBM SPSS software package version 20.0. Qualitative data were described using numbers and percent. Quantitative data were described using range (minimum and maximum), mean, standard deviation and median. A significance of the obtained results was judged at the 5% level. Comparisons between both groups were performed by Chi-square test For categorical variables to compare between different groups, Student t-test For normally quantitative variables to compare between two studied groups, Mann Whitney test For abnormally quantitative variables to compare between two studied groups . Table 1. Demographic, Biochemical Characteristics, Αutoantibodies and IL-6 Level of All Subject Patients' characteristics Qualitative data were described using number and percent and was compared using Chi square test; while normally quantitative data was expressed in mean ± SD and was compared using student t-test; abnormally distributed data was expressed in median (Min. -Max.) and was compared using Mann Whitney test; *, Statistically significant at p ≤ 0.05 Relationship between Circulated Autoantibody and Interleukins -6 Levels in HCV in 3 (2.9%). APCA was positive in 18 (17.6%), APCA in dilution 1: 20 was detected in 11 (10.8%), in dilution 1:40 in 6 (5.9%) patients and in dilution 1:80 was detected in one patient (1.0%). ARA in 3 (2.9%), one patient (1.0%) in dilution 1:20 and two patients (2.0%) were detected in dilution 1:40. Anti canalicular Ab was found in 8 (7.8%), in dilution 1:20 was positive in 6 (5.9%) and two patients (2.0%) were positive in dilution 1:40. No patients were found to be positive for ABBA and anti-ribosomal antibodies in any of the 102 serum tested. 
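Before turning to the remaining results, a small illustration of how the ELISA readout described in the Methods above is converted to concentrations: the sketch below reads sample OD450 values off a standard curve by log-linear interpolation. The standard concentrations and optical densities are hypothetical; commercial kits typically prescribe a four-parameter logistic fit rather than simple interpolation.

```python
import numpy as np

# Hypothetical IL-6 standard series (pg/ml) and blank-corrected OD450 readings
std_conc = np.array([3.13, 6.25, 12.5, 25.0, 50.0, 100.0, 200.0, 300.0])
std_od = np.array([0.06, 0.11, 0.20, 0.35, 0.60, 0.95, 1.40, 1.65])

def od_to_pg_ml(od_values):
    """Read sample concentrations off the standard curve by interpolating
    OD against log-concentration (kits often prescribe a 4-parameter
    logistic fit instead; this is a simplified stand-in)."""
    log_conc = np.interp(od_values, std_od, np.log10(std_conc))
    return 10.0 ** log_conc

sample_od = np.array([0.25, 0.70, 1.10])   # hypothetical patient wells
print(np.round(od_to_pg_ml(sample_od), 1))  # estimated pg/ml
```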
In the control group only ANA was positive in 5 (4.9%) individuals in dilution 1:80, the rest of autoantibodies were negative. patients group there were 74 (72.5%) male and 28 (27.5%) female patients and in the control group there was 55 (53.4%) male and female 48 (46.6%). The median age of HCV patients was 48.0 years (range: 23.0 -69.0 years) and 26 (range: 19.0-54.0).HCV patients had significantly higher in Creatinine, AST, ALT, and AFP than in control cases (P<0.01) ,No significant difference was found in the Hb and TSH. Detection of circulating autoantibodies ANA was the most frequent autoantibody detected in the HCV patients, it was found in 24 (23.5%), ANA in dilution 1:80 was detected in 18 (17.6%) patients and ANA in dilution 1:160 in 6 (5.9%) patients. ASMA was found in 17(16.7%), ASMA in dilution 1:20 was detected in 13 (12.7%), ASMA in dilution 1:40 in 3 (2.9%) patients, and ASMA in dilution 1:80 was positive in one patients (1.0%), the AMA was found in 8 (7.8%). AMA in dilution 1:20 was detected in 5 (4.9%) and AMA 1: Relation between serum levels of IL-6 with different circulated autoantibodies: As shown in Table 2, No correlation was observed between serum levels of IL-6 and different circulated autoantibodies (ANA, AMA, ASMA, ARA, APCA and Anticanalicular Ab). In spite of, serum level of IL-6 in infected male was higher than infected female. Discussion In this study, we aimed to evaluate the relationship between IL-6 and circulating autoantibodies in infected HCV patients, our study showed a highly increased in serological markers of autoimmunity among the patients infected with HCV. In other similar studies, these autoantibodies have been found to distinguish HCVinfected patients according to their clinical statuses. (Valentini et al., 1999: Zusinaite et al., 2005. (HCV) has been related to many autoimmune dis¬eases and can stimulate the production of non-organ specif¬ic autoantibodies (Sousa et al., 2011). These phenomena occur through molec¬ular mimicry, induction of Toll-like receptor hypersensitivity or by creating immortal B and T cells (Barzilai et al., 2007). Autoantibodies were detected in 43.1% of the patients; of these, 23.5% positive ANA, 7.8 % positive (AMA+), 16.7 % positive ASMA, 2.9% positive ARA, 17.6% positive APCA and 7.8 % positive Anti canalicular Ab which is similar to other studies (Clifford et al., 1995;Joanna et al., 2011;Rev et al., 2013;Deng et al., 2014). on the other hand no patients were found to be positive for ABBA and anti ribosomal antibodies in any of the 102 serum tested. The study display a relation between the ages of HCV infected patients and the presence of different autoantibodies. This could be connected with the aggravation of the mechanisms protecting against autoimmune reactions and the longer duration of HCV infection. The contribution to worse outcomes of viral hepatitis in the elderly may be associated with several physiological changes (Carrion et al., 2012). Autoantibodies in HCV infected group were more prevalent in female and in older patients (Table1). The increased female tendency for autoimmune disease may relate to estrogenic effects that modulate the autoreactive response directly by affecting the pro-and anti-inflammatory cytokine pathways of lymphocyte differentiation (Markle et al, 2014). Interleukin-6, a multifunctional cytokine produced by a variety of cells, plays a central role in regulating the immune system, acute phase and hematopoiesis (Zekri et al., 2005). 
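The group comparisons reported in Table 2 and in the text (IL-6 versus autoantibody status, and male versus female) are non-parametric, since IL-6 is not normally distributed. A minimal sketch of such a comparison with the Mann-Whitney test is given below; the IL-6 values are synthetic, while the group sizes mirror the reported cohort (about 44 autoantibody-positive of 102 patients, and 74 males versus 28 females).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical serum IL-6 values (pg/ml), skewed, hence compared non-parametrically
il6_ab_pos = rng.lognormal(1.6, 0.7, 44)    # autoantibody-positive patients
il6_ab_neg = rng.lognormal(1.6, 0.7, 58)    # autoantibody-negative patients
u1, p1 = stats.mannwhitneyu(il6_ab_pos, il6_ab_neg, alternative="two-sided")
print(f"IL-6, autoantibody-positive vs -negative: U={u1:.0f}, p={p1:.3f}")

# Same comparison by sex (74 male vs 28 female), where the paper reports p = 0.006
il6_male = rng.lognormal(1.7, 0.7, 74)
il6_female = rng.lognormal(1.4, 0.7, 28)
u2, p2 = stats.mannwhitneyu(il6_male, il6_female, alternative="two-sided")
print(f"IL-6, male vs female: U={u2:.0f}, p={p2:.3f}")
```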
The results regarding the investigated Serum IL-6 levels showed that were higher in patients with chronic HCV infection in comparison to healthy adults, which is in accordance with that reported by Fallahi et al 2012;Comanescu et al., 2015. Serum IL-6 levels related with viral load and histological index . On the other hand, lower levels of IL-6 associated with sustained virologic response, at most, in men (Ueyama et al., 2011). In our group of patients, we found significant differences between serum levels of IL-6 in male versus female (p=0.006) in contrast of Comanescu et al,. 2015. Although previous studies have suggested pathogenic roles for raising levels of IL6 and different autoantibodies in autoimmune disease (Ripley et al., 2005), but no correlation was found between IL6 and different autoantibodies, TSH or white blood cell (TLC) levels in HCV patients. When levels of IL6 were compared between HCV patients and healthy group, levels of IL6 were clearly higher. Furthermore, high levels of different circulating autoantibodies in HCV patients were found. As a result, this study suggests for the first time a link between raised IL6 levels and different autoantibodies in HCV patients.. In conclusion, we investigated the relationship between IL6 and different circulated autoantibodies levels with infected HCV patients. We found no correlation between IL6 and autoantibodies in HCV. However, on further analysis, this apparent correlation was explained by the relationship between IL6 levels and autoantibodies levels in different autoimmune disease, which is not specific to HCV. Further studies are required, to understand more completely the mechanisms which IL6 might influence the development of autoantibodies production in HCV, both at the cellular and the molecular level.
2017-09-29T06:41:30.751Z
2016-11-01T00:00:00.000
{ "year": 2016, "sha1": "5f7bc777b03ed2c442db6c682a4b33f66187786a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "5f7bc777b03ed2c442db6c682a4b33f66187786a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
152801584
pes2o/s2orc
v3-fos-license
A new space-time model for volatility clustering in the financial market A new space-time model for interacting agents on the financial market is presented. It is a combination of the Curie-Weiss model and a space-time model introduced by J\"arpe 2005. Properties of the model are derived with focus on the critical temperature and magnetization. It turns out that the Hamiltonian is a sufficient statistic for the temperature parameter and thus statistical inference about this parameter can be performed. Thus e.g. statements about how far the current financial situation is from a financial crisis can be made, and financial trading stability be monitored for detection of malicious risk indicating signals. Introduction The foundation of the Ising model became an very important event in the modern physics. It was the basic tool explaining critical temperatures for which phase transitions occur in physical systems (see Domb et al [1]). It is one of the most studied models with wide range of applications in different sciences. Weidlich used Ising model to explain the polarization phenomena in sociology. It also has been adapted in economics to explain the diffusion of technical innovations. New technologies were stated as the result of the interaction with neighboring firms. Into general equilibrium economics Ising model was introduced by Föllmer [2]. There are several sources of influence on the price of the stock. One important component is real firm data; another one, correlation between amount of buyers and sellers. The action of each trader (he is a buyer or seller) was taken as the value of the trader. Kaizoji [5] introduced an Ising-type model of speculative activity, which explain bubbles and crashes in stock market. He introduced the market-maker, who adjusts the price on the market in dependence of correlation of the buyers and sellers. After Curie had discovered the critical temperature, Weiss developed a theory of ferromagnetism based on a spin system. It appears by replacement of the nearest-neighbor pairs interacting of the Ising model by assumption that each spin variable interacts with each other spin variable at any site of the lattice with exactly the same strength. Some space-time models have been suggested. One model based on the Ising model, was suggested by Järpe [4]. The partition function of the model contains two Hamiltonians, one of which describes previous moment of the time. This model may be used for describing volatility clustering on the market. The purpose of this paper is to develop a new space-time model, which is also describes the volatility clustering without assumptions about the structure of the lattice. Today information about trading on the markets is available, e.g. in Internet, to everyone. Since there is less space restrictions for this reason it motivates creating a space-time model baced on the Curie-Weiss model. Such a model is formally defined in this paper, and some results about critical temperature of the market is derived according to the distribution of the Hamiltonian in this model. In Section 2 the model and methods which are used are described. In Section 3 all main results are presented. The implications are discussed in Section 4. Model and Methods The model which is introduced in this paper is based on two models. The first is the Curie-Weiss model, which is a simple modification of the Ising model. It allows all agents in the system to interact with each other with a constant strength. 
The second model is the spatio-temporal model of Järpe [4] which possesses both spatial and time dependence. The state of a site in a lattice is depending on the states of its nearest neighbors and on the global degree of clustering of the previous pattern. We took from the Curie-Weiss model the idea of the global interaction and from the model of Järpe the structure of the partition function and obtained a new space-time model which is appropriate for describing the volatility clustering on the market. Let us consider a market which contains N traders symbolically denoted by i = 1, 2, . . . , N. In this simple model every trader in a time-period can buy a fixed amount of stock or sell the same amount. In the first case the "Trader's decision" is X i = 1, in the second case X i = −1. Then X = (X 1 , X 2 , . . . , X N ) represents the investment attitude of the market. All traders are neighbors. That means that each trader knows about the "Trader's decisions" of all others traders, so his decision is under influence of the others. A configuration of the model is a specification of "Trader's decisions" of all traders of the market. With each configuration x = {x i : i = 1, 2, . . . , N} a Hamiltonian or interaction energy, x i x j is connected. We will consider this Hamltonian without investment environment. Let p(x k ) = P (X = x k ) be the probability of observing the state x k where (x 1 , x 2 , . . . , x 2 N ) is an enumeration of the distinct states of X. Obviously we have that 0 ≤ p(x k ) ≤ 1 and 2 N k=1 p(x k ) = 1. Further we assume that all states x are possible (i.e 0 < p(x k ) < 1). Now, wanting to minimize the entropy we have an optimization problem of minimizing with respect to measure P . Suppose that the energy of each configuration x i has been determined. The probability, P , that the system has configuration x with energy H if the configuration at the privious moment of time is given is: ) is a partition function and β is the market temperature describing the strenght of interaction between the traders. Sufficient statistic The statistic H(X t ) is minimal sufficient for inference about the temperature paramter conditional on the previous state, H(X t−1 ). The Critical Temperature Of The New Time-Space Model The behavior of the system in the Curie-Weiss model is described by the equation where m(x) = 1 N i x i represents magnetization of the configuration. This equation allows us to obtain the property of the temperature of the market and the critical value of the temperature. We will use the method of Hartmann and Weigt [3] to obtain the equation of the behavior of new model. Theorem 1 The equation of the behavior of the system for the new model is When N → ∞ the partition function has the form We know the function f and the form of the f ′′ is deduced from Since the Hamiltonian H is sufficient for the temperature parameter, we are interested in obtaining the distribution of the Hamiltonian. Testing for dependence will make a null hypothisis assuming independence, and thus we first consider the distribution of H assuming β = 0. We are interested in analysis of dependence between X i and X j for i, j = 1, . . . , N. Theorem 3 Let the variables {X i : i = 1, . . . , N} be independent of each other and take their values in {−1, 1} with equal probability 1 2 . Then all nonidentical pairwise products are independent, i.e. X i X j ⊥X k X l if i = j, k, l or j = i, k, l for any dimension N. In this paper, we consider a model where a and b are decisions of the traders. 
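A minimal sketch of the model's building blocks is given below: the Hamiltonian H(x) = Σ_{i<j} x_i x_j, computed through the identity (Σ_i x_i)² = N + 2H, together with Gibbs-type weights exp(βH(x)) over all configurations. The paper's full space-time model also conditions on the Hamiltonian of the previous time step; that extra factor, and the sign convention in the exponent, are simplifying assumptions here.

```python
import numpy as np
from itertools import product

def hamiltonian(x):
    """H(x) = sum_{i<j} x_i x_j for a configuration of +/-1 trader decisions.
    Since (sum_i x_i)^2 = N + 2*H, this equals (M_tot^2 - N) / 2."""
    x = np.asarray(x)
    return (x.sum() ** 2 - len(x)) / 2.0

def gibbs_probabilities(N, beta):
    """Gibbs weights proportional to exp(beta * H(x)) over all 2^N configurations.
    (The paper's space-time model also conditions on the previous Hamiltonian;
    that factor is omitted in this static sketch.)"""
    configs = np.array(list(product([-1, 1], repeat=N)))
    energies = np.array([hamiltonian(c) for c in configs])
    weights = np.exp(beta * energies)
    return configs, energies, weights / weights.sum()

configs, H, p = gibbs_probabilities(N=6, beta=0.1)
m = configs.mean(axis=1)   # magnetization m(x) of each configuration
print("E[H] =", float((p * H).sum()), " E[m] =", float((p * m).sum()))
```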
For every trader the probability of their decision to 'buy' or 'sell' is . . , 0 which is to say that M 2 could be in N 2 + 1 different states. If N is odd, then 0 is not a value of M and in the case when N is even. If N is odd we get Hamiltonian distribution with dependent traders Let us now consider the case when the decisions of the traders are not independent. Time dependent process From now on we consider the space-time process X = {X t : t ∈ Z} where X t = {X i,t : i = 1, . . . , N}. Then we have a corresponding sequence of Mean fields, and of Hamiltonians, Results Theorem 6 The sequence of Mean fields, {M t }, is a Markov chain with transition probabilities Theorem 7 The sequence of Hamiltonians, {H t }, is a Markov chain with transition probabilities Theorem 8 The conditional expectation of the Hamiltonian is and the conditional variance is Asymptotics Theorem 9 In case with independent trander we have for large N P i<j Theorem 10 For large N the conditional expectation of the Hamiltonian is ) and the varinace is Stationarity The process {X t } is a time-homogeneous Markov chain because for all states x and x ′ and time-points t. Theorem 11 The process {X t } is time-reversible and has stationary distribution Exact calculations Example 1 We obtained the exact distribution of the statistic i<j X i,t X j,t . Now let us calculate this distribution in the case with 10 traders. We wrote a program in the R language of programming which calculates the probability of all possible configurations of the system containing N traders, by determining the value of the Hamiltonian for each configuration and build the matrix Vec Figure 1 we can see the distribution of H for N = 10. In the Table 1 are the exact numerical values of the energy distribution. Hypothesis test of independence If there is only weak interaction between the traders, then the Hamiltonian is more likely to attain smaller values. If the interactions are stronger, then larger values of the Hamiltonian is more likely and one talks about magnetization, which could be an indicator of, or even a cause of, bubbles and crashes on the market. Therefore methods to state wheter the values of the observed Hamiltonian is in some dangerous region is of vital importance to decision makers and inderictly to the whole society. In order to see if the value of the Hamiltonian deviates from zero to such an extent that dangerous development is indicated, the correct distribution is needed. If the Hamiltonian approaches the critical value, the system may be in a dangerously instable state and a bubble or a crash on the market can appear. Assume that we make n observations x 1 , x 2 , . . . , x n of the system. Then the corresponding Hamiltonian values h 1 , h 2 , . . . , h n are calculated and categorized into K classes. If the number of traders are 10 (as in the previous example), then the we may have 6 classes corresponding to the 6 possible states of the Hamiltonian. But of course we may define these classes in any way we choose. Strong interactions as opposed to near independence is reflected by the hypotheses To have argument for dependence, i.e. to prove H 1 , the statistic where O k is the number of observations in klass k, and E k is the expected number of observations in class k according to the distribution of S under H 0 which is χ 2 k−1 . Thus the null hypothesis is rejected at level α of significance for values of S greater than C which is 1 − α percentile of the χ 2 distribution with k − 1 degrees of freedom. 
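The exact null distribution of the Hamiltonian described above (computed in the paper with an R program that enumerates all configurations) can be obtained directly from binomial counts, since H depends on a configuration only through the total magnetization. The sketch below reproduces the six possible values of H for N = 10 independent traders; the probabilities can be compared with the values reported in Table 1.

```python
from math import comb
from collections import defaultdict

def hamiltonian_distribution(N):
    """Exact distribution of H = sum_{i<j} X_i X_j when the N trader
    decisions are i.i.d. +/-1 with probability 1/2 each (the beta = 0,
    independence case used as the null hypothesis)."""
    dist = defaultdict(float)
    for k in range(N + 1):            # k traders choose +1
        m_tot = 2 * k - N             # sum of decisions
        h = (m_tot ** 2 - N) // 2     # H from the identity sum^2 = N + 2H
        dist[h] += comb(N, k) / 2 ** N
    return dict(sorted(dist.items()))

for h, p in hamiltonian_distribution(10).items():
    print(f"H = {h:4d}   P = {p:.6f}")
```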
Example 2 Let us consider the data from the Swedish steel market for the October 22, 2008. We have the traders and buyers with certain moments of trading. To analyze these data we will collect information about trading by dividing the time on parts of each about 10 minutes. We than have the ten most active traders (AVA, CSB, DBL, ENS, EVL, MSI, NDS, NON, SHB, SWB) and twenty intervals of activity. Let us consider the Hamiltonian for these traders. The result is in Table 2 This means that we can not reject the hypothesis of independence on level 5% of significance (or any lower level). As a matter of fact the p-value of S in this case is 0.15175 so dependence can not be proved on any reasonable level of significance. Discussion In this thesis we investigated a new space-time model for interacting agents in the financial market. First we reviewed the history of the Ising model and some other Ising-type models. Then the Ising model, Curie-Weiss model and some modifications of these models were formally presented. Also we considered one way of finding the critical temperature of the market. A new space-time model was developed and necessary and sufficient conditions for its stationarity were found. The non-linear sensitivity of market global properties in terms of temperature parameter changes was investigated. The critical temperature for this model was analytically derived. The distribution of the Hamiltonian was analyzed using its dependence with the magnetization of the market and the exact distribution was calculated. The conditional expectation and variance of the Hamiltonian were found and the stationary distribution was obtained. Then the exact distribution of the Hamiltonian for 10 traders was calculated, and the expected distribution was confirmed. Hypothesis test for independence between agents was considered for the Swedish steel market and it showed that there is no evident critical situation on the market at the time of this dataset. The parameter reflects how strongly traders are influenced by each other in the market. It can signalize risk for a crash or a bubble in the market. Therefore it is very important that its analysis is accurate in such situations. What remains for future work? We could try a lot of real data and compare inferential results relying on this model to observable quantities generally accepted as a measure of health of the situation on the market. Also we can consider a bigger ammount of traders to have a more exact p-value in the hypothesis testing. Then we can estimate the amount of interaction for our model using e.g. maximum likelihood estimator. Here exists also the possibility to develop hypothesis testing based on a time dependent model. Also interesting to find out how good this model is to explain volatility clustering.
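Example 2 above applies the goodness-of-fit statistic S = Σ_k (O_k − E_k)²/E_k with K − 1 degrees of freedom to the observed Hamiltonian values. The sketch below carries out the same test using the exact null distribution for N = 10 and a hypothetical vector of observed class counts (the Swedish steel data themselves are not reproduced); in practice, classes with very small expected counts would normally be merged before relying on the chi-square approximation.

```python
import numpy as np
from math import comb
from scipy.stats import chi2

N, n_obs = 10, 20                        # 10 traders, 20 ten-minute intervals

# Null (independence) distribution of H for N = 10: six possible values
null_p = {}
for k in range(N + 1):
    h = ((2 * k - N) ** 2 - N) // 2
    null_p[h] = null_p.get(h, 0.0) + comb(N, k) / 2 ** N
h_values = sorted(null_p)

# Hypothetical observed counts of the Hamiltonian classes over the 20 intervals
observed = np.array([4, 9, 5, 1, 1, 0], float)
expected = np.array([null_p[h] for h in h_values]) * n_obs

S = ((observed - expected) ** 2 / expected).sum()
p_value = chi2.sf(S, df=len(h_values) - 1)
print(f"S = {S:.3f}, p = {p_value:.4f}")   # reject independence if p < alpha
```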
2010-02-03T07:32:21.000Z
2010-02-03T00:00:00.000
{ "year": 2010, "sha1": "a01229d8a98dbaf5478ee9fc0ee77da2744cdd62", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "a01229d8a98dbaf5478ee9fc0ee77da2744cdd62", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Economics", "Computer Science" ] }
211680045
pes2o/s2orc
v3-fos-license
Controlling of an Under-Actuated Quadrotor UAV Equipped With a Manipulator Unmanned aerial vehicles (UAV) equipped with a manipulator offer an additional flexibility and smart way to grasp the desired objects from inaccessible locations where the access of ground vehicles (GV) are not possible. In this research, we design an adaptive control based regulation, pole-placement and tracking (RST) control scheme for controlling the nonlinear behavior of an under-actuated quad rotor aerial vehicle. The overall performance of the system dealt by MIT rules. The aerial vehicle is equipped with a camera and a gripper that helps us to locate the wanted object from inaccessible location. The model of quad rotor UAV has six degrees of freedom (6-DOF) and the equipped gripper is about (2-DOF). For a successful flight operation UAV requires a reliable controller to stabilize the aerodynamic effects, disturbances that produced by the gripper. Due to aforementioned issue, design an adaptive RST controller, to control the dynamic behavior of the highly nonlinear complex system. Moreover, the effectiveness of the designed controller proven by applying computer-based simulation and it will verify experimentally. Lastly, it observes that the designed controller shows better robustness and good steady state performance to accomplish the given task. I. INTRODUCTION The use of unmanned aerial vehicles (UAVs) has become more popular in the last few years. They are capable of doing all the major stuff for example; surveillance, military applications, and professional video photography. Transporting cargo from one place to another and even transporting passengers. Due to its flying ability, it can easily cover large distance quickly with perfect maneuvering [1]- [4]. The advancement of this technology encourage to execute the grasping of an object using the electronic vision based gripping (EVBG) mechanism which is mounted on the base of UAV [5] and [6]. The additional capability of this technology to grasp the desired object by utilizing the ''eye in hand'' technology in which camera is equipped on the center of the clutch (gripper), that could increase the gasping efficiency [7] and [8]. The associate editor coordinating the review of this manuscript and approving it for publication was Zheng Chen . Mobile manipulation, which is highly vigorous field of study, which mainly focus on unmanned vehicles that provide better stability during the clutching of object. In reference [9], [10], researchers implement the algorithm to grasp the wanted object by using a gripper with 1-DOF and 2-DOF. In past, many researchers implement the different methods due to valuable results rather than complex grasping techniques. Earlier, many researchers have implemented the idea of aerial gripping. For example, a group of European universities has been involved in one appropriate project called aerial robotics cooperation assembly system (ARCAS) [11]- [14]. Moreover, many researchers have presented mechanical design, modeling and construction of a quadrotor with a fixed mechanical gripper, having capability of gripping lightweight objects [11]. The modeling of UAV based on the Newton Euler classical approach in order to illustrate the dynamic and kinematic behavior of the aerial vehicle. The model contains large equations because of the classical approach of helicopter theory that contain many non-linear sine and cosine functions. 
Some authors have already presented different control approaches to reduce the effects of non-linarites and to achieve the better stability. In [15], a simplified control theory proposed using an inverse kinematic algorithm for the motion, position and yaw angle control of the vehicle and it is equipped with a gripper. Earlier, different researchers that could only increase the complexity in the mathematical model already proposed another example of these types of systems with more number of actuators. By using more than four rotors rather than tri-rotor and quadrotor, UAVs it will directly affect the duration of flight timings. In this research, that is the main reason to use the quadrotor UAV, it may help to save the battery and increase the flight duration [16]- [18]. A multi rotor multifunction aerial vehicle is constructed on a frame of a quad rotor along with dual multiple DOF manipulators to perform aerial manipulating operation [19]. However, a combined control algorithm designed to control the dynamic model of UAV along with its equipped gripper [20]. In reference [21], hybrid adaptive control scheme was developed to stabilize the dynamic behavior of a single DOF manipulator equipped on a quadrotor UAV. The hybrid adaptive controller is a combination of gain scheduling and Lyapunov based model reference adaptive control (MRAC). A novel adaptive back stepping and sliding mode controller (SMC) was designed for quadrotor aerial vehicle with a 2-DOF robotic arm [22]. In reference [23] dynamic model of hex-rotor UAV equipped with an integrated manipulator was formulated by using Newton Euler's approach. Previously, a camera was installed at the end effector of the manipulator also called ''eye in hand camera'' in which the camera position is totally depends on the movement of the manipulator because of its fixed structure [24]. On the other hand, we use gimbal camera, which is best suited for the mapping as well as detecting and grasping of object. In result to this, the camera angle varies with the motion of manipulator as well as the movement of aerial vehicle. Moreover, the performance of an under-actuated aerial vehicle equipped with a gripper verify by computer-based simulation as well as experimentally. The state of the art work is recently, in [25], [26] a novel adaptive robust control based techniques designed for the trajectory tracking of cable driven robots by using time delay in the controller. Firstly, the system lumped dynamics are estimated by using time delay estimation and afterwards provide the model free configuration. Moreover, nonlinear-based innovative adaptive laws designed to enhance the control performance of the entire system. Secondly, the designed control architecture based on three constraints; time delay estimation, added system dynamics and adaptive laws. Furthermore, the chattering free and constant gains of the adaptive controller utilized to modify the performance under time varying instabilities. In reference [27], [28], a new robust control based control strategy designed for driving the cable driven gripper under disturbance and lumped uncertainty. To attain the precise and quick control in the dynamic performance of the robot, the authors proposed a switching component in the control hierarchy. In addition the designed method divided in to three sub modules termed as time delay estimation, modified super twisting approach and fractional order nonsingular terminal sliding mode controller to detain the dynamic error in the system. 
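Adaptive laws of the kind surveyed above are gradient-based, and the MIT rule used in this paper's MRAC design (see the contributions listed below) adjusts a controller parameter along the negative gradient of the squared tracking error, dθ/dt = −γ e ∂e/∂θ. The snippet below is the textbook single-gain example, adapting a feedforward gain so a first-order plant tracks a reference model; it is not the paper's combined RST/MRAC scheme, and the plant gain, adaptation rate, and command signal are all illustrative.

```python
import numpy as np

# Textbook MIT-rule example: adapt a feedforward gain theta so that the plant
# y' = -y + k*u tracks the reference model ym' = -ym + k0*uc (both first order).
dt, T_end = 0.001, 50.0
t = np.arange(0.0, T_end, dt)
uc = np.sign(np.sin(0.5 * t))               # square-wave command signal

k, k0, gamma = 2.0, 1.0, 0.5                # unknown plant gain, model gain, adaptation rate
y, ym, theta = 0.0, 0.0, 0.0
theta_hist = np.empty_like(t)

for i in range(len(t)):
    u = theta * uc[i]                       # adjustable feedforward control law
    y += dt * (-y + k * u)                  # plant update (forward Euler)
    ym += dt * (-ym + k0 * uc[i])           # reference model update
    e = y - ym                              # tracking error
    theta += dt * (-gamma * e * ym)         # MIT rule, with de/dtheta proportional to ym
    theta_hist[i] = theta

print(f"theta after {T_end:.0f} s ~= {theta_hist[-1]:.3f} (ideal value k0/k = {k0 / k:.2f})")
```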
The newly improved strategy was verified by computer-based simulations and experimentally, and the authors report a 20 percent improvement when applying the designed algorithm to the reference path. The major contributions of this research are: (i) a novel control scheme combining a classical RST controller with MRAC through the MIT rule; (ii) grasping of a desired object from an inaccessible location using a rotorcraft UAV equipped with a camera and gripper; (iii) a fully customized quadrotor UAV with an installed developer kit that is well suited to the grasping task and can carry payloads of approximately 0.5 to 5 kg; (iv) derivation of the gripper manipulation parameters with the Denavit-Hartenberg approach; (v) an adaptive controller capable of stabilizing the overall dynamic and kinematic behavior of the UAV, with the adaptive gains fine-tuned by the RST controller. The rest of the manuscript is structured as follows. Section II presents the modeling of the quadrotor UAV together with the center of mass and moment of inertia distribution of the complete system, followed by the modeling of the aerial manipulator in Section III. Section IV presents the combined dynamics of the aerial vehicle with the gripper, and Section V discusses the dual control model and its structure. The simulation results are discussed in Section VI. Section VII describes the complete hardware configuration of the UAV, covering the flight controller, wireless transmission and control, purpose-built propulsion system, global positioning system and guidance system. Section VIII provides the experimental results, and Section IX concludes the article. II. MODELING OF QUADROTOR UAV AND CENTER OF MASS, MOMENT OF INERTIA DISTRIBUTION The main objective of the mathematical model is to simulate the combined dynamic and kinematic model of the quadrotor and the gripper mounted on it. To keep the complexity of the mathematical model manageable, several aerodynamic effects (ground effect, blade flapping, etc.) are neglected. The gripper dynamics are developed through the recursive Newton-Euler technique. The translational and rotational motions of the UAV can be modeled separately via Newton-Euler equations, and the two models are then coupled to obtain a complete model of the proposed UAV with its gripper [29]. The rotors produce the aerodynamic effects, i.e., the generated torque ''τ'' and thrust ''T'', both proportional to the square of the rotor speed δ_a; the rotational speed is in turn proportional to the applied voltage of the rotors, δ_a ∝ U [V]. When the gripper grasps a payload or object, the stability of the UAV is directly affected by the grasped payload and the inertia present in the system. The electronic speed controllers (ESCs) then govern the power of all motors so as to stabilize the flight. The fundamental dynamics of the ESC can be written as a transfer function, taken from [16] and [30], where k_m and t_m are the propulsion-system constants, δ_a is the rotational velocity and f_q(U) collects the control inputs in the fixed body frame of the UAV. The thrust induced by the actuators is equivalent to the momentum imparted to the air passing through the rotor disc per unit time.
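As an illustration of the thrust, torque and ESC relations just described (this is not code from the paper; the true coefficient values come from [16] and [30] and are not reproduced here, so all numerical constants below are placeholders), the following sketch computes the total thrust and body moments from four rotor speeds and advances a first-order ESC lag.

```python
import numpy as np

# Placeholder coefficients: per-rotor thrust T_i = c_T * w_i**2 and drag torque
# tau_i = c_Q * w_i**2, with arm length l_arm; values are illustrative only.
c_T = 8.5e-6    # thrust coefficient [N s^2 / rad^2] (assumed)
c_Q = 1.4e-7    # drag/torque coefficient [N m s^2 / rad^2] (assumed)
l_arm = 0.35    # arm length [m] (assumed)

def rotor_forces(w):
    """w: four rotor speeds [rad/s] -> (total thrust, roll, pitch, yaw moments),
    assuming a '+' configuration with alternating rotor spin directions."""
    w = np.asarray(w, dtype=float)
    T = c_T * w**2                       # individual rotor thrusts
    total_thrust = T.sum()
    roll = l_arm * (T[3] - T[1])
    pitch = l_arm * (T[2] - T[0])
    yaw = c_Q * (w[0]**2 - w[1]**2 + w[2]**2 - w[3]**2)
    return total_thrust, roll, pitch, yaw

def esc_first_order(w, w_cmd, dt, tau_m=0.05):
    """First-order lag approximation of the ESC/motor dynamics:
    tau_m * dw/dt = w_cmd - w, integrated with a forward Euler step."""
    return w + dt * (w_cmd - w) / tau_m
```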
The complex non-linear functions i.e., thrust and the torque depends on vertical and horizontal speed, air density and wind, which are the aerodynamic conditions of the quadrotor UAV. The total thrust of the rotorcraft depends on the sum of all forces. The torque depends on the speed and thrust of each rotor. where ''T t '' defined as total thrust, C T a is denoted as the thrust of aerodynamic coefficient thrust and δ a is the angular velocity of the system. The fixed body frame forces denoted by f B . The UAV forces acting on the Earth's co-ordinate system rewritten as: The relationship between the air resistance and the velocity of the quad-rotor can be: In equation (6) the drag coefficients and velocities are C dx , C dy , C dz ; V x , V y , V z and D x , D y , D z along with Earth fixed frame respectively: where the length of each arm of the UAV denoted by ''l'', ''m ψ '' is the yaw moment, which produced by the difference of the velocity, and ''d'' is the coefficient of yaw moment. The quadrotor UAV is an under-actuated, multivariable and highly nonlinear in nature due to its complex aerodynamics. It has six degree of freedom (6-DOF) along with four actuators and the number of actuators is less than the DOF, which is why categorized in under-actuated systems. The direction of the rotor 1, 3 and rotor 2, 4 are same and fixed parallel to each other, which shown in figure 1 [21]. It is obvious, that UAV equipped with a gripper, the center of mass of body C MB will change as the link of gripper moves and their turning effect means torque becomes a non-linear function of the gripper link angles that rewritten as [31]. To control the altitude and attitude of quad rotor UAV, equation (1) to (7) utilize to control the dynamic behavior of UAV. By considering the generated torque and thrust force of the system which are entirely dependent on the velocities of its actuators. The sum of all torques does not depend on the developed thrust and velocities of each rotor. The two major components of torque generated by the drag of propeller and due to the movement of propeller from the C MB . The varying center of mass of body C MB easily calculated by summing all the body part distances from the center of quad rotor structure [32] and [33]. The torque τ q (U , q j A ) of the quadrotor thrust system is a nonlinear function and C MB (q j A ) is the centroid, due to change the position of gripper. The joint angle of the centroid shifts, the distance between each propeller and the centroid O i C MB (q j A ), which may varies and written as [21], where i denotes each propeller which is due to the configuration, placed in a manner that each propeller closes an 45 0 angle between its closest coordinate system axis. The total torque applied on the quadrotor UAV becomes [21]; By considering the ideal structure of quadrotor, the resulting torque is zero out the first in the sum of all rotors, The total moment of inertia changes but still it is controllable in two different steps; by changing the moment of inertia of each part of body from its own principle axis to the body origin coordinate system R 0 I A,i G . Whereas R 0 is a 3 × 3 transformation matrix of rotation part [21]. The following rotational matrix defines the relationship between the ground co-ordinate system and fixed body co-ordinate system as (13), shown at the bottom of this page. 
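Since the rotation matrix (13) is only referenced in this extract, the standard construction it corresponds to, a ZYX (yaw-pitch-roll) Euler-angle rotation from the body frame to the Earth-fixed frame, can be sketched as follows; this is an illustrative form and not necessarily the exact angle convention of the paper.

```python
import numpy as np

def rotation_body_to_earth(phi, theta, psi):
    """ZYX Euler-angle rotation matrix mapping body-frame vectors to the
    Earth-fixed frame (phi = roll, theta = pitch, psi = yaw)."""
    c, s = np.cos, np.sin
    Rx = np.array([[1, 0, 0],
                   [0, c(phi), -s(phi)],
                   [0, s(phi),  c(phi)]])
    Ry = np.array([[ c(theta), 0, s(theta)],
                   [0, 1, 0],
                   [-s(theta), 0, c(theta)]])
    Rz = np.array([[c(psi), -s(psi), 0],
                   [s(psi),  c(psi), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx
```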
Consequently, the adjustment of coordinate system it is possible to apply the parallel axis theorem in order to make an appropriate expression for system moment inertia [34]. where ''C'' is a vector distance from the body part center of mass to the center of structure, ⊗ represent the outer product of the two vectors andĪ again denotes 3 ×3 identity matrix. By combining it all together to get the equation for the moment of inertia variations with respect to gripper joint angle changes and taken from [32] and [34]; III. MATHEMATICAL MODEL OF GRIPPER The proposed 2-DOF gripper equipped in a quadrotor UAV shown in figure 1. Table 1, shows the Newton-Euler based recursive method by using the constraints of Denavit-Hartenberg (DH) for advancing the kinematics of manipulator. The movement of gripper modeled by using chain rule that taken from [21]. The connection between the body frame of UAV and primary joint of the gripper statically fixed with a constant rotation. The DH constraints for the gripper is defined in table 1, where θ, a and α are the typical DH convention q 1 i and q 2 i are the combined variables of gripper arm, where i = [A]. The DH constraints that could be used to link between arm and UAV and the dynamic equations of each link are consequent. However, the whole technique applied for the gripper specific link movement, where the first joint of arm is L 1 and second joint L 2 . The specific links of the gripper are steady in weight, size and shape i.e. their kinematic constraint ''a'', weight or mass ''m'' and inertia of tensor I G which is identical. By considering the links of the gripper, it would balance with respect to the body frame of UAV and selected coordinate system [21] and [29]. The respective inertia of tensor express as: When the gripper starts moving, the inertia of tensor changed conferring to the linked angle changes. The matrix used for the transformation is essential for calculating the overall inertia and positively conceived by using DH constraints. By adjusting the configuration of gripper link is defined in table 1, it shows the vibrant transformation for each links are T i i−1 , whereas i ∈ 1, 2 due to 2-DOF [21]. The moment of inertia and the end effector (link or joint) with a constant angle, have their own transformation matrices for the link [21], [29] and [32]; By using recursive Newton-Euler method, neglecting friction forces, it can be drive the overall torque and forces that is created from all the joints and states that is; gravitational force and ∂γ A = [γ A ,γ A ], represents dual link angles, rotational velocity and acceleration [21] and [29]. IV. QUADROTOR UAV WITH GRIPPER DYNAMICS The mathematical model of overall system yields by joining the dynamic equations of the motion of an aerial vehicle with the attached gripper. The left-hand side equation consists of gripper dynamics and quadrotor propulsion system. The right-hand side equation is a vector form of rigid body. The concluded equation is in non-linear form where, m is the total mass of the body, I is a 3×3 identity matrix, I (q j A ) is a non-linear term. The total movement of inertia,v andω are the rotational velocities of complete system respectively. Formerly, many researchers work in this field and but they ignored the dynamics of gripper, they only focused the load of gripper attached with UAV and its stability concern matters. 
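The parallel-axis shift and the Denavit-Hartenberg chain used above for the gripper can be illustrated with the short sketch below; the link lengths are placeholder values rather than the entries of Table 1, and the zero joint offsets assume planar revolute joints, so this is an illustration of the construction rather than the paper's exact parameterization.

```python
import numpy as np

def parallel_axis(I_com, m, c):
    """Shift an inertia tensor from a part's own centre of mass to the vehicle
    origin: I = I_com + m * ((c . c) * Identity - outer(c, c))."""
    c = np.asarray(c, dtype=float)
    return np.asarray(I_com, dtype=float) + m * (np.dot(c, c) * np.eye(3) - np.outer(c, c))

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform of a single link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def gripper_forward_kinematics(q1, q2, L1=0.10, L2=0.08):
    """Pose of the 2-DOF gripper end effector relative to its base joint;
    L1 and L2 are placeholder link lengths, not the values of Table 1."""
    return dh_transform(q1, 0.0, L1, 0.0) @ dh_transform(q2, 0.0, L2, 0.0)
```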
They used 1 DOF gripper to grasp the desired payload, instead of 1 DOF to attach 2 DOF gripper with the dynamics of UAV its little complex to solve the stability issues along with the payload. In this research, the main objective is too focused on gripper having 2 DOF along with the 6 DOF of quadrotor UAV. The dynamics of gripper introduced in the system dynamics that written in the left-hand side of equation (19 ) that focuses on the stability and may cause disturbance to the quadrotor control system. Moreover, the two fundamental effects that are analyzed directly, to the impact of quadrotor control are the change in the moment of inertia I and in total mass of body C MB . The overall mathematical model to form stability criteria for this UAV with gripper presented depending upon the result [21]. Table 2, defines the parameters of quadrotor UAV and its equipped gripper on it. V. DESIGNING OF CONTROLLER To control the position, altitude and attitude of quadrotor UAV equipped with a gripper is achieved by classical control approach regulation, pole-placement and tracking RST controller conjunction with model reference adaptive control (MRAC) shown in figure 2. Mostly, in autonomous type of systems MRAC quite commonly used hereinafter-adaptive controller deals with the unwanted uncertainties of the system. Now, the overall system illustration depending on time, U (t), U c (t), Y (t) are the input of the system, control input of the system and output of the system. Where , R/S, T /S are the RST controller and 1/S is the integrator. On one hand, the integrator in the feedback loop of the figure 2 increases the order of the system. On the other hand, it reduces the unwanted noises form the system, excludes steady state errors and improving the convergence rate of entire system. At the time when the aerial manipulator grasps the desired object the system inertia will change, due to this the entire air vehicle stability is affected. The integrator in the feedback loop of the controller helps to improve the rate of convergence by neglecting the unwanted noises and it reduces the steady state error in the system. The MRAC is responsible to deals with the uncertainties of the flight and ensure its better stability. Whereas, RST controller is capable for tuning the parameters of the system and overall stability concerned by MIT rule [36]. The complete system model including the rotorcraft and its motor dynamics taken from [30], and convert the continuous signal it in to discrete because RST controller works on it. The complete adaptive RST controller loop is shown figure 2, the overall dynamics of quadrotor and its equipped actuator belongs to the 4 th order dynamic system. By initializing the sample time is about 0.2 seconds and now the continuous transfer function taken from [21] and convert in to discrete from as per the requirement of the applied control scheme which is rewritten as; Now from the transfer function, the degree of A ac = 4, making 3rd order RST controller, hence gives deg A c = 7, deg R = 3, deg S = 3, and deg T = 3. Then, RST polynomials written as, The general RST controller equation is; where Y ac given as and Suppose A 0 = 1 then A c = A req . Moreover, the desired output response written as, The system error is calculated via MRAC based scheme, such that, Now taking the partial derivative of RST polynomials, one by one, w.r.t error. 
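Before the individual polynomial derivatives are listed, the discretisation step and the RST/MIT-rule structure described above can be illustrated with a short sketch; the plant coefficients, the adaptation gain and the sensitivity signal below are placeholders, not the values used in the paper, and only the sample time of 0.2 s is taken from the text.

```python
from scipy import signal

# Placeholder 4th-order continuous plant standing in for the combined rotor and
# attitude dynamics; the actual coefficients come from the cited references.
num = [1.0]
den = [1.0, 6.0, 11.0, 6.0, 0.0]
dt = 0.2                                     # sample time used in the paper [s]
num_d, den_d, _ = signal.cont2discrete((num, den), dt, method='zoh')

def rst_control(uc_hist, y_hist, u_hist, R, S, T):
    """One step of the RST law S(q^-1) u_k = T(q^-1) uc_k - R(q^-1) y_k.
    Histories are most-recent-first lists: uc_hist and y_hist include the
    current sample, u_hist holds past control values."""
    t_uc = sum(t * uc for t, uc in zip(T, uc_hist))
    r_y = sum(r * y for r, y in zip(R, y_hist))
    s_u = sum(s * u for s, u in zip(S[1:], u_hist))
    return (t_uc - r_y - s_u) / S[0]

def mit_rule_update(theta, e, sensitivity, gamma=0.01):
    """MIT-rule gradient step d(theta)/dt = -gamma * e * d(e)/d(theta),
    applied per sample to each adapted RST coefficient."""
    return theta - gamma * e * sensitivity * dt
```

In this arrangement the adaptation acts on the coefficients of R, S and T, while the integrator in the loop removes the steady-state error as described above.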
For the polynomial R; for the polynomial S; and finally for the polynomial T. Now, for system stability, the MIT-rule cost function for the yaw moment is expressed as follows, and the same cost function applies to the roll and pitch moments as well. Finally, substituting the RST polynomials into the above equation gives the designed hybrid controller. VI. SIMULATION RESULTS In this part of the article, the validity and effectiveness of the proposed controller are checked by simulating the model of the UAV together with the gripper while clutching the targeted object in MATLAB Simulink. The simulation results are shown in figures 3 to 11. Figure 3 shows the reference position of the UAV along (x, y, z); the scenario starts from the initial state (0, 0, 0) when the aerial gripper begins to move and reaches the desired position where the object is placed in its local frame. Figures 4 and 5 present the UAV position errors and velocities during the flight under the control algorithm; certain fluctuations appear along the (x, y, z) axes because changes in the altitude of the UAV are directly tied to the speed of its rotors. Increasing or decreasing the rotor speeds directly affects the altitude of the UAV, and the resulting desired forces in the inertial frame are shown in figure 6. The overall system torques and the adaptive tuning parameters of the Euler angles are displayed in figures 7 and 8, respectively. Lastly, the placement of the robotic arm, the manipulator errors and the manipulator control torques are shown in figures 9 to 11, which show that the end effector of the gripper converges asymptotically to the desired state. VII. HARDWARE CONFIGURATION OF COMPLETE SYSTEM The hardware configuration of our designed quadrotor UAV with its gripper is shown in figure 12. A quadrotor-type UAV is used for this experiment; our UAV is a fully customized, developer-type drone that is well suited to engineering work and, in particular, to carrying payloads from remote areas [37]. The UAV is equipped with a servomotor-based 2-DOF gripper to rotate and grab objects at the desired locations. The rotorcraft consists of four arms (propellers) distributed equidistantly from the center of mass, and the gripper is placed at the center bottom of the UAV as shown in figure 12. The rotorcraft is easy to fly and can be extended through modest programming via a software development kit (SDK). The gripper is attached to the UAV so that it can carry and transport a payload and return to the starting position. The UAV can also carry any additional actuator or device that needs to be taken into the air for transportation. It can gather information while performing difficult tasks from a bird's-eye view using the equipped Zenmuse X3 HD gimbal camera. Moreover, the rotorcraft can carry an additional battery to extend the flight time to about 45 minutes with a maximum payload of about 5 kg, which provides payload-gripping capability within the required flight interval. The flight duration depends directly on the payload, but it can be adjusted. When the UAV is carrying a payload, the controller is flexible enough to match the desired flight requirements by modifying the arm angle: this rotorcraft can tilt its arms by about 3 degrees and amplify the yaw torque for better stability.
The major specifications of our designed quadrotor UAV are as follows; A. FLIGHT CONTROLLER The brain of the system (i.e. controller) of our designed UAV is Pixhawk PX4 flight controller, which is able to handle all flight operations. This flight controller is smart enough that it alone can control all the motors, power distribution system, buzzers, switches, radio transceiver, GPS, compass, servo motors all the telemetry systems and the robotic gripper. It has a 32-bit microprocessor 256KB RAM/2 MB flash equipped with a 16-bit gyroscopic sensor, barometer and 14-bit accelerometer/magnetometer. There are some advance features of Pixhawk PX4 flight controller such as, 14 PWM outputs with integrated backup system for in flight recovery and with the latest autopilot mode and GPS. B. WIRELESS TRANSMISSION AND CONTROLLING UAV controlled by wireless remote control (WRC) having frequency range is about 2.400 to 2.483 giga-hertz (GHz). For this purpose, an integrated Lightbridge type controller used because of flexible control and connection via cell phone device. Moreover, it is a perfect tool of modern era to control the system or device over cellular communication in the air or ground. The transmission distance is about 1.7 to 4.5 km's having receiver sensitivity of −101dBm to (+) (−) 2dBm. The operating range of temperatures are −10 to 50 degree Celsius and it supports android / iPhone operating system (IOS) based cellular phone version. It has fully customized flying features using SDK of DJI app. The selected flight controller is capable for transmitting and receiving the data from the ground station with high definition (HD) cam view via X3 camera. C. PURPOSE BUILT PROPULSION SYSTEM An advanced form of DJI E800 electrical built in propulsion system that energize our aerial vehicle and remain it in flight. Four electronic speed controllers (ESC) that control the speed of powerful brushless DC motors can control the whole flight scenario. By increasing the payload and power, it ensures the flexibility to generate the system as per our desired requirement. D. GLOBAL POSITIONING SYSTEM (GPS) AND GUIDANCE It is equipped with an enhanced GPS tracking system, which can track the UAV in real time by supporting faster satellite communication. Along with this feature tracking and path, planning ability enhanced drastically. It is equipped with a guidance system having innovative optical sensing enriched with advanced core processors, integrated photosensitive cameras and ultrasonic sensors. The most innovative guidance enables high level of safety and stability for longrange flights. The guidance may also include one central core processor and five modern sensors are as follows; a. Optical-odometer: Highly precise optical odometer, which is especially built-in for the developers / engineers that measures the speed of the guided area with a precision of centimeters. It contains the measurements of the UAV movement in the bounded / unbounded environment. b. 3D Sensing: It gives complete assurance to the programmers / developers to build their own app and acquire data from the guidance, which could be able to sense the three dimensional space. It computes its compact depth imaginings in real-time with a precision of very few centimeters. c. Obstacle Sensing: The proper guidance may help us to monitor the surrounding environment continuously and it may enable to detect the obstacles in real world scenario. The developer UAV has an ability to dodge the smash even at very high velocity. 
Modern guidance systems are enriching its communication to the main flight controller automatically to avoid collision with obstacles. It monitored by the advance vision, ultrasonic sensors and with the help of other advanced sensors. Dodges or obstacles identified accurately at an extensive distance and optical sensors are able to identify objects like leaf of tree etc. d. Multi Sensor Fusion: The information of all guidance sensors merged automatically. Guidance will automatically select the appropriate sensor to achieve the better positioning performance. It means that if there is any failure of data from one side, it will not disturb the whole flight scenario. Highly Precise Vision based Positioning System: Guidance will provide optical or visual position if there are no GPS signals present at remote areas, even when UAV is flying with very high velocity and altitude about 20 meters. VIII. EXPERIMENTAL TEST BENCH This part of the manuscript, presents experimental test bench by utilizing our designed quadrotor UAV with grasping capability to detect the targeted object via gimbal-based camera equipped in it. Keep in mind that the simulation and experimental scenario is different. The aerial gripper that weighs about 250 grams, which defined in table 2. It is equipped with PIXHAWK X4 flight controller for intelligently control the inputs, outputs and visual sensing by using camera. The altitude of the UAV control manually using remote control (RC). To control the attitude of the quad-rotor UAV the reference system estimation done by the equipped controller. Moreover, the position of the UAV sensed by a highly precise sensor via GPS. The complete experimental setup includes clutching of an object using gimbal based camera and a robotic gripper. Firstly, the UAV takes off from the ground station and keep in mind that to search the targeted object via camera. Secondly, when the object seen it will decrease the altitude of UAV and the end effector is close enough to the targeted object. After that, the pilot can gives signal to the end effector of the gripper to grasp the targeted object after that the UAV goes back to the originated position and lands at there. The whole scenario completed in 30 seconds from takeoff to landing with successfully clutching the targeted object. Firstly, the UAV takeoff and hover the flight for several seconds to check the stability of flight in the direction towards the desired object at 4-meter altitude lock. After that, the UAV heading towards the targeted object via camera based sensing in next 6 seconds. Moreover, when UAV reached at the desired position it will decrease the altitude to clutch the wanted object this done in 8 to 10 seconds. Lastly, the UAV will maintain its altitude, goes back to the originated position, and landed there with the desired payload. The experimental results; are shown in figure 13, 14 and 15 respectively, whereas the gripper is able to grasp the object horizontally and vertically. This performance by adjusting the movement of gripper with respect to suitable position at that time rotorcraft is in hovering mode. In figure 13, it shows that the rotorcraft heading towards the target position, while the gripper will grasp the targeted object horizontally (gripper position). Whereas figure 13(a) and (b) shows that the rotorcraft is adjusting their minimum altitude to grip the targeted object precisely. 
In figure 13(c), the rotorcraft is in a hovering state, and in figure 13(d) the gripper can be seen clutching the targeted object easily. Figure 14 shows the rotorcraft heading toward the targeted object while the gripper grasps it vertically. Figures 14(a) and (b) show the rotorcraft adjusting its minimum altitude to clutch the targeted object precisely; in figure 14(c) the rotorcraft is hovering, and in figure 14(d) the gripper is grasping the targeted object accurately. Figure 15 shows the gripper attached to the UAV clutching the desired object; in figure 15(b) the UAV is hovering while the gripper clamps the targeted object. The real-time experimental attitude errors during the flight are shown in figure 16. IX. CONCLUSION This research presents an aerial gripper mounted on an under-actuated quadrotor UAV that grasps a desired object from a targeted position using the proposed controller together with vision-based sensing. An adaptive RST controller is designed to settle the dynamic behavior of the UAV along with the attached gripper. While grasping the desired object (i.e., during the movement of the robotic arm), fluctuations occur that affect the stability of the flight. The designed controller rapidly re-tunes the overall system and provides better stability with smaller error margins. The controller has been validated experimentally, and the simulation results also confirm its validity and effectiveness. X. FUTURE ENHANCEMENT In this research, we chose a quadrotor UAV with a 2-DOF gripper that grasps the desired object and delivers it back to the starting position. We will extend the grasping scenario by increasing the number of manipulator links, which will allow quicker grasping of the desired object. The model of the gripper attached to the UAV can be extended with the help of Denavit-Hartenberg chain parameters for the manipulators. Lastly, the proposed control algorithm is able to handle multiple manipulator links with multiple DOFs.
2020-02-20T09:05:52.515Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "e8934482869d64c53ca82c4b583602d2b0f179dc", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/8948470/09000827.pdf", "oa_status": "GOLD", "pdf_src": "IEEE", "pdf_hash": "726b44df90b500deb39583c2f97962666a585b8e", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
118539455
pes2o/s2orc
v3-fos-license
Methyne Capping in the Boron Buckyball : A Viable Possibility We report the electronic structure of methyne boron buckyballs B68(CH)12 and B72(CH)8 obtained by substituting respectively 12 and 8 boron cap atoms by methyne CH groups on the boron buckyball B80. DFT calculations and minimization techniques have been employed to characterize the structural and electronic properties of endo and exo isomers of these molecules in Th symmetry. A vibrational frequencies analysis predicts that only endo-B72(CH)8 corresponds to a true minimum on the potential energy hypersurface with a cohesive energy similar to the boron buckyball. The viable existence of this carboron buckyball opens new perspectives for a synthesis of large boron clusters. B 80 , the boron buckyball, 1 is probably the most interesting new molecule coming out of current quantum chemical studies. Actual synthesis still seems a remote possibility, but in the mean time relentless theoretical activity is going on predicting many new properties of this boron cluster. Theoretical studies revealed that the boron buckyball has a geometry which is slightly distorted from I h to T h symmetry 2,3 and the analysis of chemical bonding demonstrated a perfect match between the symmetries of the bonding orbitals of B 80 and the original buckminsterfullerene C 60 . 4 Similar to C 60 , B 80 can also condense to form a simple cubic (sc), face centred cubic (fcc) or body centred cubic (bcc) solid cluster. The fcc solid cluster appears more stable than the bcc lattice or the sc. 5,6 The electronic transport transmission in B 80 carried out by ab initio calculations is shown to be higher than in C 60 in the Fermi region. 7 A DFT study on alkali metal doped B 80 revealed that the Na 12 B 80 and K 12 B 80 molecules have high capacity to store hydrogen molecules up to 72 molecules and they are excellent candidates for hydrogen storage. 8 It is well known that for every fullerene isomer C n , there is a leapfrog fullerene C 3n that has the same idealized point group and a closed-shell electronic structure. [9][10] In the same way a boron leapfrog transformation from C n to B 3n+n clusters can be defined. In a recent communication, Yan et al. found that the leapfrog operation could be considered as a generic constructing scheme that can produce a large family of novel stable boron nanostructures. 11 The bonding analysis in B 80 shows that the capping borons centers donate their valence electrons to the πbonding in the truncated icosahedral frame. This suggests the viability of an isoelectronic substitution of the caps by fragments containing 3 valence electrons. A likely candidate is the methyne fragment CH which also substitutes for boron in a multitude of carborane cages. Hexacoordinate carbon has also been suggested by Exner and Schleyer as electron source for an aromatic boron rings. 12 Initial studies on a fully substituted boron buckyball where all 20 cap atoms were replaced by methyne did not yield stable structures, so we have focussed our study on partially substituted isomers with either 8 or 12 caps replaced by CH groups. These isomers can be realised in T h symmetry, which also corresponds to the lowest energy structure of B 80 itself. The methyne groups are oriented in the direction of the radius vector from the centre of the cluster, with the hydrogen either inside or outside of the cage. We will refer to this as endo and exo orientations respectively. Hence four B 80-x (CH) x structures (x=8,12) were examined, as indicated in Fig.1. 
These methyne boron fullerenes are not isoelectronic to B 80 but have the same frame and open a way to extend the boron buckyball cage to large derivatives boron fullerenes. A full geometry optimization was performed using Gaussian 03 Revision D02 13 and TURBOMOLE V-5-8-0 program packages. 14 The hybrid functional Becke's three-parameter (B3) incorporating the exact exchange functional in combination with Lee, Yang, and Parr's (LYP) correlation functional is used with the split-valence plus polarization SVP basis set. To examine that the optimized structures have reached the global minimum energy at the potential energy surface, we have analysed the vibration frequencies of these isomers. The The two B 72 (CH) 8 , isomers have all 8 carbons isolated by 3 pentagons and 3 hexagons (Fig.2) whereas in B 68 (CH) 12 six pairs of adjacent hexagons carry methyne caps. Of the four structures only the endo-B 72 (CH) 8 was found to be stable in T h symmetry. Its softest vibrational mode occurs at 150 cm -1 at the B3LYP/SVP level, and corresponds to the typical quadrupolar squashing mode of a vibrating sphere. 15 The three other isomers are metastable. The orientation of methyne groups, their number and localization on the buckyball cage seem to play an important role in the thermodynamic stability of the B 80-x (CH) x compounds. The calculated HOMO-LUMO gap, the cohesive energy and the number of imaginary frequencies are listed in Table 1. The only stable isomer endo-72 has the highest HOMO-LUMO gap and the cohesive energy obtained at B3LYP/SVP is close to the cohesive energy of B 80 at the same level. 18 (Table 2). In the optimized geometry of endo-72 molecule there are four different types of boron atoms present (see Fig.3). Typically the two adjacent cap atoms of type B1 approach the common edge in between them, so as to form together with the B4 atoms a four-centre bond. The B-C bond lengths are more uniform. The carbon itself shows a large pyramidal distorsion towards the inside of the cage, with a BCH angle of about 115 degrees. Table 3. Charge distribution per atom for differents atoms The distribution of charges based on a NBO calculation at B3LYP/6-31G(d) level reveals that the carbon atoms are strongly negative. The hexagonal ring with the carbon atom in the centre contains boron atoms of types B2 and B3 with alternating small negative and positive charges. a. iso value 0.119 b iso value 0.107 Fig. 4 Total density distribution The boron cap atoms transfer nearly 2 electrons in the B4-B4 bond directions. Total density maps (Fig.4) show that the density of the methyne caps is nearly cylindrical with slight localisation in the direction of the B2 atoms. a. b. c. d. On the other hand there is a clear evidence for formation of a 4centre bond between the B1 and B4 atoms. In total six such bonds can be realised in the structure with 12 boron caps. This will contribute to the stability of endo-72. The HOMO has t u symmetry and has a shape similar to the HOMO of boron buckyball ( Fig. 5.a-b). The LUMO is also of t u symmetry but corresponds to the level LUMO +2 in B 80 . (Fig 5.cd). All these orbitals have π character like the boron buckyball and the original buckministerfullerene. 2-4 The 28 t u is the HOMO and the three last orbitals in italic are virtuals. (Table 4). 
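For readers who wish to reproduce the kind of cohesive-energy comparison quoted above, a minimal sketch is given below. The composition corresponds to endo-B72(CH)8; the formula convention used here (isolated-atom reference energies, positive values for bound clusters) and any numerical energies supplied by the user are assumptions, since the paper only reports the B3LYP/SVP results summarised in its tables.

```python
n_B, n_C, n_H = 72, 8, 8       # composition of endo-B72(CH)8

def cohesive_energy_per_atom(E_cluster, E_B_atom, E_C_atom, E_H_atom):
    """(Sum of isolated-atom energies minus the cluster total energy) divided by
    the number of atoms, so that a positive value means the cluster is bound.
    All energies must be in the same units (e.g. hartree or eV)."""
    n_atoms = n_B + n_C + n_H
    E_atoms = n_B * E_B_atom + n_C * E_C_atom + n_H * E_H_atom
    return (E_atoms - E_cluster) / n_atoms
```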
In conclusion methyne substitution of boron caps in the boron buckyball leads to a " viable" carbo-boron structure in the Hoffmann -Schleyer -Schaefer sense, 19 provided the remaining boron caps can be stabilized by 4-centre bonds as in the case for B 72 (CH) 8 . There is a strong endohedral pyramidalisation of the carbon atoms. The most interesting aspect about this new structure is that the hydrogen sites provide possibilities for further chemical modifications which may constitute possible synthetic routes. We gratefully thank the Flemish Science Fund (FWO-Vlaanderen) and the K.U.Leuven Research Council for its continued financial support.
2019-04-12T19:18:59.210Z
2009-05-22T00:00:00.000
{ "year": 2009, "sha1": "33f96ec8b8b748a2084ea737805be076534a0a00", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "33f96ec8b8b748a2084ea737805be076534a0a00", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Materials Science", "Physics" ] }
236995104
pes2o/s2orc
v3-fos-license
Forest monitoring and analysis based on Earth observation data services The paper presents an overview of thematic services providing Earth observation based products for forest monitoring. The authors analyzed both global and regional (in particular - Russian) forest services including input satellite data, output thematic products and features of data access. Based on gathered information, the main advantages and limitations of existing services were highlighted. The results of performed research confirm the need to develop the system integrating data from various forest remote monitoring services for the efficient and timely analysis of forests (especially - in cross border regions). Introduction To perform a reliable analysis of forests state, areal monitoring data are required, since, due to the continuity and complex mosaic of the forest cover, point observations do not provide the required amount of information. Today, one of the most effective methods of forest vegetation areal monitoring is Earth remote sensing (ERS) from space. The main areas of using ERS for the forest cover analysis are: • obtaining new data on the analyzed objects and (or) processes (including the use of remote sensing data as the only available source of information); • obtaining additional information about objects and (or) processes in combination with other monitoring methods; • verification of results obtained by other survey methods; • using as a spatial (cartographic) basis for visualization of monitoring / analysis results. At the same time, the amount of data used can vary from individual images and (or) thematic products created on their basis obtained for specific dates and intended for a one-time analysis of objects, to time series of images and (or) thematic products for object monitoring and analyzing the dynamics of changes over a period of time. With regard to forest vegetation, existing ERS technologies allow observation in a wide spectrum of electromagnetic radiation with different spatial and temporal resolutions. Depending on the electromagnetic spectrum used for the analysis of forest vegetation, the following imaging techniques can be used: • optical imaging using the light spectrum (visible, near and middle infrared), which allows obtaining the optical characteristics of objects (their spectral brightness); • infrared imaging, which transmits the temperature characteristics of objects; • radio imaging (microwave radiometric and radar). Multi-and hyperspectral optical imaging is the most commonly used type of imaging, in which sounding is carried out simultaneously in several spectral channels. The difference is in the number of spectral channels used: from 3-5 to 12 for multispectral imaging to several hundred for hyperspectral imaging [1,2]. However, this type of imaging has a significant limitation. It depends on the state of the cloud cover, therefore, even with a high frequency of surveys of a certain area, the total number of usable images (for example, for the summer season) may turn out to be significantly lower than the number of survey cycles performed. In addition, to solve most of thematic problems (including monitoring of forest cover), optical survey must be carried out in good lighting conditions of the investigated area. Unlike optical sensing, radio imagery provides data regardless of cloudiness and illumination of the Earth's surface (which is especially important for areas with a low percentage of sunny days). 
Thermal infrared imaging does not depend on lighting conditions; however, this type of sensing is hampered by the presence of cloud cover. Therefore, the joint use of various types of data is becoming more and more popular, which makes it possible to compensate for the individual disadvantages of individual methods [3][4][5][6]. In order to obtain the required characteristics of the forest cover from original satellite images, it is necessary to perform their preliminary processing (radiometric, atmospheric, and geometric correction, etc.), as well as thematic processing. Methods of thematic processing of remote sensing data for the purpose of monitoring and analyzing forest vegetation include: • calculation of vegetation indices -combinations of values of spectral brightness coefficients in separate spectral channels (for optical data) or values of radio signal backscattering coefficients at different polarizations (for radar surveys) [7]; • various satellite images classification methods -dividing image pixels into groups (clusters) based on the similarity of spectral characteristics (with preliminary formation of training samples or without) [8][9][10]; • radar interferometry -processing of radar data time series to assess the dynamics of vegetation [11]; • radar polarimetry -processing of radar data obtained in different polarizations; • polarimetric interferometry -a combination of interferometric and polarimetric processing of radar data [12]; • texture analysis of radar images -segmentation of radar images based on their texture features [13]; • joint analysis of data obtained in different spectral channels (data fusion) [3][4][5][6]9]. The listed methods of remote sensing data processing make it possible to determine a wide range of qualitative and quantitative characteristics of forests: the biomass volume, stand density and height, crown diameter, species composition and forest age, as well as the dynamics of changes in these characteristics [14]. Based on remote sensing data, forest use is monitored, including identification of violation of forest legislation (areas of illegal logging). Space imagery is also used to monitor emergency situations, in particular, forest fires, often being the only available source of operational spatial data on the disaster area due to the impossibility or limited use of other observation methods (ground-based or unmanned aerial vehicle imagery). Despite potentially high applicability of remote sensing data in monitoring and analysis of forest vegetation, the actual use of space imagery materials for solving specific practical problems is currently very limited. This is caused, first of all, by the existing difficulties in obtaining and processing space data, namely, the lack of convenient tools for automating the ordering, receipt and use of satellite remote sensing data. Today, for the effective use of space data, a specialist in the field of forestry, environmental management or emergency situations must have a high level of knowledge and a wide range of practical skills in the area of processing and interpreting imagery from space. An effective way to bridge the current gap between the available volume of remote sensing data from space (and thematic information products built on their basis) and the end users (specialists from various fields of professional activity) is to develop tools that provide convenient access to space imagery materials and the possibility of simplifying them for solving specific practical problems. 
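As a concrete example of the simplest of the thematic processing methods listed above, a vegetation index can be computed directly from two reflectance bands, as sketched below; Sentinel-2 band 4 (red) and band 8 (near infrared) are used here purely as an illustration.

```python
import numpy as np

def ndvi(red, nir):
    """Normalised Difference Vegetation Index from red and near-infrared
    reflectance arrays, e.g. Sentinel-2 bands 4 and 8."""
    red = np.asarray(red, dtype="float64")
    nir = np.asarray(nir, dtype="float64")
    denom = nir + red
    # Guard against division by zero over water/no-data pixels.
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1.0, denom))
```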
In this regard, one of the main trends in the development of remote sensing in general is the development of methods and tools for simplified access to data, the so-called thematic remote sensing services [15]. The article provides an overview and analysis of the most popular remote sensing services aimed at monitoring and analyzing forest vegetation. The European Forest Fire Information System developed under the Copernicus program is part of the Emergency management service (CEMS) [22]. EFFIS provides access to fire hazard class maps based on such indicators as: Fire Weather Index (FWI), Initial Spread Index (ISI), Build Up Index (BUI), Fine Fuel Moisture Code (FFMC), Duff Moisture Code (DMC), Drought Code, etc. The service also allows you to receive information about current fires and burnt areas, obtained from the MODIS and VIIRS satellite data. In addition to the results of satellite monitoring, the service provides data for forecasting fire hazardous weather: temperature and rain anomaly maps, the source of which is the European Center for Medium-Range Weather Forecasts (ECMWF) [23]. Methods and Materials Forestry TEP is one of the platforms for thematic use of remote sensing data developed by the European Space Agency. This platform provides access to space imagery (Sentinel, Landsat), and also includes the following functionality: mapping vegetation changes using Sentinel-2 data; mapping vegetation types (land cover) using Sentinel-1 and Sentinel-2 data, the Random Forest algorithm and training sample; biomass assessment according to Sentinel-1 data; calculation of vegetation indices, etc. [18]. The Fire Information for Resource Management System was developed by the National Aeronautics and Space Administration (NASA) and provides access to operational and archived data on fires (hot spots) obtained from MODIS and VIIRS satellite imagery. Global Forest Change provides access to maps showing the dynamics of changes in forest areas over the years and built on the basis of data from Landsat satellites [24]. Access to data from FIRMS and Global Forest Change is also carried out on the Global Forest Watch platform, which also includes data from the forestry ministries of a number of states, results of an analysis of the negative effects of forest decline, and other data [21]. 2. "VEGA-Science" [26]. 3. Information system for remote monitoring of the Federal Forestry Agency ISDM Rosleskhoz [27]. 4. GIS "Cascade" [28]. 5. Service TerraTech "Forestry Monitoring" [29]. 6. "Service of hazardous natural phenomena" [30]. Service "Map of fires" was developed by SC "Scanex" and is a dynamic map of fires (in the form of hot spots) on the territory of Russia. The service is based on the use of data from MODIS satellites (product MOD14A1 Thermal Anomalies / Fire) and VIIRS. The global coverage of this data is provided by the aforementioned FIRMS system. "VEGA-Science" is an information service developed by the Space Research Institute of the Russian Academy of Sciences (IKI RAS) and focused primarily on the analysis of vegetation on the territory of Russia and neighboring states. The service provides access to long-term archives of space imagery materials (from the Landsat, Sentinel -1/2/3, MODIS-Terra / Aqua, Suomi -NPP, NOAA 20, Proba-V satellites) and the thematic products obtained on their basis. 
The latter include the results of calculating vegetation indices and their use for analyzing the state of vegetation; the results of analyzing the fire situation (points of occurrence of fires, size of burnt areas, etc.), maps of agricultural areas, maps of forests by prevailing species, etc. [14]. Also on the basis of "VEGA-Science" the development of the service "VEGA-Les" has been declared, intended for monitoring and analysis of forest cover (and including the basic functionality of "VEGA-Science") [31]. Technologies developed at the IKI RAS are also used in the Remote Monitoring Information System of the Federal Forestry Agency (ISDM-Rosleskhoz) for monitoring wildfires and their consequences [27,32]. For this purpose, the system, along with ground and aerial observation data, uses space imagery from Landsat, Sentinel-2/3 / 5P, Meteosat, NOAA, MODIS-Terra / Aqua, "Electro-L", "Meteor-M" "Resurs-P", "Kanopus-B". Among the open data of the system are: Landsat, MODIS-Terra / Aqua and "Meteor-M" composite images, maps of forest and non-forest fires based on MODIS-Terra / Aqua and VIIRS-NPP data, daily summary report on forest fires (thermal anomalies) based on the results of space monitoring, cards of individual fires, etc. GIS "Cascade" is a system for space monitoring of emergencies of the Ministry of Emergencies of Russia, providing data on fires (forest and non-forest), while the FIRMS service is used as one of the sources. For a number of territories, thematic products are available, including data on hot spots combined with images from the Terra / MODIS satellite. "Forestry Monitoring" is one of the branch geoinformation services, the development of which has been announced by the Russian company "TerraTech". According to the developer company, the service is aimed at providing industry-specific information to forestry organizations for monitoring forest use and identifying changes in the forest fund (clear-cut areas, fires, windfalls, dead stands). The possibility of obtaining data on fires, confirmed by the Ministry of Emergency Situations, and on hot spots detected by satellite data, is also declared in the Service of Natural Hazards, the developer of which is Space Communications LLC together with Russian Space Systems JSC [30]. Results and Discussion Analysis of existing solutions in the field of thematic remote sensing services showed that such systems and services have the following main advantages: • areal nature of data, efficiency and regularity of their delivery (in comparison with the results of ground monitoring); • possibility of obtaining ready-made thematic products without the need for additional processing (as opposed to services that provide satellite imagery source materials);
2021-08-13T20:05:33.209Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "05097748c2a03ad4bb5e5296fbe52798d757527e", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/806/1/012003", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "05097748c2a03ad4bb5e5296fbe52798d757527e", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Physics" ] }
270686836
pes2o/s2orc
v3-fos-license
Study the response of different genotypes of sesame against larval population of leaf webber and capsule borer [Antigastra catalaunalis, Dup.] A study was conducted at the experimental farm of the PC Unit, Sesame and Niger, College of Agriculture, JNKVV, Jabalpur, Madhya Pradesh, during Kharif 2021. Seventy-five diverse genotypes of sesame were screened against the leaf webber and capsule borer (Antigastra catalaunalis Dup.). The larval population of the leaf webber and capsule borer was recorded at weekly intervals, starting one week after germination and continuing until crop maturity. The highest mean number of larvae per plant was recorded in the genotype Prachi (3.10 larvae/plant/week), followed by JTS-8 (2.67 larvae/plant/week) and IC-204200 (2.47 larvae/plant/week). The lowest average number of larvae (0.60 larvae/plant/week) was recorded in SI-250, followed by IS-1672 (1.03 larvae/plant/week) and TKG-306 (1.23 larvae/plant/week). The observations recorded at seven-day intervals differed significantly from each other in terms of the average number of larvae per plant. The highest larval population was recorded between the 49th (3.50 larvae/plant) and 56th (3.07 larvae/plant) day after sowing, which coincided with the flowering and capsule-formation stage of the crop. Introduction Sesame (Sesamum indicum L.), known as the "queen of oil seeds," is one of the oldest oilseed crops known globally and is cultivated extensively throughout India. It belongs to the family Pedaliaceae. Both East Africa and India are considered its native regions (Nayar and Mehra, 1970; Bedigian, 1985) [5,1]. The crop's popularity has increased due to its high-quality edible oil and its rich content of carbohydrates, protein, calcium, and phosphorus (Seegeler, 1983) [7]. However, sesame faces significant challenges from various insect pests; more than 67 species of insect pests are reported to damage the crop from germination to maturity. Among these, the leaf webber and capsule borer, Antigastra catalaunalis (Dup.), is the most critical pest; it attacks sesame at all growth stages, starting about two weeks post-emergence (Suliman et al., 2004) [9]. This pest affects almost all parts of the plant (shoot, leaf, flower, and capsule), and under severe early-stage attacks it can cause complete crop failure (Karuppaiah, 2014) [3]. The damage is notably more severe during dry seasons and after flowering has begun. To manage A. catalaunalis effectively, cultivar resistance is seen as the most desirable and economical tactic. This approach is an excellent alternative to synthetic insecticides, offering an eco-friendly and environmentally safe strategy. Therefore, identifying sesame genotypes resistant to A. catalaunalis is essential for sustainable pest management.
Sesame seeds were sown in rows of three-meter length, replicated thrice in a randomized block design. The spacing between rows was maintained at 30 cm, while the distance between individual plants within a row was kept at 10 cm. This arrangement allows systematic observation and assessment of each genotype's performance. To monitor the infestation of insect pests, particularly the larval populations of the leaf webber and capsule borer, weekly observations were conducted starting one week after germination and continuing until crop maturity. Larval populations were recorded from five randomly selected plants representing each genotype. This monitoring process provides valuable data on the susceptibility of different genotypes to pest infestation and helps in identifying potentially resistant varieties/donors. Overall, this experiment provides valuable insights for the evaluation of different sesame genotypes against the leaf webber and capsule borer and for providing resistance donors for the development of improved cultivars for sustainable sesame cultivation. Results and Discussion Seventy-five diverse genotypes of sesame were screened against the leaf webber and capsule borer on the basis of larval population per plant under natural infestation (field) conditions. The data on larval population are presented in Table 1. Significant differences among the genotypes in terms of average number of larvae per plant per week were observed. The larval population of the leaf webber and capsule borer was observed from the first observation (14 DAS), but the incidence on almost all the entries at this stage of crop growth was low, varying from 0.00 to 2.67 larvae/plant. The incidence of the leaf webber and capsule borer increased as the crop aged. At the initial stage (14th DAS) the mean weekly larval population per plant was very low, 0.34 larvae/plant; it then increased gradually with crop age to 1.03, 1.85, 2.20 and 2.84 larvae/plant at 21, 28, 35 and 42 days after sowing, respectively. The highest weekly mean population of the leaf webber and capsule borer (3.50 and 3.07 larvae/plant) was observed at 49 to 56 days after sowing, coinciding with the flowering and capsule-formation stage of the crop. At this stage, at 42 and 49 days after sowing, the larval population varied from 1.00 to 4.33 and from 0.67 to 5.00 larvae/plant, respectively. The present findings are corroborated by Makwana et al. (2021) [4], who screened different genotypes of sesame on the basis of the number of larvae/plant/week; they observed the maximum incidence on the 56th DAS (1.82 larvae/plant) and the minimum on the 14th DAS (0.28 larvae/plant). After the 56th day after sowing, a declining trend in the weekly mean population of the leaf webber and capsule borer was observed. The overall mean larval population observed across weeks revealed that the genotypes differed significantly from each other in the overall mean larval population of A. catalaunalis per plant per week. The average larval population of the leaf webber and capsule borer recorded on the different genotypes ranged from 0.60 to 3.10 larvae/plant/week. Among the screened genotypes, the lowest mean larval population (0.75 larvae/plant/week) was recorded on genotype SI-250, followed by IS-1672 (1.03 larvae/plant/week).
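The summary statistics reported in Table 1 can be illustrated with the short sketch below; the dummy counts and the sqrt(x + 0.5) offset are assumptions, since the paper states only that the tabulated values are square-root transformed.

```python
import numpy as np

counts = np.array([            # rows: observation weeks, columns: five sampled plants (dummy data)
    [0, 1, 0, 0, 1],
    [2, 1, 3, 2, 1],
    [4, 3, 5, 4, 3],
], dtype=float)

mean_per_plant_per_week = counts.mean()      # overall mean larvae/plant/week for one genotype
weekly_means = counts.mean(axis=1)           # mean larvae/plant for each observation week
transformed = np.sqrt(counts + 0.5)          # variance-stabilising transform before ANOVA
```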
The findings of [2] regarding the mean larval population of A. catalaunalis, ranging from 3.04 to 6.58 per five plants across different genotypes, support the observations made in the present study. These findings indicate variability in susceptibility to infestation by the sesame leaf webber and capsule borer among sesame genotypes. Other entries that recorded low larval populations of the leaf webber and capsule borer were BM-59 (1.30 larvae/plant/week), T 71 -TKG-21 (1.32 larvae/plant/week) and S-0644 (1.33 larvae/plant). The present findings are corroborated by Makwana et al. (2021) [4], who screened different genotypes of sesame on the basis of the number of larvae/plant/week and also recorded the lowest larval population (0.26 larvae/plant/week) on SI-250, followed by IS-178-C (0.36 larvae/plant/week). Among the screened genotypes, SI-250 RC (0.75 larvae/plant/week), IS-1672 (1.03 larvae/plant/week) and TKG-22 (1.15 larvae/plant/week) were found promising against the leaf webber and capsule borer and may be utilized in a resistance breeding programme for the development of a resistant variety against the leaf webber and capsule borer. Conclusion Significant differences among the genotypes in terms of average number of larvae per plant per week were observed. The overall mean population of A. catalaunalis ranged from 0.75 to 2.98 larvae/plant/week. At the initial stage (14th DAS) the weekly mean larval population per plant was very low, 0.34 larvae/plant; it gradually increased with crop age and reached its maximum (3.50 and 3.07 larvae/plant) at 49 to 56 days after sowing, coinciding with the flowering to capsule-formation stage of the crop. Among the screened genotypes, SI-250 RC (0.75 larvae/plant/week), IS-1672 (1.03 larvae/plant/week) and TKG-22 (1.15 larvae/plant/week) were found promising against the leaf webber and capsule borer and may be utilized in a resistance breeding programme for the development of a resistant variety against the leaf webber and capsule borer. Table 1: Response of different genotypes of sesame against leaf webber and capsule borer (Antigastra catalaunalis). Figures within parentheses are square root transformed values, *Days after Sowing
2024-06-23T15:14:57.774Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "6d1678a71ad2b2113c27f25b8cc55ec1caccf6c3", "oa_license": null, "oa_url": "https://doi.org/10.33545/26174693.2024.v8.i6sf.1341", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "01caedee9b9b9d0dd04c7aa50569f4ee04597a47", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [] }
119137904
pes2o/s2orc
v3-fos-license
Solution of a class of reaction-diffusion systems via logarithmic Sobolev inequality We study global existence, uniqueness and positivity of weak solutions of a class of reaction-diffusion systems of chemical kinetics type, under the assumptions of logarithmic Sobolev inequality and appropriate exponential integrability of the initial data. Introduction A mixture one gets after esterification of one mole of ethyl alcohol by one mole of ethanoic acid contains products (ethyl acetate and water), but also reactants. This is an example of a double displacement reaction (see [34]). We consider here chemical reactions between q 2 species A i , i = 1, . . . , q, as follows where α i , β i ∈ N. We assume that for any 1 ≤ i ≤ q, α i − β i = 0 which corresponds to the case of a reaction without a catalyst. If u = (u 1 , · · · , u q ) denotes the concentration of the species A i then the law of action mass proposed by Waage and Guldberg in 1864 (see again [34]) implies that the concentrations are solutions of the system, for all i ∈ {1, · · · , q}, where k, l > 0 are the rate constants of the two reactions. When considering substances distributed in space, the concentrations change not only under the influence of the chemical reactions but also due to the diffusion of the species over the space, one gets the following kinetic model for a chemical reaction-diffusion equation where for all i = 1, . . . , q, L i is an operator which modelizes how the substance diffuses. We will assume that L i = C i L for some C i 0 and some reference operator L. Moreover, by a change of variables, one can assume that there exist (a posteriori two) constants λ i > 0 such that the system of reaction-diffusion is given by where u(t, x) = (u 1 (t, x), · · · , u q (t, x)) with t 0 and x belongs to the underlying space. The two-by-two system, one of the simplest non trivial example, describes the chemical reaction and the system of equations can by formulated as follow where λ,λ > 0 and u i denotes the concentration of the specie A i and v i the concentration of the specie B i for i = 1, 2. To make things even simpler, we will assume later that λ =λ. More general reaction-diffusion systems, of the following form with prescribed boundary conditions, were intensively studied in the past. Here, Ω is a (possibly unbounded sufficiently smooth) domain of R n , u takes values in R q , C is a usually diagonal q × q matrix which can be degenerate, and F (t, x, ·) is a vector field on R q . Depending on specific choices for C and F (t, x, ·), such systems can present various behaviours with respect to global existence and asymptotic behaviour of the solution. Paragraph 15.4 in [43] is a nice introduction with a lot of classical references. In the above setting, local existence follows from general textbooks on parabolic type partial differential equations (see [23], [30], or for fully general boundary value problems [1]). Global existence question (or how to prevent blow up) gave rise to extensive efforts and to different methods adapted to specific cases (see [2], especially remark 5.4. a), [40], [37] and references therein). Most of these methods consist in deducing L ∞ bounds on the maximal solution from bounds in weaker norms. The survey [37] provides a lot of references, positive and negative results, together with a description of open problems. 
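In explicit form, and consistently with the abstract formulation used later in the paper (constant vector \(\lambda_i(\beta_i-\alpha_i)\) and nonlinearity \(G(u)=\prod_j u_j^{\alpha_j}-\prod_j u_j^{\beta_j}\)), the kinetic and reaction-diffusion systems described above may be written as follows; the displays are given for the reader's convenience and the normalisation of the rate constants is indicative only.

\[
\frac{\mathrm{d}u_i}{\mathrm{d}t} \;=\; (\beta_i-\alpha_i)\Bigl(k\prod_{j=1}^{q}u_j^{\alpha_j}\;-\;l\prod_{j=1}^{q}u_j^{\beta_j}\Bigr),
\qquad i=1,\dots,q,
\]

for the law of mass action and, after the change of variables mentioned above, with \(L_i=C_iL\) and \(\lambda_i>0\),

\[
\partial_t u_i \;=\; C_i L u_i \;+\; \lambda_i(\beta_i-\alpha_i)\Bigl(\prod_{j=1}^{q}u_j^{\alpha_j}\;-\;\prod_{j=1}^{q}u_j^{\beta_j}\Bigr),
\qquad i=1,\dots,q.
\]

We now return to the survey [37] cited above.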
Its first observation is that, for numerous reaction-diffusion systems of interest in applications, the nonlinearity satisfies two general conditions which ensure respectively positivity and a control of the mass (i.e. the L 1 norm) of a solution. M. Pierre investigates how these L 1 estimates (as well as L 1 bounds on the nonlinearity) help to provide global existence. Further works provide results on asymptotic behaviour. Spectral gap, logarithmic Sobolev inequality and entropy methods are often used to quantify exponential convergence of the solution of an equation to equilibrium, and in the context of reaction-diffusion equations (mostly of type (2)) they were used to study the convergence (to constant steady states) in [16,17,15,25]. Geometric characteristics and approximations of global and exponential attractors of general reaction-diffusion systems may be found in [49,20,50] (and references therein) in terms of precise estimates of their Kolmogorov ε-entropy. In these papers, C is of positive symmetric part and the nonlinearity must satisfy some moderate growth bound involving the dimension n to ensure global existence. Other cross-diffusion systems are studied by entropy methods in [11]. One way or another, local or global existence results in the above setting rely on regularity theory for the heat semigroup, the maximum principle, and Sobolev inequality through one of its consequences, Gagliardo-Nirenberg inequalities or ultracontractivity of the semigroup (as well as Moser estimates). (Note nevertheless that an approach based on a nonlinear Trotter product formula is proposed in [43], but seems to impose some kind of uniform continuity of the semigroup). The aim of this article is to prove global existence of a non-negative solution of the reaction-diffusion system (2) with unbounded initial data in a setting where Sobolev inequality possibly does not hold, as e.g. in infinite dimensions or when underlying measure does not satisfy polynomial growth condition. We restrict ourselves to some nonlinearities for which in a finite dimensional setting, L ∞ bounds of the solution (and so global existence) come for free, [37]. Nevertheless, Sobolev inequality has to be replaced by the weaker logarithmic Sobolev inequality (or other coercive inequalities which survive the infinite dimensional limit; see [8], [6], [39], [7]). The celebrated paper [26] of L.Gross established equivalence of logarithmic Sobolev inequality and hypercontractivity of the semigroup. No compactness embeddings hold in this context. The paper is organized as follows. In the next section we describe the framework and the main result of the paper: in the two-by-two case, assuming 1) that C 1 = C 3 and C 2 = C 4 , 2) that the linear diffusion term satisfies logarithmic Sobolev inequality and 3) that the initial datum f is nonnegative and satisfies some exponential integrability properties (made more precise later), then there exists a unique weak solution of the system of reaction-diffusion equation (3) which is moreover nonnegative. Section 3 presents the iterative procedure we follow to approximate weak solutions of our reaction-diffusion type problem. This is based on some cornestone linear problem which is stated there. The two following sections are devoted to the details of the proof : section 4 to the convergence of the iterative procedure to the unique nonnegative weak solution of the nonlinear Cauchy problem, whereas section 5 focuses on the cornerstone linear problem. 
In Section 6 we extend our result to the general case of system (2), and present how operators C i L can be modified. (To give a comprehensive proof we focus in the rest of the paper on the two-by-two case which already contains non trivial difficulty). We recall or detail tools used in the proof in three appendices: the entropic inequality, basics on Orlicz spaces, and finally some further topics on Markov semigroups and Orlicz spaces. Framework and main result An abstract Reaction-Diffusion equation In the following we will consider an underlying Polish space M equipped with a probability measure µ. Let L be a (linear) densely defined selfadjoint Markov operator on L 2 (µ) ≡ L 2 (M, µ), that is the infinitesimal generator of a C 0 Markov semigroup (P t ) t 0 symmetric with respect to µ. It is well known that under these assumptions there exist a kernel p t (x, dy) on (M, B M ), that is a measurable family of probability measures such that, for any t 0, any f ∈ L 1 (µ), and for µ almost every x ∈ M, Let us consider the following equation where, in the two-by-two case, • the unknown u(t, x) = (u 1 (t, x), u 2 (t, x), u 3 (t, x), u 4 (t, x)) is a function from [0, ∞) × M to R 4 ; and L u = (Lu 1 , Lu 2 , Lu 3 , Lu 4 ) is defined componentwise. • the nonlinearity G is quadratic: • C is a diagonal matrix of the following form where we assume that C 1 = C 3 and C 2 = C 4 . (This condition is weakened in section 6). Dirichlet form and logarithmic Sobolev inequality Let (E, D) be the Dirichlet form associated to (L, µ) (see [14], [24], [33], [10]; or [22] for a minimal introduction). For any u ∈ D(L) (the domain of L) and v ∈ D (the domain of the Dirichlet form), one has We will denote E(u) ≡ E(u, u), for any u ∈ D. Recall that D is a real Hilbert space with associated norm u D = (µ(u 2 ) + E(u)) 1/2 . We will assume that the Dirichlet structure (E, µ) satisfies logarithmic Sobolev inequality with constant C LS ∈ (0, ∞), that is for any u ∈ D. Classical function spaces Let I = [0, T ]. For any Banach space (X, · X ), we shall denote by C(I, X) the Banach space of continuous functions from I to X equipped with the supremum norm sup t∈I u(t) X . Let also L 2 (I, X) be the space of (a.e. classes of) Bochner measurable functions from I to X such that T 0 u(t) 2 X ds < ∞. As for vector valued functions, let L 2 (I, All these are Banach spaces. We'll furthermore consider the space L ∞ (I, X) of Bochner measurable X-valued functions on I such that ess sup 0≤t≤T u(t) X < +∞. The reader may refer to [41] for Bochner measurability, Bochner integration and other Banach space integration topics. Bochner measurability in an Orlicz space Let Φ : R → R + given by Φ(x) = exp(|x|) − 1 and Φ α (x) = Φ(|x| α ), α 1. These are Young functions and the Orlicz space associated to Φ α is denoted by L Φα (µ). This is the space of measurable functions f such that for some γ > 0 (or functions whose α power is exponentially integrable). An important closed subspace E Φα (µ) of L Φα (µ) consists of those functions such that (7) holds for any γ > 0. This is the closure of the space of simple functions (finitely valued measurable functions) in L Φα (µ). A stricking property of Markov semigroups is that C 0 property in L 2 (µ) implies C 0 property in any L p (µ); 1 ≤ p < +∞ (see [14]). We will need the following weakened result in the context of Orlicz spaces. 
First regularity result and weak solutions The following lemma exhibits the main role the entropic inequality (see appendix A) and the logarithmic Sobolev inequality play to deal with the nonlinearity we consider. In short, the multiplication operator by a function in L Φ 2 (µ) is a bounded operator, mapping the domain of the Dirichlet form D to L 2 (µ). The reader may note that we will use this lemma to define properly a weak solution of the nonlinear problem below. As for continuity of (8), what precedes shows that, for any t a.e., Integrating w.r.t. t on [0, T ], one gets the result. Finally, continuity of the trilinear mapping follows by Cauchy-Schwarz inequality in L 2 . ⊲ Weak solutions. Let T > 0. We say that a function is a weak solution of (RDP) on [0, T ] provided, for any φ ∈ C ∞ ([0, T ], D 4 ) and any t ∈ [0, T ), When this is satisfied for any T > 0, we'll say that u is a weak solution on [0, ∞). In section 6, we will state the extension of this theorem to the general problem (2). In short, to prove this theorem, we linearize the system of equations by means of an approximation sequence ( u (n) ) n . We show recursively that u (n) (t) is nonnegative, belongs to L ∞ ([0, T ], L Φ 2 (µ)) so that lemma 2.1 guarantees u (n+1) is well defined. This propagation is made precise in a lemma studying the linear cornerstone problem which underlies the recursive approach. We will first focus our efforts to prove convergence of the approximation sequence in the space 4 . Afterwards, we detail a way to study the cornerstone existence lemma. Remark 2 We will exhibit in appendix B a sufficient condition to ensure that f ∈ E Φα (µ), namely, that there exist β > α and γ > 0 such that µ(e γ|f | β ) < +∞. In particular, it implies that, provided f 0 belongs to (E Φ 2 (µ)) 4 , one may choosẽ γ > 0 large enough such that 4 min(C 1 , C 2 ) λC LS <γ, and which will be useful in the proof of existence and uniqueness. Iterative procedure Let us define the approximation sequence ( u (n) ) n∈N in the following way. (First of all, note the parenthesis in u (n) has nothing to do with differentiation, and has been introduced to distinguish the index from powers). Lemma 3.1 (Cornerstone existence lemma) Let L be a Markov generator satisfying logarithmic Sobolev inequality with constant C LS ∈ (0, ∞). Let T > 0 and has a unique weak solution on [0, T ]. Futhermore, provided f , A and B are assumed nonnegative, then the solution u is nonnegative. Recursive equivalence of both systems (RDP n ) and (14) may be seen as follows. Starting from (RDP n ), one easily gets and writting u 1 (t) (and similarly for the other coordinates) gives the announced decoupled system. Conversely, deducing from the decoupled system that u and similarly) follows by induction and uniqueness in lemma 3.1. To be able to define u (n+1) , and hence prove that the iterative sequence is well defined, it remains to check that u , for all i = 1, . . . , 4. This is based on results stated in appendix C and can be shown as follows. We may focus on u , the contraction property of the semigroup stated in lemma C.1 implies that, for any γ > 0, for any t a.e., So that, in particular, for any t ∈ [0, T ], u (n) . 
Following lemma C.2, what remains to be checked is Bochner measurability of the mapping t → u From the corresponding weak formulation (weak-CS) applied to a constant (in Proof of Theorem 2.2 Convergence of the approximation procedure (RDP n ) From now on, we'll use the notation The main idea is to show that, with for some κ > 0 (specified later), the supremum sup t∈[0,T ] Σ n (t) goes to 0 exponentially fast as n goes to ∞ provided T > 0 is small enough. From lemma 3.1, u (n) is defined recursively as a weak solution of the cornerstone linear problem. To make things simpler at this stage, we here perform formal computations to get a priori estimates. Getting the estimates rigorously makes use of Steklov regularisation, which we will illustrate in the proof of the next proposition. * see [46], [19] or [41] for a proof, [21], appendix E.5, theorem 7, for a statement. Estimate of the L 2 -norm derivative We will focus on the L 2 -norm of u and after natural multilinear handlings, Since u (n−1) is nonnegative, using the quadratic inequality ab ≤ a 2 /2 + b 2 /2, one gets All the similar terms are then estimated thanks to the relative entropy inequality (36). For instance, The logarithmic Sobolev inequality (6) and bound (15) give Using the same arguments for all the terms leads to Completely similar terms are obtained when dealing with the L 2 -norms of the other components. After summation in all the components, one gets which is positive thanks to the assumed constraint (13). Use the absolute continuity and the positivity of Reminding the definition (16) of Σ n and that u (n) (0) = u (n−1) (0), after integration over [0, t], t ∈ [0, T ], we obtain the following main estimate Gronwall argument and convergence Gronwall type arguments applied to the estimate (18) give for any t ∈ [0, T ], It follows that sup Performing a similar estimate for 1 It follows that Hence, ( u (n) ) n∈N is a Cauchy sequence: it converges to some function Global existence of the weak solution Let T > 0 fixed as in the previous computation. We will first prove that the limit We now show we can pass to the limit n → ∞ in all the terms. (Dealing with other coordinates u (n) i is similar by symmetry). Thanks to the continuity of the scalar Moreover, as the convergence also holds in C([0, T ], L 2 (µ)), then lim n→∞ µ(u Dealing with the convergence of the term t 0 µ(φu belongs to L ∞ ([0, T ], E Φ 2 ) which will follow indirectly. The details are as follows. By lemma 2.1, τ n ≡ τ . Let us show that this sequence is Cauchy, and so converges to, say, τ (12) But by (15), and again entropic and log-Sobolev inequalities, . This goes to 0 as n, m → +∞. From lemma C.2, what remains to do is to prove E Φ 2 Bochner measurability. Let us summarize what we obtained. One has after taking limit n → +∞, In particular, choosing φ(t) = ϕ ∈ D, the mapping t ∈ [0, T ] → µ(ϕu (∞) 1 (t)) ∈ R is continuous. Then, arguments detailed on page 9 ensure that u Letting separately n (resp. m) to +∞ in (20) shows that All this implies that Assume the diffusion coefficients C 1 and C 2 , the logarithmic Sobolev constant C LS of L, the reaction rate λ and the exponential integrability parameter γ are linked by the constraint Then a weak solution of the Reaction-Diffusion problem (RDP) with initial datum f is unique. We recall basics on Steklov calculus (see [30] for instance), i.e. appropriate time regularization to deal with weak solutions. 
For any Banach space X, and any v ∈ L 2 ([0, T ], X), the Steklov average, defined by ) in X, and a h (v)(t) converges to v(t) in X, for every t ∈ [0, T ]. The space X will be here L 2 (µ) or D depending on the context. Proof of Proposition 4.1 ⊳ Let u and v be two weak solutions of (RDP) with the same initial datum f 0. Let M ∈ (0, ∞) such that, ∀i = 1, . . . , 4, µ(e γ|u i (t)| ) ≤ M , t a.e., (and similarly for v). Let w ≡ u − v and a h (w i )(t) the Steklov average of the i component of w as defined before. Integrating 1 We then use the definition of a weak solution with the constant test function And the other term is bounded from above by We can deal with the four similar terms by the same way: let us focus on the first one. One first uses Once gain, entropic inequality followed by logarithmic Sobolev inequality give Note that, up to a constant, the first term of the RHS is the Steklov average of the , so that, as h → 0, it converges in L 1 ([0, T ]) to that function. Going back to (21) and performing all the explained bounds before passing to the limit h → 0, one gets the estimate (note that w i (0) = 0) Summing over all i's, one gets provided the announced constraint 4 λC LS γ ≤ min(C 1 , C 2 ) is satisfied. Uniqueness follows by Gronwall arguments. ⊲ 5 Proof of Lemma 3.1 Our approach to study the cornerstone linear problem introduced in lemma 3.1 will be as follows. We first complete regularity lemma 2.1 by another preliminary lemma (relative to differentiability) which allow us to perform a recursive approximation of the solution of a mollified problem (with a small action of the semigroup on the extra affine term). On the way, we show a priori estimates which will be useful later to remove the mollification and get a solution of our initial problem. Uniqueness and preservation of positivity are tackled in specific sections. Such an approach was already proposed in [22], and computations look quite similar. The main difference consists in the fact that, as A(t) ∈ L Φ 2 (µ), then one has µ(e γ|A(t)| ) < ∞ for any γ (see appendix B), so that, using of the entropic inequality, contribution of the affine extra term may be made small enough to be dominated by the log-Sobolev constant without further constraint. We now turn our attention to the second term, by another use of (24). (Strong) Absolute continuity follows. Indeed, we deal with the first term as for absolute continuity of Ψ ε (z) above. One has which goes to 0 as h → 0. Convergence of (II) to 0 in L 2 (µ), and this for any t a.e., follows from the easy part of the fundamental theorem of calculus for Bochner integrable functions with values in L 2 (µ) (proved via comparison with strongly Henstock-Kurzweil integrable functions and Vitali covering arguments in [ Finally, we focus on (III). For any s a.e., as 0 < h goes to 0, in L 2 (µ) as P ε z(s) ∈ D(L). And we can use dominated convergence theorem as, for g ε (τ, s) ≡ P τ (P ε z(s)), still using (24). At the end of the day, u is a solution a.e. of (22). Deducing that u is a weak solution is easy. If φ ∈ C ∞ ([0, T ], L 2 (µ)), by bilinearity, uφ is absolutely continuous in L 1 (µ) on [a, T ], 0 < a < T , and so is the real valued function t → µ(u(t)φ(t)). The weak formulation follows when a → 0 in the integration by parts formula The proof is complete. ⊲ A mollified problem Remark 3 In sections 5.2 to 5.4 below, we use notation introduced in the statement of lemma 3.1. 
So T > 0 is fixed, Let us fix ε > 0 and let us consider the following mollified problem We will prove that, for any ε > 0 (and with some more work still at the limit ε → 0), the problem (CS ε ) has a weak solution in [0, T ] that is u (ε) ∈ L 2 ([0, T ], D) ∩ C([0, T ], L 2 (µ)) and, for any φ ∈ C ∞ ([0, T ], D), and any 0 ≤ t ≤ T , To handle this problem, let us consider the following iteration scheme which, as we will prove later, converge to the unique weak solution u (ε) of our problem (CS ε ). Initially, 0 |t=0 = f and then define It follows from Lemmas 2.1 and 5.1 that, for any f ∈ L 2 (µ), u The convergence scheme we detail below is adapted from the one presented in [22] in another context. Proposition 4 (Uniform bound) Fix ε > 0 and f ∈ L 2 (µ). Let u (ε) n be the recursive solution of the mollified problem introduced above. There exists β ∈ (0, +∞) and 0 < T 0 ≤ T both independent of ε and of the initial condition f such that for any n ∈ N, n . For any t a.e., Note that 1 ≤ M γ < ∞ for any γ > 0 since A ∈ L ∞ ([0, T ], L Φ 2 (µ)). By a similar argument, the entropic and the logarithmic Sobolev inequalities give and similarly for the other term. So that . ) 2 ) and integrating with respect to t, Choosing γ > C LS 2 , κ γ ≡ 1 − C LS 2γ > 0 and setting the above inequality implies Hence, by Gronwall type arguments, one gets Let us denote Z n = sup t∈[0,T 0 ] θ n (t). Now, provided we choose γ > C LS , C LS 2γ−C LS < 1, so that, for T 0 > 0 small enough, we end up with Hence, by induction, Note that since the map s → µ(P t (f ) 2 ) + 2 t 0 E(P s (f ))ds is decreasing. It follows that, for any n 0, which is the expected bound. ⊲ Proposition 5 (Existence for mollified problem; ε > 0) For any ε > 0 and any initial datum f ∈ L 2 (µ), there exists a weak solution u (ε) on [0, T ] of the mollified problem (CS ε ) as defined in (weak-CS ε ). n ). For any t 0 a.e., Again thanks to the entropic and the logarithmic Sobolev inequalities, where M γ were defined in the proof of Proposition 4. By the same arguments as before, where T 0 has been defined in the previous proposition, and mimicking what we have done to prove that proposition, this leads to n+1 )(s)ds and wherẽ If we choose γ > C LS , we may take 0 <T 0 ≤ T 0 small enough (and independent of the initial condition f ) so thatηT 0 < 1. Iterating and using uniform bound (27) for n = 1 (and n = 0), one gets . It converges to some u (ε) which is a weak solution in [0,T 0 ] of (CS ε ) (see page 12, but note that things are much simpler here). AsT 0 does not depend on f , one easily extends the solution to the entire interval [0, T ]. ⊲ Uniqueness We now state uniqueness of a weak solution for both cases : with or without a mollification. We omit the proof which is quite similar to the one of proposition 4.1. Proof ⊳ Let ε 1 > ε 0 > 0 and let u 0 = u (ε 0 ) and u 1 = u (ε 1 ) be the associated solutions of the mollified problem (weak-CS ε ). Using Steklov calculus as in the previous proof, we get the same estimate as if we were dealing with strong solutions. Here we avoid such technicalities to focus on the main arguments. Let us denote w = u 1 − u 0 and w = P ε 1 w. One has . Term (I) is bounded by µ(w 2 (t)) as in the previous proof. After integration, using symmetry of the semigroup, one gets , (which is the estimate we would get rigorously after letting h → 0 in the Steklov regularisation). 
After using Gronwall type arguments and taking the supremum over t ∈ [0, T 0 ], 0 < T 0 ≤ T , we note that, if we prove term (II) goes to 0 as ε 1 > ε 0 > 0 both go to 0, then (u (ε) ) ε>0 is Cauchy (as ε goes to 0) in the Banach space L 2 ([0, T 0 ], D) ∩ C([0, T 0 ], L 2 (µ)). Now, by Cauchy-Schwarz inequality, Following lemma 2.1, Choosing T 0 as in Proposition 4, one may pass to the limit n → ∞ in the uniform bound (27) to get that, for any ε > 0, So the second factor of (29) is bounded uniformly in ε 0 . In order to prove convergence to 0 of the other factor t 0 dsµ [(P ε 1 − P ε 0 ) (w(s))] 2 when ε 1 > ε 0 > 0 both go to 0, one makes use of spectral theory and the above uniform bound (30). Details are given in [22,Theorem 4.10]. Non-negativity We prove here that, provided A and B are nonnegative, the weak solution u of problem (CS), with a nonnegative initial datum f , is nonnegative. Rigorous arguments to get this are as follows. We consider the Steklov average a h (u)(t) and its negative part a − h (u)(t) ≡ max(0, −a h (u)(t)). Recall that, as h goes to 0, for any t . Namely, from any sequence going to 0, extract a subsequence (h n ) such that, for any t a.e. in [0, T ], a hn (u)(t) → u(t) in D. By continuity of contractions [3], it follows a − hn (u)(t) → u − (t) , in D, t a.e. and one may check easily that the sequence ( a − hn (u)(t) − u − (t) 2 D ) n is uniformly integrable in L 1 ([0, T ]). Moreover, in W 1,2 ((0, T ), L 2 (µ)), where χ denotes the indicator function. Hence, using the definition of a weak solution (with the constant test function a − h (u)(s) ∈ D), we get We can pass to the limit with h → 0 which yields (as µ (f − ) 2 = 0) for the same reason as above. The proof of Lemma 3.1 is complete. Extension to the general case The chemical reactions we consider here are of the following form for some given integers α i = β i , for any i ∈ F . F = {1, . . . , q} is a finite set. The associated reaction-diffusion equation is (after appropriate change of variables) This equation is a particular form of the abstract equation (RDP) on page 4 with constant vector λ i (β i − α i ), i = 1, . . . , q and nonlinearity G( u) = q j=1 u α j j − q j=1 u β j j . The method we detailed for the two-by-two case may be adapted to this general situation provided the following assumptions hold. Linearity assumptions We assume that i. F may be partitioned as F = ⊔ k∈K F k so that, L i only depends on which F k , k ∈ K, the index i belongs to. We denote byL k the common operator for any i ∈ F k . For any k ∈ K, one has the following: ii.L k is a Markov generator with (selfadjoint in the L 2 space associated with the) invariant probability measure µ k on (M, B M ) (with the same assumptions as in page 4). iii. (L k , µ k ) satisfies logarithmic Sobolev inequality with constant C k . iv. The measures (µ k ) k∈K are mutually equivalent in the strong sense that there exists a measure µ on (M, B M ) and C ∈ (1, +∞) such that Nonlinearity assumptions We assume that, for any k ∈ K, F − k and F + k are not empty. (Note that this replaces, in the present context, the hypothesis we made in the two-by-two case that C 1 = C 3 and C 2 = C 4 .) Initial data assumptions We assume the following common exponential integrability on the initial data. Common integrability assumption. We assume that, for any Iterative sequence We now define an approximation sequence ( u (n) (t)) n∈N which converges to the solution of problem (31). It is obtained recursively as solutions of the following linear problems. 
Let us fix a nonnegative initial datum f satisfying the integrability assumptions introduced before. For any n 0, we will impose u (n) (0) = f and, for n = 0, ∂ t u the other case is similar by symmetry). Let us label elements of F ± k in the following way We consider an onto mapping ν k : Define furthermore, for any i, j ∈ F , α and similarly for β's. Let us note here that, for any i ∈ F + k and j ∈ F − k , β i > 0 and α j > 0. Finally, The iterated sequence is then defined as follows † . In the case i ∈ F − k , And, in the case i ∈ F + k , where Z k,i = r∈ν −1 k (i) δ r . Why the sequence is well defined. Recall Young inequality: for any a 1 , . . . , a q 0, Hence, using also Hölder inequality, ⊲ To prove recursively that the sequence ( u (n) ) n is well defined, we have to split the cornerstone existence lemma into the following two lemmas. Lemma 6.1 (Matrix cornerstone existence lemma) Let (L, µ) be a Markov generator satisfying logarithmic Sobolev inequality with constant C LS ∈ (0, ∞). Let T > 0 and A = A(t) be an N × N matrix with coefficients in L ∞ ([0, T ], L Φ 2 (µ)) and B ∈ (L 2 ([0, T ], L 2 (µ))) N . Then the Cauchy problems and with u + = ((u 1 ) + , . . . (u q ) + ), both have a unique weak solution on [0, ∞) Note that we use that u → u + is a contraction so that it contracts both the L 2 (µ) norm and the Dirichlet form E. In the system defined by (32) and (33) only blocks made of some i ∈ F + k and j's in ν −1 k (i) interact. We now focus on these coordinates. The following lemma ensures that positivity and Bochner measurability (34) propagate along the approximation sequence. Lemma 6.2 (Positivity and propagation of measurability.) Let N 2 and let δ 1 , . . . , δ N −1 0 such that Z ≡ N −1 i=1 δ i > 0. Assume furthermore B(t) = 0 and A(t) is of the following form where a i ∈ L ∞ ([0, T ], E Φ 2 (µ)), i = 1, . . . , N , are all nonnegative. Assume the initial datum f ∈ (L 2 (µ)) N is nonnegative. Then the solution u of (MCS) is nonnegative. Moreover, one has We detail a bit positivity argument (the remaining is similar to the two-by-two case). Let v be the unique weak solution of problem (MCS + ) with initial condition f . We now show v is nonnegative and so it coincides to the unique solution of (MCS) with initial condition f . Thanks to Steklov calculus, the following computation is made rigorous. We focus on the last component (which is the most complicated one). And the third term is trivially nonpositive as the a i 's are assumed nonnegative. Hence, We can state the following theorem. Theorem 6.3 Let L i , i = 1, . . . q, be Markov generators satisfying the linearity assumptions described before. Assume the nonlinearity assumptions are satisfied as well and that f 0 belongs to E Φ 2θ (µ), with θ as in the initial data assumption. Then, for any reaction rates λ i > 0, there exists a unique nonnegative weak solution u of problem (31) on [0, ∞). Lemma A.1 (Entropic inequality) Let µ be a probability measure and let f and g be two measurable functions. Assume f 0 (excluding f = 0 µ-a.e.) such that f log + f ∈ L 1 (µ) and µ(e γg ) < +∞ for some γ > 0. Then f g ∈ L 1,− ext (µ) and The proof is based on the following inequality ∀x ∈ R + , ∀y ∈ R, x y ≤ x log x − x + e y . B Basics on Orlicz spaces Classical properties of Orlicz spaces can be found in [38]. Young functions Let Φ be a Young function, that is Φ : R → R convex, even such that Φ(0) = 0 and Φ is not constant. 
Note that from this, it follows that Φ(x) 0, that Φ(x) → +∞ when x → ∞ and that Φ is an increasing function on [0, +∞). Gauge norm From these properties, it follows that the gauge norm associated to B Φ is indeed a norm. One has The space (L Φ (µ), · Φ ) is a Banach space. Comparison of norms We often have to compare Orlicz norms associated to different Young functions. We already have seen in a footnote that any Young function Φ satisfies |x| Φ(x). It leads to the following lemma. Lemma B.5 (Separability) Assume M is a separable metric space. Then, for any Young function Φ, E Φ (µ) is separable. (Use that B M is countably generated, monotone class theorem and density of simple functions). Duality What follows may be found in [13]. In the case of Young functions with rapid growth (as the Φ α 's introduced before), ∆ 2 condition fails. Consequently E Φ (µ) is a proper Banach subspace of L Φ (µ) (assuming the support of µ is infinite) and L Φ (µ) is not separable. Recall that the conjugate function Ψ * of a Young function Ψ is the Young function defined by Ψ * (y) ≡ sup x 0 (x|y| − Ψ(x)). C Markov Semigroups and Orlicz spaces C.1 Contraction property Lemma C.1 Let Φ : R → R + be a nonnegative convex function. Let (P t ) t 0 be a Markov semigroup on L 2 (µ), for a probability measure µ, as introduced in section 2. In particular, in the case when Φ is a Young function (with domain R), provided f ∈ L Φ (µ), then P t f ∈ L Φ (µ) and (P t ) t 0 is a contraction semigroup on L Φ (µ). Then (43) follows by integration w.r.t. µ and invariance property of P t . C.3 Bochner measurability Let X be a Banach space. Recall that an X-valued function u : I → X defined on a compact interval I is Bochner measurable provided it is an a.e. limit of a sequence of X-valued simple functions on I (see [41] for instance). Proof of proposition 1 By density of L 2 (µ) in L Φ * α and contraction of P t in L Φ * α , C 0 property of P t in L Φ * α follows from C 0 property in L 2 (µ). Indeed, let f ∈ L Φ * α . ε > 0 being fixed, let g ∈ L 2 (µ) such that f − g Φ * α < ε 3 . Then allows to conclude. As a consequence, provided f ∈ E Φα , t → P t f ∈ E Φα is weakly continuous, and so Bochner measurable as E Φα is separable, following Pettis measurability theorem (see page 9 for references).
2014-05-06T07:06:57.000Z
2014-05-06T00:00:00.000
{ "year": 2014, "sha1": "f91717d84a12dc5b3e6dd4dc1e794cf0b1b0967e", "oa_license": "CCBY", "oa_url": "https://ambp.centre-mersenne.org/article/AMBP_2017__24_1_1_0.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "84fe39ec28d1f4b5cf8ef04d2937585b714e7cdf", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
13108793
pes2o/s2orc
v3-fos-license
Elemental Diet Accelerates the Recovery From Oral Mucositis and Dermatitis Induced by 5-Fluorouracil Through the Induction of Fibroblast Growth Factor 2

Mucositis and dermatitis induced by anticancer agents are common complications of anticancer therapies. In this study, we evaluated the efficacy of Elental (Ajinomoto Pharmaceutical Ltd, Tokyo, Japan), an elemental diet with glutamine, in the treatment of 5-fluorouracil (5-FU)-induced oral mucositis and dermatitis in vivo and tried to clarify the underlying mechanisms of its action. Oral mucositis and dermatitis were induced through a combination of 5-FU treatment and mild abrasion of the cheek pouch in hamsters and the dorsal skin in nude mice, respectively. These animals received saline, dextrin or Elental suspension (18 kcal/100 g) by a gastric tube daily until sacrifice. Elental reduced oral mucositis and dermatitis more effectively than dextrin in these animal models. Moreover, the growth-facilitating effects of Elental on HaCaT cells were examined in vitro. MTT assay, wound healing assay and migration assay revealed that Elental could enhance the growth, invasion and migration ability of HaCaT cells. ELISA and Western blotting showed upregulated FGF2 in Elental-treated HaCaT cells. These findings suggest that Elental is effective for the treatment of mucositis and dermatitis, and may accelerate mucosal and skin recovery through FGF2 induction and reepithelization.

Introduction

Oral mucositis and dermatitis are common complications of cancer chemotherapy and radiotherapy. Mucositis causes acute oral pain and can compromise nutritional intake and oral hygiene in head and neck cancer patients. 1 The detailed mechanism of chemotherapy-induced mucositis is still unclear; however, it might be triggered by multiple factors. Chemotherapeutic agents may damage rapidly dividing immature intestinal crypt cells in the gut, as well as more superficial immature mucosal cells in the oropharynx, oral cavity, and skin. [2][3][4][5][6] In addition, anticancer agents may harm dividing stem cells. 3 It was previously reported that chemotherapy can damage the basal epithelial cell layer directly, which causes the loss of the renewal capacity of the epithelium, with subsequent clonogenic cell death, atrophy, and ulceration. However, recent investigations involving morphologic findings, pro-inflammatory cytokines, platelet aggregation, endothelial and connective tissue injury, and tissue apoptosis have suggested that mucositis is not exclusively an epithelial process but involves all the tissues of the mucosa. 2 Moreover, in the case of gut-related toxicity of chemotherapy and radiotherapy, bacterial translocation across a malfunctioning gut epithelium may play a role. 2,7,8 Although numerous types of therapy have been introduced for preventing or decreasing chemotherapy-induced mucositis, the efficacy of these treatments remains limited, [9][10][11][12][13][14][15][16] which is also true in the case of chemotherapy-induced dermatitis. 6 Elental (Ajinomoto Pharmaceutical Ltd, Tokyo, Japan), an elemental diet with l-glutamine, has been used in Japan for decades as a treatment for malnutrition; it has an easily digestible nutrition formula that combines amino acids, carbohydrates, vitamins, minerals and minimal fat.
17,18 Animal studies have shown that supplementation of an elemental diet with glutamine may protect the gut from chemotherapeutic agents and radiation. 3,6 Several authors have reported the benefits of Elental against Crohn's disease, [18][19][20][21][22] and chemotherapy-induced mucositis and stomatitis in cancer patients. 23,24 We have used elemental diet with glutamine (Elental) for improving malnutrition of patients undergoing chemotherapy in these years, and our clinical study revealed the efficacy of Elental for ameliorating chemotherapyinduced oral mucositis and dermatitis in head and neck cancer patients. 25 In this study, we have used animal models to investigate the efficacy of Elental against chemotherapyinduced mucositis and dermatitis in vivo. Moreover, we used Elental in cell cultures to check its growth facilitating effects in vitro and to identify the mechanism of its healing action. Animals Thirty-six male Syrian hamsters were purchased from Japan SLC, Inc (Hamamatsu, Japan) at 4 weeks age. Fifteen female athymic nude mice with CAnN.Cg-Foxnlnu/CrlCrlj genetic background (CLEA Japan, Inc, Tokyo, Japan) were also purchased at 4 weeks age. They were housed in temperature-controlled rooms and received water and food ad libitum. Surgical procedures and animal treatments were conducted in accordance with the Guidelines for Animal Experimentation of Yamaguchi University. Induction of Experimental Oral Mucositis and Dermatitis Oral mucositis was induced in hamsters by 2 intraperitoneal (i.p.) administrations of 5-FU (Wako, Osaka, Japan) on the first and third days of the experiment (60 mg/kg and 60 mg/ kg, respectively) and by superficial scratching on the cheek pouch with a metal brush on the second and third day under anesthesia (pentobarbital, 30 mg/kg, i.p.) according to an experimental oral mucositis model. [13][14][15] In the same way, dermatitis was induced in nude mice by administrations of 5-FU (60 mg/kg, i.p.) twice, on the first and third days of experiment and by superficial scratching on the dorsal skin in nude mice with a metal brush on the second and third days under anesthesia (pentobarbital, 30 mg/kg, i.p.). The metal brush was dragged 3 times in linear fashion across the cheek pouch of hamsters or the dorsum skin of nude mice until erythematous changes were noted. Figure 1A shows the experimental design of our in vivo study. We set up the following 3 groups of hamsters and nude mice, with 12 hamsters or 5 nude mice per group. The 5-FU + abrasion group received saline (1 mL/body/day); the dextrin group received dextrin (18 kcal/100 g body weight/day, Wako); and the Elental group received Elental (18 kcal/100 g body weight/day), which were orally administered daily until sacrifice and 1 hour after the injection of 5-FU on the first and third day of the experiment. In Vivo Experimental Groups The hamsters were sacrificed on the fifth, sixth, seventh, and eighth days under anesthesia (pentobarbital, 300 mg/ kg, i.p.) and then the cheek pouches were removed for the measurement of mucositis area. In case of nude mice, dermatitis area was observed and measured every day. Each lesion was calculated by multiplying the major axis by the minor axis. Wound Healing Assay Cells (1.5 × 10 4 cells per well) were seeded into 24-well plate (Becton Dickinson Labware) and were cultured in DMEM/Ham's F-12 with 10% FBS and 1% penicillin/ streptomycin until a monolayer of cells were formed. 
The cell layer was then gently wounded through the central axis of the plate using a 200 μL pipette tip (yellow tip). After scratching, the cells were treated with different concentrations of Elental (0, 0.1, 0.5, 1, 5, 10, 50, and 100 μg/mL) which was dissolved in DMEM/Ham's F-12 medium without FBS. The migration of cells into the wound was observed at 24 hours by microscope (BX-51-33-FLD2, Olympus, PA, USA). Cell Migration Assay Cell migration assay was performed using a Boyden chamber according to the manufacturer's instructions (Neuro Probe, Gaithersburg, MD, USA). Briefly, 25 μL DMEM/ Ham's F-12 without FBS plus different concentrations of Elental (0, 0.1, 0.5, 1, 5, 10, 50, and 100 μg/mL) was added as chemoattractant in the lower chamber. Next, 5 × 10 3 cells in 50 μL DMEM/Ham's F-12 medium without FBS were seeded on a gelatin-coated polycarbonate membrane in the upper chamber. After the cells were incubated for 24 hours at 37°C in a 5% CO 2 atmosphere, the polycarbonate membrane was washed with phosphate buffered saline, and cells on the top surface of the polycarbonate membrane were removed with a cotton swab. Cells adhering to the lower surface were fixed with methanol, stained with hematoxylin solution and counted under a microscope in 5 predetermined fields (200×). All assays were independently repeated at least three times. Enzyme-Linked Immunosorbent Assay for Quantitative Determination of FGF2 FGF2 contained in cultured medium without FBS either from untreated control or from Elental-treated cells was measured by a microtiter-based sandwich enzyme immunoassay system, which is commercially available and specifically estimates the total amount of FGF2. According to the protocol of the enzyme-linked immunosorbent assay (ELISA) kit, cultured medium was subjected to the ELISA using immunoassay kits for FGF2 (R&D Systems, Inc, Minneapolis, MN, USA). Statistical Analysis All data were expressed as mean ± SD. The significance of the experiment results was determined by Student t test. The differences were considered statistically significant when P < .05. Effect of Elental on Oral Mucositis of Hamster Cheek pouch THE combination of 5-FU administration followed by mechanical trauma to the oral mucosal tissue resulted in oral mucositis on the cheek pouch of the hamsters. After the second mechanical irritation (on day 2), all hamster groups showed severely ulcerated mucosal tissue. Figure 1B shows that the healing rate was faster in animals that had been treated with Elental than in the control group or dextrintreated group. It was interesting to note that the treatment with Elental reduced oral mucositis more than dextrin of the same caloric value in hamster cheek pouch mucositis model. Effect of Elental on Dermatitis of Mouse Dorsal Skin The combination of 5-FU administration followed by mechanical trauma to the dorsum skin tissue resulted in dermatitis on the dorsum of the nude mice. Ulcerated skin tissue was observed in all mice after the second mechanical irritation (on day 2). As shown in Figure 1C and D, the Elental group showed better healing rate than the control or dextrintreated groups. Similar to our observation in the hamster cheek pouch mucositis model, Elental reduced dermatitis more effectively than dextrin of the same caloric value. Effect of Elental on Human Keratinocyte Cell Morphology and Proliferation We could not detect any difference between the morphology of the untreated HaCaT and Elental-treated HaCaT cells. 
As shown in Figure 2, both cells had the same cobblestone morphology and 5-FU treatment induced apoptosis in HaCaT cells. MTT assay was used to measure the growth rate of the Elental-treated and untreated HaCaT cells. Figure 3 shows that, in the good nutrition condition (10% FBS medium), Elental® (5 μg/mL)-treated HaCaT cells had higher proliferative ability than that of untreated HaCaT cells at 24 hours of culture. Also, in the nutritionally-poor condition (0% FBS medium), the growth rate of Elental (0.5-10 μg/mL)-treated HaCaT cells was higher than that of untreated HaCaT cells at 24 hours of culture. Moreover, Elental (0.5-10 μg/mL) Effect of Elental on Wound Healing Ability Wound healing assay revealed that Elental-treated HaCaT cells had higher invasive capacity compared with that of untreated HaCaT cells at 24 hours of culture. As shown in Figure 4, Elental® exerted wound healing effects dose-dependently. Effect of Elental on Migration Ability The migration activity of the Elental-treated HaCaT cells was measured with the Boyden chamber. Figure 5 shows that Elental-treated HaCaT cells had significantly higher migration ability than that of untreated HaCaT cells, while 5 μg/mL Elental showed the most noticeable effect on migration than the other concentrations. Expression of FGF2 in Elental-Treated Cells To clarify the healing acceleration mechanism of Elental against mucositis and dermatitis, we examined the expression of FGF2 in cells by Western blotting. Figure 6A shows that Elental (0.1-50 μg/mL) enhanced the expression of FGF2 in cells compared with the untreated cells, while 5 μg/mL Elental could strongly increase the expression of FGF2. We measured the amount of FGF2 secreted into the culture medium by ELISA. As shown in Figure 6B, the amount of FGF2 secreted from Elental-treated HaCaT cells was significantly higher than that from untreated HaCaT cells, especially from HaCaT cells treated with 5 to 10 μg/ mL Elental. Discussion Mucositis, dermatitis, dysphagia, xerostomia, and hematological toxicities are well known as major side effects of chemotherapy, including molecular-targeted agents. Incidence of severe oral mucositis or dermatitis leads to higher unplanned breaks and delays in cancer treatments with radiation or chemotherapy, which is invariably associated with poorer outcome. [26][27][28][29] However, effective treatments for radiation-or chemotherapy-induced mucositis and dermatitis have not been established yet. [30][31][32][33][34] Elental has an easily digestible nutrition formula that combines amino acids, carbohydrates, vitamins, minerals, minimal fat, and l-glutamine, and its safety has been established. 17 It has been approved and covered by public insurance as a prescription treatment indicated for malnutrition in Japan. Elental is inexpensive, costing <US$4.00 per day and the estimated cost for a 7-week course of Elental is about $112. This elemental diet was reported as useful in the treatment of a number of diseases and in reducing mucosal inflammation in acute Crohn`s disease by lowering the mucosal proinflammatory cytokine production. 18,21,22 Moreover, the effectiveness of Elental in reducing the severity of chemotherapy-induced mucositis and dermatitis was also reported in patients with colorectal cancer and esophageal cancer. 23,24 We previously reported the effectiveness of Elental for the treatment of oral mucositis and dermatitis induced by chemotherapy without any adverse effects related to its clinical use. 
25 In this study, we examined the efficacy of Elental against chemotherapy-induced mucositis and dermatitis in vivo and tried to understand the detailed mechanisms of its action in vitro. Elental had dramatic effects in the recovery of chemotherapy-induced mucositis and dermatitis in our animal models as shown in Figure 1B-D, which was more than our expectations. Interestingly, the treatment with Elental reduced oral mucositis and dermatitis more efficiently than dextrin of the same caloric value in the hamster cheek pouch mucositis model and in the nude mouse dorsum dermatitis model. These findings suggested that Elental might possess actions similar to healing accelerating agents. We therefore investigated the influence of Elental on HaCaT. As Figures 2 and 3 show, Elental did not have any adverse effect on the cells and could exert profound growth-stimulating effect on damaged cells or on cells in nutritionally poor conditions, and mild growth-stimulating effect on cells in good nutritional condition. These data suggest the safety and usefulness of Elental in the treatment of malnutrition. Moreover, the treatment of Elental promoted the wound healing ability and the migration ability of HaCaT cells, as shown in Figures 4 and 5. The above findings imply that Elental might promote the healing of oral stomatitis and dermatitis directly. So, we took particular note of FGF2 because it is reported to play an important role in healing of wounds and skin ulcers. 35 Several studies demonstrated that FGF2 significantly inhibits scar formation after burn injury and can treat wounds that are difficult to cure. 35,36 In fact, the treatment of Elental enhanced the production and secretion of FGF2 as shown in Figure 6A and B. Briefly, Elental may accelerate mucosal and skin recovery through induction of FGF2, and reepithelization. The human recombinant FGF2 agent, trafermin (Fiblast Spray, Kaken Pharmaceutical, Tokyo, Japan), is thought to be a promising agent for the management of severe oral mucositis and dermatitis with tissue defect. 37 Regardless of its benefit for decreasing oral mucositis and dermatitis, we cannot use the trafermin in cancer patients because trafermin has the potential to promote cancer, as FGF-2 has been found to be involved in cell division, angiogenesis, vascular remodeling, hematopoiesis, and tumor progression. 38,39 Therefore, it is not clear how the FGF2 inducing effect of Elental might affect the tumors in cancer patients. However, as Elental is not itself a growth factor like trafermin, it might not induce FGF2 as strongly as trafermin. Therefore, we think Elental might be available for the management of severe oral mucositis and dermatitis in cancer patients. There could be other factors that are responsible for the efficacy of Elental against mucositis and dermatitis. Future investigations should aim to identify these unknown factors that would enable us to understand the mechanism of action of Elental more clearly. Authors' Note Some of the data of this article were presented as a poster (SUN-pp130) at the 37 th ESPEN Conference (September 5-8, 2015, Lisbon, Portugal). Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. 
Funding The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was supported in part by a Grant-in-Aid from the Japanese Ministry of Education, Science and Culture (Grant No. 15K11292). Figure 6. (A) Western blotting revealed that treatment of Elental promoted the production of FGF2 in HaCaT cells, especially 5 μg/mL of Elental increased the expression of FGF2 more profoundly than other concentrations. (B) ELISA was used to measure the amount of FGF2 secreted into the culture medium after Elental treatment. The amount of FGF2 secreted from Elental-treated HaCaT cells was significantly higher than that from untreated HaCaT cells. In particular, a high amount of FGF2 was secreted from HaCaT cells treated with 5 to 10 μg/ mL Elental.
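As a worked illustration of the group comparison used throughout this study (data expressed as mean ± SD, two-group Student t test, significance at P < .05), a minimal Python sketch is given below; the FGF2 ELISA readings shown here are placeholder values for demonstration, not the measured data.

# Illustrative only: two-group comparison of FGF2 ELISA readings (pg/mL).
# Values are placeholders; significance is assessed with a Student t test at P < .05.
from statistics import mean, stdev
from scipy import stats

control = [18.2, 20.1, 19.5, 17.8, 21.0]          # untreated HaCaT (placeholder values)
elental_treated = [27.4, 30.2, 28.8, 31.5, 29.1]  # Elental-treated HaCaT (placeholder values)

t_stat, p_value = stats.ttest_ind(control, elental_treated)  # Student t test (equal variances)
print(f"control: {mean(control):.1f} +/- {stdev(control):.1f} pg/mL")
print(f"treated: {mean(elental_treated):.1f} +/- {stdev(elental_treated):.1f} pg/mL")
print(f"t = {t_stat:.2f}, P = {p_value:.4f}, significant: {p_value < 0.05}")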
2018-04-03T00:38:57.978Z
2017-07-26T00:00:00.000
{ "year": 2017, "sha1": "08badf429ccea16f867215b52cf559c70a17e696", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/1534735417721014", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "08badf429ccea16f867215b52cf559c70a17e696", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
1055107
pes2o/s2orc
v3-fos-license
Trial-based cost-effectiveness analysis comparing surgical and endoscopic drainage in patients with obstructive chronic pancreatitis Objective Published evidence indicates that surgical drainage of the pancreatic duct was more effective than endoscopic drainage for patients with chronic pancreatitis. This analysis assessed the cost-effectiveness of surgical versus endoscopic drainage in obstructive chronic pancreatitis. Design This trial-based cost-utility analysis (ISRCTN04572410) was conducted from a UK National Health Service (NHS) perspective and during a 79-month time horizon. During the trial the details of the diagnostic and therapeutic procedures, and pancreatic insufficiency were collected. The resource use was varied in the sensitivity analysis based on a review of the literature. The health outcome was the Quality-Adjusted Life Year (QALY), generated using EQ-5D data collected during the trial. There were no pancreas-related deaths in the trial. All-cause mortality from the trial was incorporated into the QALY estimates in the sensitivity analysis. Setting Hospital. Participants Patients with obstructive chronic pancreatitis. Primary and secondary outcome measures Costs, QALYs and cost-effectiveness. Results The result of the base-case analysis was that surgical drainage dominated endoscopic drainage, being both more effective and less costly. The sensitivity analysis varied mortality and resource use and showed that the surgical option remained dominant in all scenarios. The probability of cost-effectiveness for surgical drainage was 100% for the base case and 82% in the assessed most conservative case scenario. Conclusions In obstructive chronic pancreatitis, surgical drainage is highly cost-effective compared with endoscopic drainage from a UK NHS perspective. Chronic pancreatitis is a progressive inflammatory disorder, which can cause abdominal pain, various local complications and endocrine-exocrine pancreatic insufficiency. When chronic pancreatitis is associated with an obstructed pancreatic duct, a suitable therapy is ductal decompression, using an endoscopic or a surgical approach.
Published evidence comparing endoscopic and surgical procedures in patients with chronic pancreatitis and an obstructed pancreatic duct showed that surgical drainage of the pancreatic duct was more effective than endoscopic drainage in terms of pain relief and number of follow-up procedures. [1][2][3] However, surgery is a more costly procedure than endoscopy and is believed to be associated with a higher risk of mortality. This trial-based economic analysis aimed to assess the cost-effectiveness of surgical drainage of the pancreatic duct compared with endoscopic drainage, for patients with chronic pancreatitis and an obstructed pancreatic duct, and combined resource use, cost, mortality and patients' quality-of-life data.

ARTICLE SUMMARY Strengths and limitations of this study
▪ The robustness of the results was assessed in the sensitivity analysis by varying relevant estimates using outcomes from reviews of the literature.
▪ All analyses were probabilistic, applying probability distributions to each model parameter and allowing estimation of the empirical distribution of the cost-effectiveness results.
▪ The limited randomised evidence on the topic meant that this analysis was based on a single trial with a relatively small sample size.
▪ The analysis did not include primary care costs associated with the follow-up of patients in the community. However, such costs are likely to be small compared with the cost of procedures and hospitalisation.
▪ This analysis was developed from a UK perspective using data collected in the Netherlands. Caution is recommended before the results are extrapolated to other settings.

A cost-utility model was originally developed by the National Clinical Guideline Centre, Royal College of Physicians of London, based on the 24-month aggregated resource use data from the Cahen trial. 1 This was conducted as part of the development process of the Clinical Guideline on Alcohol Use Disorders, which was commissioned and funded by the National Institute for Health and Care Excellence (NICE). 4 This original analysis concluded that surgery was highly cost-effective compared with endoscopy and led to the recommendation by NICE that National Health Service (NHS) healthcare providers should 'Offer surgery, in preference to endoscopic therapy, to people with pain from large-duct (obstructive) chronic alcohol-related pancreatitis'. 4 Subsequently, when the long-term follow-up data from the Cahen trial (mean 79 months) became available, it was found that the cost per patient was $6006 higher in the endoscopy group, but this difference was not statistically significant (95% CI, −$16 188 to $27 786; p=0.29). 3 At this time point, there were no longer differences in quality-of-life (SF-36) and health utility (EQ-5D based) scores. We present a trial-based cost-utility analysis based on this long-term follow-up data. The Cahen trial 1 3 (the trial) included symptomatic patients with chronic pancreatitis and a distal obstruction of the pancreatic duct (without an inflammatory mass). Thirty-nine patients underwent randomisation: 19 to endoscopic transampullary drainage of the pancreatic duct and 20 to operative pancreaticojejunostomy. In the endoscopy group, following a sphincterotomy and dilation of the pancreatic duct stricture, a 10-French Amsterdam biliary stent was inserted and the stricture reassessed at 3 months. Persistent strictures were treated by repeated dilation and sequential insertion of multiple stents.
Extracorporeal shock-wave lithotripsy was used if there were one or more intraductal stones more than 7 mm in diameter. In the surgical group, a pancreaticojejunostomy was performed in 18 patients by the method of Partington and Rochelle. The pancreatic duct was incised over the full length up to 2 cm from the ampulla. When retrieval of concretions from the head area required further opening of the duct towards the ampulla, a limited wedge resection of pancreatic tissue was performed. In one patient, a Whipple procedure was performed because of peripancreatic inflammation. In another patient, stone extraction required a Frey procedure. The baseline demographic and clinical characteristics of patients in the two treatment groups were similar, with the exception of ongoing alcohol abuse (n=5 in the surgery group; n=0 in the endoscopy group; p=0.05). 1 One patient was lost to follow-up at 6 months after undergoing surgery and was excluded from the analysis. 3 The results of the trial 1 3 concluded that initial surgical drainage of the pancreatic duct is superior to endoscopic treatment in symptomatic patients with advanced chronic pancreatitis, not only based on short-term outcomes but also in the long term. These benefits apply to pain relief and the need for reintervention. METHOD Overview This cost-effectiveness analysis was built from the trial data from randomisation to the end of the long-term patient follow-up (mean of 79 months). 1 3 The trial was approved by the medical ethics committee of the Academic Medical Center, Amsterdam (Current Controlled Trials number ISRCTN04572410). The analysis was developed from an England and Wales NHS perspective using the NICE reference case. 5 The measure of health outcome was the Quality-Adjusted Life Year (QALY). The time horizon used was the mean follow-up of 79 months of the trial. 3 An annual discount rate of 3.5% was applied to both costs and health outcomes incurred after 1 year, as is standard practice for health economic evaluations conducted for the NHS. 5 Utility scores In the trial, 1 3 the EuroQol 5 dimensions questionnaire (EQ-5D) 6 was completed by patients (3-level EQ-5D). The EQ-5D is a generic health state preference measure. 7 Data were collected for each arm at baseline, 6 weeks, 3, 6, 12, 18, 24 and 79 months. We used patient-level EQ-5D data from the trial and generated utility scores for both arms at every follow-up point. The health state preference values (utilities) for EQ-5D profiles were based on time-trade-off valuations by members of the UK general public. 7 Mean imputation was used to manage missing data. Because the baseline utility scores differed slightly between arms (0.335 vs 0.275; table 1), the between-arm difference in utility score at each time point was adjusted for differences in baseline utility as proposed by Manca et al (2005), 8 by applying ordinary least squares linear regression in SPSS 15.0 with baseline utility and trial arm as the only covariates. Because long-term EQ-5D data (post 24 months) were collected only at 79 months, and no difference between groups was demonstrated at 79 months (endoscopy 0.79±0.21; surgery 0.82±0.26; difference −0.03, 95% CI (−0.20 to 0.14), p=0.75), 3 after 24 months we assumed no difference in utility score between the cohorts and applied a constant utility score of 0.79 (from the endoscopy group) to both groups.
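The baseline-adjustment step described above can be illustrated with a minimal sketch (Python). The follow-up utility is regressed on baseline utility and a treatment-arm indicator, and the coefficient on the arm indicator is read off as the adjusted increment for surgery. The data below are illustrative placeholders, not the trial estimates.

```python
# Baseline-adjusted utility increment (Manca-style covariate adjustment), sketch only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_endo, n_surg = 19, 20
baseline = rng.normal(0.30, 0.15, n_endo + n_surg)             # illustrative baseline utilities
arm = np.r_[np.zeros(n_endo), np.ones(n_surg)]                  # 0 = endoscopy, 1 = surgery
followup = 0.45 + 0.5 * baseline + 0.15 * arm + rng.normal(0, 0.1, n_endo + n_surg)

X = sm.add_constant(np.column_stack([baseline, arm]))
fit = sm.OLS(followup, X).fit()
increment, se = fit.params[2], fit.bse[2]                        # adjusted increment for surgery and its SE
print(f"adjusted increment for surgery: {increment:.3f} (SE {se:.3f})")
```

The estimated increment and its SE are what feed the normal distribution used later in the probabilistic analysis.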
The QALYs in the endoscopy group were estimated by assuming a linear transition between the mean utilities at each time point (using the data from the endoscopy column of table 1). For the surgery group, the QALYs were also calculated assuming a linear transition (but at each time point the utility for surgery was the sum of the endoscopy utility and the increment of that time point from table 1). Mortality During the mean follow-up time of 79 months (SD, 24) of the study, 3 three patients died in the endoscopy group and four in the surgery group. One early death was reported within 24 months: this endoscopically treated patient died of a perforated duodenal ulcer. After 24 months, another six patients died at a median time of 45 months (range, 27-59 months): two in the endoscopy group (pulmonary carcinoma; cardiovascular disease) and four in the surgery group (myocardial infarction; sepsis; neuroendocrine tumour; oropharyngeal carcinoma). As these deaths were unrelated to pancreatitis, these were not considered in the base case of this cost-effectiveness analysis but were included in the sensitivity analysis. In the sensitivity analysis, a utility of zero was applied from the time of death for each death in the trial. Resource use and costs The details of the use of diagnostic and therapeutic procedures, the treatment of pancreatic exocrine and endocrine insufficiency, and the time in hospital were collected during the trial. We combined this resource-use data with the most recent UK unit costs. [9][10][11] Diagnostic procedures and therapeutic procedures (including the hospital stay) were costed using the 2010-2011 National Schedule of Reference Costs. 9 Tables 2 and 3 present the diagnostic and the therapeutic procedures performed during the trial and their UK unit cost. 1 3 Changes in pancreatic function (endocrine and exocrine) were assessed during the trial. Based on the 79-month results, 3 and adjusting for baseline function, the proportions of patients for whom insufficiency persisted, resolved or developed, and for whom sufficiency persisted, were estimated for each trial arm. For exocrine insufficiency, treatment with pancreatic enzyme supplementations was costed for 79 months in patients whose insufficiency persisted (endoscopy 74%; surgery 63%), and for 39.5 months in patients whose insufficiency developed (endoscopy 26%; surgery 26%) or resolved (endoscopy 0%; surgery 11%). All patients were recorded as having exocrine insufficiency at some point. The treatment for exocrine insufficiency was assumed to be eight capsules a day of Creon 25 000, current practice in England (daily cost of £2.26). 10 For costing endocrine function, we used a yearly cost of £939 for a regimen of two injections per day of a biphasic insulin preparation. This is the lower yearly cost reported by the GINGER study economic evaluation assessing treatments for type 2 diabetes. 12 This cost is still current according to the most recent MIMS, April 2013. 11 The treatment cost for endocrine function was calculated for 79 months in patients whose endocrine insufficiency persisted (endoscopy 26%; surgery 26%), and for 39.5 months in patients whose insufficiency developed (endoscopy 43%; surgery 20%). The remaining patients did not experience any endocrine insufficiency (endoscopy 31%; surgery 54%).
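The QALY calculation described at the start of this section, a linear transition between the observed utilities (trapezoidal rule) with 3.5% annual discounting after year 1, can be sketched as follows. Utility values and the midpoint discounting shortcut are assumptions for illustration only.

```python
# QALYs from utilities at discrete follow-up points, assuming a linear transition
# between points and a crude midpoint approximation of the 3.5%/year discounting
# applied to outcomes after year 1. Utility values are placeholders, not trial data.
import numpy as np

times_m = np.array([0, 1.5, 3, 6, 12, 18, 24, 79])          # months
utils   = np.array([0.30, 0.55, 0.60, 0.65, 0.70, 0.72, 0.75, 0.79])

def qalys(times_months, utilities, annual_discount=0.035):
    total = 0.0
    for t0, t1, u0, u1 in zip(times_months[:-1], times_months[1:],
                              utilities[:-1], utilities[1:]):
        area = 0.5 * (u0 + u1) * (t1 - t0) / 12.0            # undiscounted QALYs in this segment
        mid_years = 0.5 * (t0 + t1) / 12.0                    # segment midpoint, in years
        if mid_years > 1.0:                                   # discount outcomes incurred after year 1
            area /= (1 + annual_discount) ** (mid_years - 1.0)
        total += area
    return total

print(f"QALYs over 79 months: {qalys(times_m, utils):.2f}")
```

The surgery arm would be handled identically, with each endoscopy utility replaced by the endoscopy utility plus the adjusted increment for that time point.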
Sensitivity analysis Sensitivity analyses were performed to assess the robustness of the results. Two scenarios were tested. The first incorporated all-cause mortality from the long-term follow-up of the trial. 3 The second was a more conservative case scenario, in which estimates for key parameters were used that were less favourable to surgical drainage. In this scenario, parameters associated with the number of therapeutic procedures (the highest cost component) were altered from the base case: (1) conversion to surgery in the endoscopy group; (2) additional endoscopic drainage required in the endoscopy group; and (3) additional surgical drainage required in the surgery group. In addition, the all-cause mortality from the trial was applied in this scenario. Statistical analysis This economic analysis, conducted in MS Excel 2010, presents all results probabilistically, including sensitivity analyses. A probabilistic analysis, using Monte Carlo simulation, applies probability distributions to each model parameter, allowing estimation of the empirical distribution of the cost-effectiveness results. 30 A γ distribution was applied to cost estimates (bounded at 0). The costs of therapeutic and diagnostic procedures, taken from the 2010 to 2011 National Schedule of Reference Costs, 9 were varied using their IQR: the SE of each mean unit cost was estimated manually so that the 25th and 75th centiles of the γ distribution most closely fitted the published IQR of the unit cost. The costs for pancreatic insufficiency treatments were varied in a range of ±20% using a uniform distribution. The distributions were applied to each unit cost before the unit costs were combined with the resource use frequency taken from the trial, and before discounting. For each item of resource use the frequency was given a β distribution (bounded between 0 and 1). A β distribution was also applied in the same context to the probability estimates for pancreatic function from the trial. A β distribution was also applied to the mortality risks from the trial considered in the sensitivity analysis. In addition to the adjustment for baseline imbalance from the trial applied to pancreatic function (endocrine and exocrine), baseline adjustment was applied to one other trial estimate as appropriate: the utility scores. As mentioned earlier, the between-arm difference in utility score at each time point was adjusted for differences in baseline utility by applying ordinary least squares linear regression. The resulting coefficient (increment for surgery) and its SE were then used as inputs in the probabilistic cost-effectiveness analysis. More specifically, at each time point up to 24 months, a β distribution was applied to the utilities of the endoscopy arm and a normal distribution to the increment for surgery estimated when adjusting for baseline differences (table 1; the distribution for each utility in the surgery arm was the sum of the utility in the endoscopy arm and the increment but was truncated at 1). Then, from 24 to 79 months, a β distribution was applied to the same constant utility score applied to both arms, as explained earlier. Results of the base-case and sensitivity analyses were recalculated 5000 times, with all model parameters set simultaneously, each selected at random from the respective parameter distribution.
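One way to parameterise the probabilistic analysis just described is sketched below (Python). The γ distribution is tuned so that its quartiles approximate a published IQR, solved numerically here rather than manually, and β and normal distributions cover probabilities and the utility increment. All numbers are illustrative assumptions, not the values used in the analysis.

```python
# Probabilistic parameter distributions: gamma fitted to a unit-cost mean and IQR,
# beta for probabilities, normal for the adjusted utility increment; 5000 draws.
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

def gamma_from_mean_iqr(mean, iqr_low, iqr_high):
    """Choose the gamma shape (with scale = mean/shape) whose interquartile
    width most closely fits the published IQR width."""
    target_width = iqr_high - iqr_low
    def gap_sq(shape):
        q25, q75 = stats.gamma.ppf([0.25, 0.75], shape, scale=mean / shape)
        return ((q75 - q25) - target_width) ** 2
    res = minimize_scalar(gap_sq, bounds=(0.5, 1e4), method="bounded")
    shape = res.x
    return shape, mean / shape

rng = np.random.default_rng(1)
n_sims = 5000
shape, scale = gamma_from_mean_iqr(mean=3500.0, iqr_low=2800.0, iqr_high=4100.0)
unit_cost  = rng.gamma(shape, scale, n_sims)       # e.g. cost of one therapeutic procedure
p_reinterv = rng.beta(9, 11, n_sims)               # e.g. probability of reintervention
incr_util  = rng.normal(0.15, 0.05, n_sims)        # adjusted utility increment for surgery
print(unit_cost.mean(), p_reinterv.mean(), incr_util.mean())
```

Each of the 5000 joint draws would then be pushed through the cost and QALY calculations to give one simulated incremental result per draw.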
Results presented are the mean of the 5000 computed simulations. This approach was chosen to account for the uncertainty around the unit cost parameters as well as trial outcomes. To estimate a two-sided p value for the incremental cost we took the proportion of the 5000 simulations where costs were lower for endoscopy than for surgery and then multiplied by two. We applied the same approach to the QALYs gained. To estimate CIs, we took the 2.5th and 97.5th centiles from the 5000 simulations. A limitation of our approach is that it does not capture the covariance between the utility at different time points or between resource use and utility, as each is considered independent. This is a limitation with regard to estimating the level of statistical significance but not with regard to our point estimates. Quality-adjusted life years We used the utility scores (endoscopy and increment) presented in table 1 to calculate QALYs for the 24-month trial duration, and applied, from 24 to 79 months to both groups the constant utility score of the endoscopy group at 79 months (0.79±0.21). 3 Considering the higher score at 24 months for the surgery group, this assumption after 24 months was conservative, that is to say biasing against surgery. When no difference in mortality was assumed, the QALY difference at 79 months was 0.44 in favour of surgery (p<0.001; table 4). When all-cause mortality from the trial was captured, the QALY difference still favoured surgery (difference of 0.22, CI: −0.77 to 0.36), but the difference was no longer statistically significant (table 5). Resource use and cost Cost results are reported in 2011 pound sterling. Combining the frequency of each diagnostic and therapeutic procedure performed during the trial 1 3 with UK unit costs from the 2010 to 2011 National Schedule of Reference Costs 9 (tables 2 and 3), we found a higher cost for endoscopy for both diagnostic and therapeutic procedures. However, the difference only reached statistical significance for the therapeutic procedures (£5943/ patient, 95% CI: £86 to 13 290; table 4). The costs of treating exocrine and endocrine insufficiency were higher for the endoscopy group but these differences did not reach statistical significance (table 4). The total cost for the base-case favoured surgery with a statistically significant difference of £7033/patient (95% CI 869 to 14 638). In the sensitivity analysis, data from reviews of the literature were used to vary the cost of therapeutic procedures, so as to test a more conservative case scenario for surgical drainage. The surgery group showed a slightly lower cost for therapeutic procedures (£81, 95% CI: − £5574 to £5780). However, this difference was no longer statistically significant (table 5). The total cost for the sensitivity analysis favoured surgery, but this difference of £1170 did not reach statistical significance (95% CI −4671 to 7066). Cost-effectiveness The result of the base-case analysis was that surgical drainage of the pancreatic duct dominates endoscopic drainage (it was more effective and less costly-tables 4 and 6). The sensitivity analysis showed that the surgical option remains dominant (cost-saving and QALY increasing) in all scenarios, even under the assessed most conservative case scenario (table 6). In the sensitivity analysis, the total cost and QALY differences between surgery and endoscopy were not statistically significant (table 5). 
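The simulation-based inference described above, a two-sided p value from the share of simulations contradicting the observed direction, percentile CIs, and the probability of cost-effectiveness at a willingness-to-pay threshold reported below, can be expressed compactly. The inputs are assumed to be per-simulation incremental results; the draws here are illustrative only.

```python
# p values, CIs and probability of cost-effectiveness from per-simulation results.
import numpy as np

def summarise(delta_cost, delta_qaly, wtp=20000.0):
    """delta_* are arrays of (surgery - endoscopy) differences, one per simulation."""
    p_cost = 2 * np.mean(delta_cost > 0)            # simulations where endoscopy is cheaper, doubled
    p_qaly = 2 * np.mean(delta_qaly < 0)            # simulations where surgery yields fewer QALYs, doubled
    ci_cost = np.percentile(delta_cost, [2.5, 97.5])
    ci_qaly = np.percentile(delta_qaly, [2.5, 97.5])
    net_benefit = wtp * delta_qaly - delta_cost      # incremental net monetary benefit
    prob_ce = np.mean(net_benefit > 0)               # probability surgery is cost-effective at the threshold
    prob_saving = np.mean(delta_cost < 0)            # probability surgery is cost-saving
    return p_cost, p_qaly, ci_cost, ci_qaly, prob_ce, prob_saving

rng = np.random.default_rng(2)
dc = rng.normal(-7000, 3800, 5000)                   # illustrative draws only
dq = rng.normal(0.44, 0.12, 5000)
print(summarise(dc, dq))
```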
However, the probability that surgery is cost-effective compared with endoscopy, assuming a threshold of £20 000/QALY gained, was 82% (100% in the base case; table 6 and figure 1). The probability that surgery is cost-saving compared with endoscopy was 65.8% in the sensitivity analysis (98.8% in the base case). DISCUSSION On the basis of the 24-month aggregated results from the trial, 1 an original cost-effectiveness model was developed to inform recommendations for the NICE Clinical Guideline on Alcohol Use Disorders. 4 Based on the results from the model showing that surgery is cost-effective compared with endoscopy, NICE recommended that healthcare providers 'Offer surgery, in preference to endoscopic therapy, to people with pain from large-duct (obstructive) chronic alcohol-related pancreatitis'. 4 The trial was extended to a long-term patient follow-up of a mean of 79 months, 3 and the results led to the conclusion that, in symptomatic patients with advanced chronic pancreatitis and an obstructed pancreatic duct, initial surgical drainage of the pancreatic duct is superior to endoscopic treatment, not only based on short-term outcomes but also in the long term. These benefits include greater pain relief and reduced need for reintervention. 3 However, surgery is a more costly procedure than endoscopy, and therefore it has been unclear whether these benefits are great enough to justify both the initial investment and the risks associated with surgery. We thus aimed to combine resource use, quality of life and mortality data from the trial, along with UK unit costs, to assess the cost-effectiveness of surgical drainage compared with endoscopic drainage. This trial-based cost-effectiveness analysis was developed from a UK perspective using data collected in the Netherlands. The trial results are transferable to the UK because of the reasonable similarity in terms of patient population, clinical practices and healthcare organisation. It is common practice in health economics to estimate the cost-effectiveness of an intervention from the perspective of one health system using the best available trial evidence conducted in a different country. However, even if it is accepted that the results of this cost-effectiveness analysis are sound from a UK perspective, a comparison of care pathways and unit costs should be made before the results of this study are extrapolated to other settings. The current economic analysis considered the cost of diagnostic procedures, treatment of pancreatic function and therapeutic procedures with the associated hospital stay. The cost of the therapeutic procedures included the original treatment for pancreatitis, retreatment(s) and the treatment of complications. The analysis did not include primary care costs associated with the follow-up of patients in the community. However, such costs are likely to be small compared with the cost of procedures and hospitalisation, and therefore would not affect the conclusions. The limited randomised evidence on the topic led to this analysis being based on a single trial with a relatively small sample size. Nevertheless, the uncertainty around all estimates was accounted for by the probabilistic analysis, which allowed estimation of the empirical distribution of results and the statistical significance of the differences.
Additionally, we conducted sensitivity analyses which confirmed the robustness of the conclusions that surgery is cost-saving and highly cost-effective compared with endoscopy, even under conservative assumptions. Although the study size was relatively small, the probability that surgery is cost-effective was very high across the analyses. This was mainly due to the large improvements in quality of life at each follow-up point for the first 24-month period, as measured by the EQ-5D. The model showed that, in terms of QALYs, the large benefits from improved quality of life outweigh the QALYs lost due to increased risk of mortality, even under a conservative assumption: the difference in all-cause mortality of 5.3% applied in the sensitivity analysis is much higher than the reported mortality associated with surgical drainage (0.9%). 4 The baseline demographic and clinical characteristics of patients in the two treatment groups were similar, with the exception of ongoing alcohol abuse (n=5 in the surgery group; n=0 in the endoscopy group; p=0.05). We did not adjust for this when analysing the EQ-5D data, nor did we estimate productivity losses, but if we had, these would both be likely to favour surgery. Another trial (Dite 2003 2 ) comparing endoscopic and surgical procedures in patients with chronic pancreatitis and an obstructed pancreatic duct showed that, in terms of pain relief, surgical drainage of the pancreatic duct was more effective than endoscopic drainage. No quality-of-life assessment was undertaken in this trial 2 and limited resource-use evidence was reported. Furthermore, extracorporeal shock-wave lithotripsy was not used in the Dite 2003 trial, 2 making it less relevant to current practice than the trial we used in our analysis. 3 Nevertheless, outcomes from Dite 2003 2 were considered in the sensitivity analysis by varying the rate of conversion to surgery and the reoperation rate, which did not change the conclusion of the base-case cost-effectiveness assessment. A retrospective study 31 from Japan based on medical records compared the resource use and medical cost associated with endoscopic drainage versus surgical drainage in patients with painful chronic calcified pancreatitis. A total of 68 patients were classified into an endoscopy group (n=34) or a surgery group (n=34). Patients receiving endoscopy were further divided into two subgroups: a short-period group (patients who could discontinue serial pancreatic stenting within 1 year; n=19) and a long-period group (patients who needed pancreatic drainage by serial endoscopic stenting for more than 1 year; n=15). This study concluded that patients in the long-period endoscopy group required significantly longer hospital stays, more frequent hospitalisations and had higher medical expenses than both the short-period endoscopic treatment group and the surgery group. No difference was found between the short-period endoscopy group and the surgery group. This study is more open to bias, being a retrospective observational study, and the results from the analysis have been influenced by the choice of subgroups compared. The results, however, do suggest that it may be less costly to initiate treatment with surgery or to change approach before 1 year in the case of serial endoscopic stenting. The total cost results in the current trial-based economic analysis showed, for the base case, that surgical drainage is less costly than endoscopy and that this difference is statistically significant.
These results were driven by a difference in the incidence of subsequent therapeutic procedures. In a previous comparative cost analysis of the long-term (79-month) results from the trial, 3 it was shown that surgery was less costly but this difference did not reach statistical significance. The main reason that the earlier analysis did not show statistical significance, whereas the current study does, is the inclusion of the cost of treating endocrine insufficiency in the current study. In conclusion, surgical drainage of the pancreatic duct is highly cost-effective compared with endoscopic drainage for treating patients with chronic pancreatitis and an obstructed pancreatic duct in England and Wales. This conclusion was robust to sensitivity analysis. This analysis also demonstrates that surgery is cost-saving compared with endoscopy when considering all cost components related to patient care in chronic pancreatitis. This trial-based cost-effectiveness analysis lends further support to the NICE recommendation to 'Offer surgery, in preference to endoscopic therapy, to people with pain from large-duct (obstructive) chronic alcohol-related pancreatitis'. However, the results should only be generalised to other healthcare systems with caution. Contributors PL contributed to the conception and design of the research, acquisition of data, analysis and interpretation of the data, statistical analysis, cost-effectiveness modelling, drafting of the article, critical revision of the article for important intellectual content. DW contributed to the conception and design of the research, critical revision of the article for important intellectual content, supervision. DLC, DJG and MB contributed to the acquisition of data, drafting of the article, critical revision of the article for important intellectual content. MGD contributed to the acquisition of data, statistical analysis, drafting of the article, critical revision of the article for important intellectual content. SPP contributed to the conception and design of the research, analysis and interpretation of the data, drafting of the article, critical revision of the article for important intellectual content and supervision.
2016-05-17T06:12:46.321Z
2013-09-01T00:00:00.000
{ "year": 2013, "sha1": "084bf684b83a900480c1aed78f9eebc199384a38", "oa_license": "CCBYNC", "oa_url": "https://bmjopen.bmj.com/content/3/9/e003676.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "084bf684b83a900480c1aed78f9eebc199384a38", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248478709
pes2o/s2orc
v3-fos-license
Mental Health and Wellness of Service Providers Working with People Experiencing Homelessness in Canada: A National Survey from the Second Wave of the COVID-19 Pandemic Objective: This study examined the scope of common mental health problems and perceived impacts of the COVID-19 pandemic among direct service providers working with people experiencing homelessness in Canada. Method: This cross-sectional study used an online survey that was disseminated to homeless service, supportive housing, and harm reduction organizations and networks. Data were collected on depression, anxiety, stress, post-traumatic stress, compassion satisfaction and fatigue, and substance use problems as well as pandemic-related changes in mental health and wellness. A total of 701 service providers completed the survey and were included in data analysis. Descriptive statistics were used to examine the primary research questions, with hierarchical multiple regression models also being fit to explore mental health and wellness differences by occupational service setting. Results: Most direct service providers (79.5%) working with people experiencing homelessness reported a decline in their mental health during the pandemic. There were high rates of common mental health problems within the sample that are largely consistent with those found among health-care workers during the pandemic. Occupational service settings were not associated with the severity of mental health problems, indicating pervasive issues across the workforce, though providers who were younger and spent more time in direct service roles were at greater risk. Conclusions: The common mental health problems and negative impacts of the pandemic among service providers working with people experiencing homelessness highlight a highly vulnerable workforce that could benefit from improved access to supports. Given the similarities between our findings and other studies examining essential workforces, it is recommended that initiatives that provide accessible mental health care to the health-care workforce during the pandemic be expanded to include homeless and social service providers. Introduction Approximately 6,300 workers provide care and support to the 235,000 people in Canada who experience homelessness every year. 1,2 The size of this workforce grows exponentially when including the adjunct social and community service sector, which may also work with people experiencing or at risk of homelessness. 2 The health of service providers is instrumental to the delivery of quality and effective care, as burn out is associated with lower service satisfaction and poorer health outcomes for patients. 3 The consequences of inadequate or poor service delivery are even greater for people experiencing homelessness, as these can lead to service disengagement and prolonged homelessness. 4 Yet, direct service work is demanding and stressful, and homeless sector services often have limited resources and difficulties retaining staff. 5 A pre-pandemic study showed that one-third of emergency shelter workers in Alberta reported post-traumatic stress symptoms, and one-quarter had elevations on an index of burn out. 6 However, beyond this evidence, there is a dearth of research on the rates of common mental health problems among service providers working with people experiencing homelessness. The COVID-19 pandemic represents a third formidable challenge faced by the homelessness sector in Canada, which was already struggling with the affordable housing and overdose crises. 
People experiencing homelessness were identified early on in the pandemic as a population at greater risk of negative outcomes from the virus and pandemic, assertions that are now empirically supported by a robust and growing evidence base. 7,8 Efforts to reduce the spread of COVID-19 led to changes in how services were delivered to people experiencing homelessness (e.g., reduced emergency shelter capacity, provision of virtual supports), alongside the proliferation of sizable outdoor encampments in many cities. 9 The complexities of this changing landscape have far-reaching implications, including to the mental health and well-being of direct service providers working in the homelessness and housing sectors. However, this essential workforce's well-being throughout the pandemic has yet to be examined. This cross-sectional study used national, representative data to examine the scope of common mental health problems and perceived impacts of the pandemic among direct service providers working with people experiencing homelessness in Canada. Participants and Recruitment An environmental scan of homeless, supportive housing, and harm reduction sector service organizations and networks in each province and territory was conducted, with invitations to participate in the study being subsequently sent to over 300 identified agencies and groups. Individuals were eligible to participate in the study if they (a) were 18 years of age or older, (b) worked in Canada, (c) provided direct services to people experiencing homelessness, and (d) worked in homeless (including community-based health services specializing in care for people experiencing homelessness), supportive housing, or harm reduction services. A total of 948 individuals began the online survey. Of them, 701 (73.9%) completed the measures used in this study and were included in the analysis. A CAD $3 donation to a charity of participants' choosing was provided to service providers who completed the survey. This study was reviewed and approved by the Centre for Addiction and Mental Health's Research Ethics Board. Data Collection Data were collected using an online survey that consisted of both standardized measures and instruments developed or adapted for this study. The survey was created in REDCap Electronic Data Capture and hosted at the Centre for Addiction and Mental Health. 10 The survey was available for 2.5 months during the second wave of the COVID-19 pandemic in Canada, from November 12, 2020, to January 31, 2021. During this period of time, the average number of daily cases and deaths due to COVID-19 across the country were 6,200.70 and 114.96, respectively. 11 The survey was available only in English, as a number of measures had not been validated in other languages including French. Common mental health problems were measured using the Depression Anxiety and Stress Scales (DASS-21), 12 Professional Quality of Life Scale (ProQOL), 13 Abbreviated PTSD Checklist for Civilians (PCL-6), 14 the CAGE Adapted to Include Drugs (CAGE-AID), 15 and items examining the effects of the COVID-19 pandemic. The DASS-21 is a set of 3 self-report scales measuring the emotional states of depression, anxiety, and stress. A subscale score is computed for each emotional state ranging from 0 to 42, with higher scores reflective of more severe symptomatology. The measure has adequate convergent and discriminant validity and good internal consistency in both clinical and community samples. 
16,17 The ProQOL is a 30-item measure of the positive and negative effects of helping others who experience suffering and trauma. Despite widespread use of the ProQOL in workplace mental health research, its psychometric properties are not well-established. Due to this measurement limitation, the tool was scored in 2 ways. The first was consistent with the measure's original procedures to enable contextual comparisons between our findings and those of other studies examining the health and well-being of health professionals during the pandemic and the homeless service workforce. This standard scoring approach involved computation of subscale scores for compassion satisfaction, burn out, and compassion fatigue that ranged from 10 to 50; higher scores on the Compassion Satisfaction subscale reflect greater pleasure derived from work, whereas higher scores on the other 2 subscales are indicative of more negative occupational impacts. Internal consistency for the subscales in our sample ranged from adequate to good (Cronbach's αs = .77 to .89). Two additional scores, (1) Compassion Satisfaction and (2) Compassion Fatigue, were computed following procedures based on Rasch analysis, which demonstrated satisfactory content validity (ProQOL-21). 18 The Compassion Satisfaction subscale consisted of 10 items that produced a score from 10 to 36, whereas Compassion Fatigue consisted of 11 items and yielded a score from 10 to 46; higher scores are reflective of greater satisfaction and fatigue, respectively. The PCL-6 is a 6-item screening tool for post-traumatic stress symptoms in the past month. A total score was computed that ranged from 6 to 30, with higher scores suggestive of problems due to post-traumatic stress. The full scale has well-established reliability and validity for use with civilians. 19 The PCL-6 has been shown to account for 94.3% of the variance of the full scale, and a cut-off of 14 has excellent sensitivity and adequate specificity. 14,20 The CAGE-AID was used to screen for problems due to substance use. The 4 yes-no items were summed to create a total score ranging from 0 to 4. Two or more positive responses indicate a positive screen; this approach has adequate sensitivity and good specificity. 15 The effects of the COVID-19 pandemic on the mental health and wellness of service providers were assessed using items developed for this study, with content considerations informed by other COVID-19 surveys. 21,22 Questions asked about contraction of COVID-19, provision of homeschooling, provision of care to non-child dependents (e.g., elderly parents), receipt of the Canada Emergency Response Benefit (CERB), financial problems, changes in work hours, access to personal protective equipment in the workplace, support from work colleagues, changes in work effectiveness, changes in substance use, and changes in mental health and wellness. All items measuring perceived changes in mental health and substance use during the COVID-19 pandemic used a 5-point Likert-type scale. Demographic and occupational information was also gathered on gender, age, ethnicity, relationship status, level of education, occupational service setting, length of time in current role and service sector, hours and term of work, lived experience of homelessness and behavioural health problems, and amount of weekly direct contact with service users. Several items on access to health care drawn from previous protocols were also included.
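The scoring and screening rules summarised above translate directly into code. The sketch below (Python) assumes item-level responses and uses the cut-offs stated in the text; the DASS-21 item numbers and the doubling of the raw subscale sum to the 0-42 range follow the usual published convention and are assumptions here, not details taken from this study.

```python
# Scoring sketch for the screening tools described above (hypothetical item data).
from typing import Dict, List

DASS_DEPRESSION_ITEMS = [3, 5, 10, 13, 16, 17, 21]   # conventional DASS-21 depression items (assumed)

def dass21_depression(responses: Dict[int, int]) -> int:
    """responses: item number -> rating 0-3; raw sum doubled to the 0-42 range."""
    return 2 * sum(responses[i] for i in DASS_DEPRESSION_ITEMS)

def pcl6_positive(items: List[int]) -> bool:
    """Six ratings of 1-5; total ranges 6-30; cut-off of 14 as stated in the text."""
    return sum(items) >= 14

def cage_aid_positive(items: List[bool]) -> bool:
    """Four yes/no items; two or more 'yes' responses indicate a positive screen."""
    return sum(items) >= 2

example = {3: 1, 5: 2, 10: 1, 13: 0, 16: 2, 17: 1, 21: 3}
print(dass21_depression(example),
      pcl6_positive([3, 2, 2, 3, 2, 3]),
      cage_aid_positive([True, False, True, False]))
```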
23,24 Data Analysis Descriptive statistics were used to examine the rates of common mental health problems and effects of the COVID-19 pandemic within the sample. Hierarchical multiple linear regression models were used to explore the extent to which common mental health problems were associated with providers' occupational service settings. Predictor variables were entered into the regression models in 2 blocks. The first block consisted of individual characteristics and occupational roles (gender, age, relationship status, and percentage of work involving direct service contact) as proximal factors of common mental health problems. The second block included occupational service settings (homeless services, harm reduction programs, supportive housing, and community-based health services) as more distal, contextual predictor variables. χ2 tests, Mann-Whitney U tests, and independent-samples t tests were used to examine any differences between survey completers and non-completers. All analyses were conducted using SPSS Version 25. Results The demographic and occupational characteristics of survey completers and non-completers were similar, with no significant differences being found between the 2 groups in age, gender, level of education, household income, and length of time working in one's current job and the sector. Of the 701 direct service providers who completed the online survey, most participants were white women with college diplomas or bachelor degrees who worked full-time in their jobs with people experiencing homelessness (see Table 1). The mean age of participants was 38.74 years (SD = 12.42). Overall, the sample characteristics were similar to 2016 Census data on the composition of the homeless and social service workforces, with the exception of education (a higher proportion of participants had university degrees in our sample). 2 Ontario and Eastern Canada were slightly more represented in our sample proportional to the size of the workforces in those provinces, whereas Quebec, the Prairies, and British Columbia were more underrepresented. 2 Rates of common mental health problems are presented in Table 2. Occupational service settings (homeless service, supportive housing, harm reduction, or community health service) were not significantly associated with common mental health problems in the hierarchical multiple linear regression models after accounting for gender, age, relationship status, and amount of direct service work, with the exception of a small negative association between supportive housing and compassion fatigue (see Tables 3 and 4). More time spent in direct contact with service users and younger age were significantly associated with greater problems across many of the common mental health domains, with small effect sizes. Participants reported health, social, and financial impacts in their lives due to the COVID-19 pandemic. Only 8 (1.1%) participants had tested positive for COVID-19 at any point since March 2020; however, a larger proportion (n = 69; 9.8%) believed that they had contracted COVID-19 but had not been tested for the virus. A total of 557 (79.5%) participants perceived that their mental health had declined; of them, 68.4% reported a slight decline, whereas 31.6% reported a substantial decline. Substance use increased for 274 (39.1%) participants, which was primarily alcohol (27.5%) and cannabis (20.8%).
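The blockwise (hierarchical) regression described in the Data Analysis passage above can be sketched as two nested models, with the change in explained variance inspected when the service-setting block is added. Column names and data below are hypothetical, used only to show the structure of the analysis.

```python
# Hierarchical (blockwise) linear regression: block 1 enters individual and
# occupational covariates, block 2 adds service-setting indicators.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 701
df = pd.DataFrame({
    "depression": rng.normal(12, 8, n),
    "age": rng.normal(39, 12, n),
    "female": rng.integers(0, 2, n),
    "partnered": rng.integers(0, 2, n),
    "pct_direct": rng.uniform(0, 100, n),
    "setting": rng.choice(["homeless", "housing", "harm_reduction", "health"], n),
})

block1 = smf.ols("depression ~ age + female + partnered + pct_direct", data=df).fit()
block2 = smf.ols("depression ~ age + female + partnered + pct_direct + C(setting)",
                 data=df).fit()
print(f"R2 block 1 = {block1.rsquared:.3f}, R2 block 2 = {block2.rsquared:.3f}, "
      f"delta R2 = {block2.rsquared - block1.rsquared:.3f}")
```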
The majority of participants (n = 387; 55.2%) reported that they had been less able to access support from their social networks, but most (n = 535; 76.3%) also felt moderately or extremely supported by their co-workers throughout the COVID-19 pandemic. One hundred and fifty-nine (22.7%) participants reported that at least 1 individual that they served directly had died (any cause) during the COVID-19 pandemic. One hundred and thirty (18.5%) direct service providers identified an unmet need for treatment of mental health or substance use problems within the past year. Multiple reasons were often reported for not receiving behavioural health care, including not having time due to other commitments (54.6%), thinking the problem could be handled without treatment (54.6%), not having enough health insurance to afford treatment or counselling (40.0%), and not having any health insurance and being unable to afford care (33.8%). Discussion The findings provide a snapshot of the mental health and wellness of the workforce that serves people experiencing homelessness in Canada. A total of 79.5% of participants reported a decline in their mental health during the COVID-19 pandemic, suggesting worsening mental health in the workforce. As a survey conducted from November to December 2020 of approximately 18,000 health-care workers in Canada found that 70% reported worse mental health due to the pandemic, 25 our study findings suggest that the pandemic is taking a similar, if not slightly greater, toll on the homeless service, supportive housing, and harm reduction workforces. Results from the standardized measures of common mental health problems revealed a similar, concerning narrative. Approximately one-third of participants reported moderate or more severe symptoms on indices of depression, anxiety, and stress. These rates were only slightly below those found in a small study of hospital-based critical care nurses in Western Canada during the COVID-19 pandemic. 26 With regard to post-traumatic stress symptoms, 41.9% of direct service providers had a positive screen, which slightly exceeded that of a pre-pandemic study of emergency shelter workers in Alberta using the same measure (33% screened positive in that study). 4 The finding may reflect increased exposure to stressful and traumatic events in the workplace, wherein both the pandemic and the worsening overdose crisis are possible contributory factors. Further, the rates of compassion satisfaction and burn out reported by direct service providers are consistent with earlier pre-pandemic research of homeless service workers in the United Kingdom; however, secondary traumatic stress symptoms were more severe among providers in our study. 27 Our findings on compassion satisfaction, burn out, and secondary traumatic stress were also similar to those found in a small study of U.S. frontline health-care workers during the pandemic.
28 As occupational service settings were very minimally associated with common mental health problems, the findings suggest pervasive issues across the workforce, though providers who are younger and spend more time in direct service roles may be at greater risk. Gender was minimally associated with common mental health problems, though additional differences may be obscured by male participants being older and less involved in direct contact with service users than female, transgender, and nonbinary providers. Overall, considering the found rates of common mental health problems and unmet treatment needs, the workforce serving the homeless population should be seen as one that is highly vulnerable and could benefit from improved access to mental health supports. As community-based organizations serving the homeless population are often under-resourced, with many direct service providers also receiving low wages, 2,5 worsening mental health of the workforce during the pandemic may threaten its sustainability. Given that insufficient time due to other commitments was among the most frequently reported reasons for not seeking needed behavioural health treatment, service providers could benefit from more flexibility in their workloads and hours. The provision of more wellness days and the expansion of relief staff rosters are expected to give service providers more time for help-seeking. Interventions are also needed to support community-based direct service providers working with people experiencing homelessness. Initiatives to provide accessible mental health care, including psychotherapy and psychiatric services, to frontline health-care workers during the pandemic could be aptly expanded to include those working in homeless and social services. Development of similar frontline wellness services in jurisdictions without such programs is also recommended. Governmental financial support to expand paid sick leave and enhance job security would also be beneficial for reducing health-and financial-related stress among the workforce. The study findings also raise concern about the levels of grief and loss within the workforce. Given the high rates of overdose, suicide, and victimization among people experiencing homelessness, service providers who work with this population are regularly exposed to and grieve the deaths of people they support. 29,30,31 Further, given the higher mortality rate associated with COVID-19 in the homeless population, the pandemic has likely increased service providers' exposure to death and loss. 7 People with lived experience of homelessness, mental health, and substance use problems who work in "peer" roles are especially vulnerable, as they are less likely to receive health benefits compared to other direct service staff. 32 As more than 1 in 5 participants reported that they had served at least 1 individual who had died during the pandemic, increasing access to grief counselling through partnerships between mental health and social service systems is recommended. Organizations can also support direct service providers by engaging them in the development of "for staff, by staff" interventions for grief and loss. This study had several limitations. First, due to the cross-sectional study design, the extent to which the COVID-19 pandemic contributed to the high rates of common mental health problems found in this study is unknown. 
Although there is cause for concern with regard to the mental health and wellness of the workforce, this may improve as Canada and rest of the world transition out of the pandemic. Nevertheless, this should be monitored closely and investigated further. Second, our study used convenience sampling, and it is unknown how many homeless service, supportive housing, and harm reduction agencies disseminated the survey invitation within their organizations. Further, direct service providers who were on leave from work for mental health reasons during the recruitment period or lost their jobs due to the pandemic are likely highly underrepresented in the sample. Because of this, it is possible that the found rates of common mental health problems and impacts of the COVID-19 pandemic on this workforce are underestimated. Third, direct service providers from Quebec were notably underrepresented in the sample, likely due to the survey only being available in English. As such, the findings may be less applicable to the homeless service, supportive housing, and harm reduction workforces in that province. Conclusion and Future Directions The study findings highlight the high rates of common mental health problems among direct service providers working with people experiencing homelessness in Canada, which most perceived to have worsened during the pandemic. As burn out and secondary traumatic stress can precipitate staff turnover in health and social services 33,34 -a prevalent human resources issue in the homelessness sector 5 -it is important these mental health problems be addressed. Deteriorating mental health among direct service providers in the homeless service, supportive housing, and harm reduction workforces may also increase risk of negative service delivery outcomes for people experiencing homelessness. Interventions to support the workforce's mental health needs throughout and following the pandemic are needed. Further investigation to determine which groups are most at risk of mental health and wellness problems within this workforce is also recommended.
2021-05-21T06:16:49.611Z
2021-05-20T00:00:00.000
{ "year": 2021, "sha1": "22f273c33583efb494c67218ee543548e38a9e11", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Sage", "pdf_hash": "6bca4c7e6b0097e85666bdcd2ea892e9a90e0741", "s2fieldsofstudy": [ "Medicine", "Psychology", "Political Science" ], "extfieldsofstudy": [ "Political Science", "Medicine" ] }
54027561
pes2o/s2orc
v3-fos-license
In vivo pharmacokinetic/Pharmacodynamic modeling of Enrofloxacin against Escherichia coli in broiler chickens Background Systemic Escherichia coli infections cause early mortality of commercial broiler chickens. Although enrofloxacin has long been used in poultry, the in vivo pharmacokinetic/pharmacodynamic (PK/PD) relationship of enrofloxacin against E. coli is unclear. The present study aimed to establish an in vivo PK/PD model of enrofloxacin against E. coli in seven-day-old chicks and to ascertain whether the selection of target organ for PD determination is critical for parameter magnitude calculation in enrofloxacin PK/PD modeling. Results The in vivo effectiveness of enrofloxacin against E. coli in different organs varied, with the Emax ranging from − 4.4 to − 5.8 Log10 colony forming units (cfu)/mL or cfu/g. Both the surrogate AUC0–24/MIC of enrofloxacin or AUC0–24/MIC of the combination of enrofloxacin and ciprofloxacin correlated well with effectiveness in each organ. The AUC0–24/MIC ratio of the combination of enrofloxacin and ciprofloxacin producing bactericidal and elimination effects were 21.29 and 32.13 in blood; 41.68, and 58.52 in the liver; and 27.65 and 46.22 in the lung, respectively. Conclusions The in vivo effectiveness of enrofloxacin against E. coli in different organs was not identical after administration of the same dosage. To describe the magnitude of PK/PD parameter exactly, bacterial loading reduction in different organs as PD endpoints should be evaluated and compared in PK/PD modeling. The selection of a target organ to evaluate PDs is critical for rational dosage recommendation. Background It is estimated that 50% of total poultry loses could be attributed to first week mortalities [1]. Among them, over 50% of mortalities are caused by bacterial infections, primarily Escherichia coli [2]. Systemic E. coli infections contribute significantly to the early mortality of commercial broiler chickens [3]. However, because of the high diversity in virulence-associated genes and serotypes, effective vaccines against E. coli challenge are rare [4]. Until now, using antimicrobials has been the main strategy to control E. coli infections in the poultry industry. Enrofloxacin, a second-generation fluoroquinolone, is commonly used in chickens because of its favorable pharmacokinetic (PK) profile and its excellent activity against gram-negative aerobic bacteria and some gram-positive bacteria [5,6]. However, with the extensive use of enrofloxacin, resistance has emerged [7]. Enrofloxacin is metabolized to ciprofloxacin, which is used clinically in humans, and there are reports showing that resistance genes for fluoroquinolones could transfer to other organisms under antimicrobial pressure [8,9]. Thus, the non-rational usage of enrofloxacin runs the risk of leading to bacterial resistance and potential health hazards in humans [10][11][12]. Thus, there is a growing need to optimize the use of enrofloxacin. Optimizing the use of an antimicrobial should be based on a good understanding of its PK and pharmacodynamic (PD) relationship in target animals against specific bacterial species [13]. Although ex vivo PK/PD modeling of enrofloxacin has been evaluated in buffalo calves, swine, and chickens against E. coli, Pasteurella multocida, and Salmonella typhimurium [14][15][16][17][18][19][20], to the best of our knowledge, there are no in vivo PK/PD modeling studies of enrofloxacin against E. coli in chicks. 
In vivo PK/PD modeling has great advantages over ex vivo modeling in describing the PK/PD relationship [21], especially for enrofloxacin, whose metabolite, ciprofloxacin, (another fluoroquinolone) has almost the same potency as enrofloxacin. Therefore, for PK/PD modeling of enrofloxacin, it is important to involve its metabolism to ciprofloxacin in the modeling. According to our previous study, the selection of the target organ for PD determination is critical for parameter magnitude calculation in antimicrobial PK/PD modeling [22]. Whether this is true for enrofloxacin requires further investigation. In the present study, to further understand the PK/PD relationship of enrofloxacin, especially whether the selection of a target organ for PD determination is critical for parameter magnitude calculation in PK/PD modeling, broilers were used as an animal model. The following aspects were investigated: (1) The pharmacokinetics of enrofloxacin and its metabolism to ciprofloxacin were determined at three different dosage administrations; (2) the in vivo PK/PD modeling of enrofloxacin against E. coli was developed using the reductions in the bacterial burden in the blood, liver, and lung as the PD endpoints; (3) whether the concentration of ciprofloxacin influence the in vivo PK/PD modeling of enrofloxacin was evaluated; and (4) the corresponding magnitude of PK/PD parameters for a certain efficacy were determined. In vitro susceptibility The MICs of enrofloxacin and ciprofloxacin against E. coli O78 were the same (0.5 μg/mL). The corresponding MBC values were 0.5 and 1 μg/mL respectively. The MIC and MBC of enrofloxacin in serum were identical (0.5 μg/mL). The MPC of enrofloxacin was 3.2 μg/mL ( Table 1). E. coli infection model Clinical signs of colibacillosis, such as depression, decreased feeding, diarrhea, and fever were observed 24 h after challenge with E. coli O78. After dissection, perihepatitis and pericarditis were obvious. The bacteria load in the blood, liver, and lung were 7.2 ± 0.92 Log 10 cfu/ mL, 6.4 ± 0.14 Log 10 cfu/g, and 6.1 ± 0.17 Log 10 cfu/g respectively. The bacterial load in the three organs was similar among different chicks. The death rate was 10%. Pharmacokinetics The serum drug concentration-time profiles of enrofloxacin and ciprofloxacin after enrofloxacin administration at three dosages are illustrated in Fig. 1 and Fig. 2. The PK parameters of enrofloxacin and ciprofloxacin are shown in Table 2 and Table 3. The time of peak concentration (T max ) for enrofloxacin and ciprofloxacin were about 3.3~3.4 h and 4.3~5 h, respectively, with peak concentrations (C max ) of 0.16, 1.76, and 2.86 μg/mL for enrofloxacin at 1, 10, and 20 mg/kg, respectively; and of 0.03, 0.10, and 0.37 μg/mL at corresponding doses for ciprofloxacin. The C max of ciprofloxacin was much lower and emerged later than enrofloxacin. The AUC 0-24 of enrofloxacin at 1, 10, and 20 mg/kg b.w. was 1.64, 17.95, and 30.07 h, respectively, and the corresponding values for ciprofloxacin were 0.61, 1.42, and 4.93 h, respectively. The AUC 0-24 values of enrofloxacin were 2.6, 12.6, and 6.09 times higher than those of ciprofloxacin at doses of 1, 10, and 20 mg/kg b.w. respectively. Dose proportionality was observed for the AUC 0-24 of enrofloxacin and ciprofloxacin in the range of 1-20 mg/kg with r 2 of 0.9868 and 0.9035, respectively. Thus, the AUC 0-24 of other doses between 1 and 20 mg/kg could be calculated. 
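The PK summaries reported above (Cmax, Tmax, AUC0-24 and the dose-proportionality check) follow from standard non-compartmental steps, sketched below. The concentration-time values are illustrative, not the observed data; the AUC values used in the dose-proportionality regression are the ones reported in the text.

```python
# Non-compartmental PK summaries and dose-proportionality check (sketch only).
import numpy as np

t = np.array([0, 0.5, 1, 1.5, 2, 4, 6, 8, 12, 24], dtype=float)               # h
c = np.array([0, 0.3, 0.8, 1.2, 1.5, 1.7, 1.5, 1.2, 0.7, 0.2])                # ug/mL, illustrative

cmax, tmax = c.max(), t[c.argmax()]
auc_0_24 = float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t)))                  # linear trapezoidal rule
print(f"Cmax={cmax:.2f} ug/mL at Tmax={tmax:.1f} h, AUC0-24={auc_0_24:.2f}")

# Dose proportionality: AUC0-24 at 1, 10 and 20 mg/kg regressed on dose
# (AUC values as reported in the text for enrofloxacin).
dose = np.array([1.0, 10.0, 20.0])
auc = np.array([1.64, 17.95, 30.07])
slope, intercept = np.polyfit(dose, auc, 1)
r2 = np.corrcoef(dose, auc)[0, 1] ** 2
print(f"AUC ~ {slope:.2f}*dose + {intercept:.2f}, r2 = {r2:.3f}")
```

With such a linear relationship, the AUC expected at an intermediate dose can be interpolated, which is how AUC values for doses between 1 and 20 mg/kg were obtained for the PK/PD simulations.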
The relationships between the effectiveness (bacteria loading reduction) of enrofloxacin in different organs and PK/PD indices of enrofloxacin, or the combination of enrofloxacin and ciprofloxacin, are shown in Fig. 3 and Fig. 4. The surrogate AUC 0-24 /MIC correlated well with effectiveness in each organ, with r 2 values greater than 0.85. The in vivo effectiveness of enrofloxacin against E. coli in different organs varied, with E max ranging from − 4.4 to − 5.8 Log 10 cfu/mL. When the PK of enrofloxacin alone was used for simulation, the AUC 0-24 /MIC values of enrofloxacin for the bactericidal effect in the blood, liver, and lung were 19.32, 32.15, and 23.41, respectively. Discussion There are several infection routes for E. coli when simulating colibacillosis, such as intramuscular (IM) and oral administration [16,23]. The complicated nature of the gastrointestinal tract could result in the bacterial load in different organs after oral administration being unstable. To obtain a stable bacterial concentration in the different organs, the IM infection route was chosen. In the present study, colibacillosis was achieved through inoculation of 10 7 cfu/mL E. coli in chickens. However, the infection dose was lower than that used in a previous report [16], where colibacillosis was induced by oral gavage with 8 mL of E. coli culture containing 1.2 × 10 9 cfu/mL. The differences in infection dose might be explained in two ways: the broilers used by Sang were 39 days old, whereas ours were 7 days old; and the inoculation method was different (oral gavage in Sang's report vs. IM injection in this study). The PKs of enrofloxacin have been investigated in goats, pigs, calves, horses, and sheep [24][25][26][27][28]. It has also been studied in chickens [29][30][31]. The elimination half-life (T 1/2β ) values in this study (9.78-11.4 h) were similar to those in previous reports [30]. After oral administration of 10 mg/kg of enrofloxacin, the AUC 0-24 value (17.95 h) in this study was much lower than that reported previously (35 h in Da Silva's report and 25.35 in Mekala's) [30,32]. The difference illustrates that the pathological state affects the total amount of the drug in the blood. Although ciprofloxacin is the main active metabolite of enrofloxacin, few studies have reported the concentration of ciprofloxacin. Da Silva reported that the concentration of ciprofloxacin was lower than their limit of quantification (LOQ) (0.2 μg/mL) in healthy chickens [30]. With an LOQ of 0.02 μg/mL, the concentration of ciprofloxacin in the present study was detected even after enrofloxacin administration at a dose of 1 mg/kg b.w. The biotransformation of enrofloxacin to ciprofloxacin at doses of 10 and 20 mg/kg b.w. was 7.9 and 15.3%, respectively, which was in accordance with a previous study [16]. However, the biotransformation rate was as high as 37% for the low dose (1 mg/kg b.w.). The moderate concentration of ciprofloxacin indicated that the role of ciprofloxacin should be considered in pharmacodynamic studies of enrofloxacin. A good linear relationship between dosage and AUC 0-24 was observed for enrofloxacin and ciprofloxacin. This phenomenon was similar to that reported in previous studies [21,33]. To the best of our knowledge, there has been no in vivo PK/PD modeling study of enrofloxacin against E. coli in chicks. One of the best PK/PD parameters for fluoroquinolones is AUC 0-24 /MIC [34][35][36], and this study further confirmed this conclusion.
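The relationship between AUC0-24/MIC and bacterial-count reduction is typically summarised with a sigmoid Emax (Hill) model, from which the exposure needed for a given effect (for example a 3-log10, bactericidal reduction) can be back-calculated. The sketch below uses hypothetical data points and a log-transformed parameterisation for numerical stability; it is not the fitting procedure or the estimates of this study.

```python
# Sigmoid Emax (Hill) fit of log10 cfu change against AUC0-24/MIC (hypothetical data).
import numpy as np
from scipy.optimize import curve_fit

auc_mic = np.array([1, 2, 5, 10, 20, 40, 60, 120], dtype=float)
log_change = np.array([1.7, 1.2, 0.2, -1.0, -2.6, -4.0, -4.8, -5.2])   # log10 cfu change vs start

def hill_on_log(logx, e0, emax, log_ec50, slope):
    """Sigmoid Emax model written on log10(AUC/MIC)."""
    return e0 + (emax - e0) / (1.0 + 10 ** (slope * (log_ec50 - logx)))

p0 = [1.7, -5.5, np.log10(20.0), 1.5]
popt, _ = curve_fit(hill_on_log, np.log10(auc_mic), log_change, p0=p0, maxfev=10000)
e0, emax, log_ec50, slope = popt

target_effect = -3.0                                   # 3-log10 kill, i.e. bactericidal
frac = (target_effect - e0) / (emax - e0)
log_target = log_ec50 - np.log10(1.0 / frac - 1.0) / slope
print(f"E0={e0:.2f}, Emax={emax:.2f}, EC50={10**log_ec50:.1f}, slope={slope:.2f}; "
      f"AUC0-24/MIC for a bactericidal effect ~ {10**log_target:.1f}")
```

Fitting the same model separately to the blood, liver and lung data is what yields organ-specific AUC0-24/MIC targets of the kind reported above.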
Both the surrogate AUC 0-24 /MIC for enrofloxacin or the combination of enrofloxacin and ciprofloxacin correlated well with effectiveness in each organ. It seems that the metabolism of ciprofloxacin has little influence on the PK/PD modeling of enrofloxacin. However, whether the active metabolite plays a role in emerging resistance or has an impact on dosing optimization needs further investigation [37], because optimization of the dosing regimen involves not only maximizing therapeutic outcome, but also minimizing the risk of developing resistance [38][39][40]. The values of AUC 0-24 /MIC for the bactericidal effect were 19.3-32.15 in the different organs using only the concentration of enrofloxacin for simulation, and 21.29-41.68 when using the combined concentrations of enrofloxacin and ciprofloxacin for simulation. The values were much lower than those of enrofloxacin against E. coli or Salmonella typhimurium in the intestinal content of infected chicken calculated from ex vivo PK/PD modeling (1065.93 and 719.33, respectively) [14,16]. Several reasons may explain this significant discrepancy. First, the components of the intestinal content are very complex, a large proportion of drugs may exist in a bound form and show no antimicrobial effect; however, in our PK/PD modeling, the whole amount of the drug was involved in the AUC 0-24 /MIC calculation; therefore, the value of AUC 0-24 /MIC in the intestines may be higher than that in serum to achieve the same effect. Second, as reported previously, the discrepancy between ex vivo PK/PD modeling and in vivo PK/PD modeling was obvious [21]. Using the bacterial burden reduction in each organ as PD endpoints, the value of the PK/PD parameter, AUC 0-24 /MIC, to attain the same effect was different. The AUC 0-24 /MIC value for the bactericidal effect in the liver was higher than that in lung, and twice than that in blood. This phenomenon was also observed in our previous study. The AUC 0-24 /MIC values of danofloxacin against Salmonella typhimurium for the same effect in different organs also showed marked differences [22]. Similar results were also reported in other studies [20,41]. The possible reason for these differences may lie in the differences in the initial bacterial load, the concentration diversity of drugs in each organ, and the complicated structures among different organs. The precise explanation requires further study. Usually, bacterial load reduction in a single organ is used for PD evaluation in most in vivo PK/PD studies, and for ex vivo PK/PD studies, the antibacterial effect of drugs in serum or other body fluid is used for PD calculation [12,40,[42][43][44][45][46][47]. However, according to our results, for a systemic infection by bacteria, to describe the relationship of PK and PD exactly, bacterial loading reduction in different organs, as PD endpoints, should be compared in PK/PD modeling and the selection of a target organ for PD evaluation is critical in rational dosage recommendation. The results obtained using this model need to be validated by clinical trials in relevant animal species. However, it is still a critical step to increase our understanding of PK/PD relationships for antimicrobials. To simulate the clinical use of enrofloxacin, enrofloxacin was administrated via gavage in this study; however, the most recent basic principles of the prudent use of antimicrobials do not support the group oral administration of fluoroquinolones in animals. 
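To make the PK/PD integration step concrete, the sketch below fits a sigmoid Emax relationship between AUC0-24/MIC and the observed change in bacterial burden, in the spirit of the modeling discussed here (the study itself used the sigmoid Emax routine of WinNonlin). The exposure-effect pairs are hypothetical placeholders, not the study data, and the parameterization shown is one common form of the model.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid_emax(ce, e0, emax, ec50, n):
    """Sigmoid Emax model: effect falls from E0 (no drug) by up to Emax at high exposure."""
    return e0 - emax * ce ** n / (ec50 ** n + ce ** n)

# Hypothetical AUC0-24/MIC ratios and changes in log10 cfu (placeholders, not measured data)
ce = np.array([0.0, 5, 10, 20, 40, 80, 160])
e = np.array([1.8, 0.9, -0.5, -2.2, -3.9, -4.9, -5.2])

popt, _ = curve_fit(sigmoid_emax, ce, e, p0=[2.0, 7.0, 20.0, 1.5], bounds=(0, np.inf))
e0, emax, ec50, n = popt
print(f"E0={e0:.2f}, Emax={emax:.2f}, EC50={ec50:.1f}, N={n:.2f}")

# AUC0-24/MIC needed for a bactericidal effect (E = -3), solved from the fitted curve
target = -3.0
x = (e0 - target) / (emax - (e0 - target))    # equals (Ce/EC50)^N at the target effect
print(f"AUC0-24/MIC for E = -3: {ec50 * x ** (1 / n):.1f}")
```

Breakpoint values such as the bactericidal and elimination targets quoted in this study are obtained analogously, by inverting the fitted curve at E = -3 and E = -4.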
Conclusions

In conclusion, an in vivo PK/PD model of enrofloxacin against E. coli in seven-day-old chicks was established using bacterial load reduction in several organs as the PD endpoints. The in vivo effectiveness of enrofloxacin against E. coli in different organs varied, with Emax ranging from −4.4 to −5.8 log10 cfu/mL. Both the surrogate AUC0-24/MIC of enrofloxacin and the combined AUC0-24/MIC of enrofloxacin and ciprofloxacin correlated well with effectiveness in each organ. The combined AUC0-24/MIC ratios producing bactericidal and elimination effects were 21.29 and 32.13 in blood, 41.68 and 58.52 in liver, and 27.65 and 46.22 in lung, respectively. The magnitude of the PK/PD index for the same effect was lowest in blood but highest in the liver, indicating that at the same dosage, the in vivo effectiveness of enrofloxacin against E. coli is not identical across organs. This study emphasizes the importance of target organ selection for PD evaluation in PK/PD modeling.

Organisms, chemicals, and animals

The clinical E. coli O78 strain, which was used in our previous study, was isolated from a broiler showing colibacillosis [23,48]. The quality control standard E. coli strain ATCC 25922 was purchased from the Chinese Veterinary Culture Collection. The enrofloxacin reference standard and ciprofloxacin (98% purity) were purchased from Solarbio Life Sciences Co. Ltd. (Beijing, China). Acetonitrile (ACN) and methanol (MeOH) were purchased from TEDIA (Fairfield, CT, USA). All reagents used in this experiment were of high purity.

Susceptibility testing

The MICs of enrofloxacin and ciprofloxacin against E. coli O78 and ATCC 25922 were determined using the micro-dilution method, according to the Clinical and Laboratory Standards Institute (CLSI) reference method [49]. The MIC of enrofloxacin in serum was also determined using the micro-dilution method according to a previous report [50]. To determine the minimum bactericidal concentration (MBC), 100 μL aliquots from the MIC determination procedure were diluted with Mueller-Hinton (MH) broth. The colony forming units (cfu) of each dilution were counted by spreading 100 μL of the dilutions on MH agar plates after 24 h of incubation at 37°C. The lowest concentration of enrofloxacin that killed 99.9% of the bacteria was defined as the MBC. The mutant prevention concentration (MPC) was determined according to a previous report [51]. Briefly, a series of MH agar plates containing different drug concentrations (1 MIC to 64 MIC) were inoculated with more than 10^10 cfu of E. coli and then incubated at 37°C for 72 h. The MPC was determined as the lowest drug concentration that prevented bacterial growth.

E. coli infection model

Preliminary experiments were conducted to confirm the inoculation amount. After a 6-day acclimation period, broilers were inoculated with 0.5 mL of E. coli culture containing ~10^7 cfu/mL through intramuscular injection (IM) in the chest muscle. The clinical symptoms and pathological changes were observed. Then, animals were sacrificed by a lethal intravenous injection of beuthanasia (0.3 mL/kg) after anesthesia with ketamine-xylazine. The bacterial loads in blood, liver, and lung were determined by the agar plate dilution method at 24 h post inoculation.

Pharmacokinetics

Enrofloxacin was administered orally to 280 infected broilers at doses of 1, 10, and 20 mg/kg body weight (b.w.). Ten blood samples were collected at each time point (0, 0.5, 1, 1.5, 2, 4, 6, 8, 12, and 24 h after administration).
After sampling, animals were narcotized with ketamine-xylazine and sacrificed by a lethal intravenous injection of beuthanasia (0.3 mL/kg). After incubation at room temperature, samples were centrifuged for 10 min at 3000×g to obtain serum. The serum was stored at −20°C until analysis. The concentrations of the drug in the serum were determined using HPLC with a fluorescence detector, as described previously, with some modifications [17,52]. Briefly, 0.1 mL of serum was added to 1 mL of ACN containing 0.1% acetic acid, vortexed for 3 min, and then centrifuged at 12000×g for 10 min. The supernatant was transferred to a clean tube, dried under nitrogen, and re-dissolved in 0.1 mL of 17% ACN. The sample was filtered through a 0.22-μm membrane before injection into the HPLC apparatus. The recovery rate was between 80.2 and 91.3%, and the intra- and inter-day coefficients of variation were less than 7%. The serum concentration-time data of enrofloxacin and ciprofloxacin were analyzed using the WinNonlin software (version 6.1; Pharsight).

Pharmacodynamics determination

Infected broilers (n = 54) were randomly divided into nine groups (n = 6 in each group) and treated with enrofloxacin for 3 successive days at doses ranging from 0 to 20 mg/kg b.w. (0, 1, 2, 5, 7.5, 10, 12.5, 15, and 20 mg/kg per day). At 24 h after the last dose, the broilers were humanely killed through a lethal intravenous injection of beuthanasia (0.3 mL/kg) after anesthesia with ketamine-xylazine, to collect blood, liver, and lung samples. The bacterial load in each organ was determined by plating dilutions onto MH agar and counting the colonies after incubation at 37°C for 24 h. The effectiveness of enrofloxacin was expressed as the bacterial reduction after treatment compared with that before treatment in each organ.

Pharmacokinetics and pharmacodynamics integration and modeling

The best PK/PD parameters for fluoroquinolones are AUC/MIC and Cmax/MIC; therefore, in the present study, we chose the AUC0-24/MIC approach to model the PK data and the PD data for enrofloxacin and its active metabolite ciprofloxacin. The sigmoid Emax model in the WinNonlin software (version 6.1; Pharsight) was used to simulate the relationship between the AUC0-24/MIC of enrofloxacin, or the combined AUC0-24/MIC of enrofloxacin and ciprofloxacin, and the in vivo effectiveness. The equation for this model was as follows:

E = E0 − (Emax × Ce^N) / (EC50^N + Ce^N)

In the above formula, E0 is the change in log10 cfu/mL or log10 cfu/g in the control sample (absence of drug); Emax is the difference in effect between the greatest amount of growth (as seen for the growth control, E0) and the greatest amount of killing; Ce is the tested AUC0-24/MIC ratio; EC50 is the AUC0-24/MIC value that produces 50% of Emax; and N is the Hill coefficient that describes the steepness of the AUC0-24/MIC-effect curve [52]. The in vivo antibacterial effects of enrofloxacin were quantified at three levels: (1) 1 log10 cfu/mL of killing (E = −1), (2) bactericidal action (99.9% reduction, E = −3), and (3) bacterial elimination (99.99% reduction, E = −4).

Data analysis

The PK data, PK/PD data, and PK/PD curve fitting were analyzed using the WinNonlin software (version 6.1; Pharsight). T-tests were conducted for other data using SPSS software (IBM, Armonk, NY, USA). P < 0.05 was considered statistically significant.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Authors' contributions

XX and ZW designed this study and revised and guided the experiment.
XX wrote this manuscript and participated in the whole experimental process; LJ managed the whole experiment and analyzed the data; LJ and WL participated in all the experiments; WL and YJ helped with the sampling process and concentration determination. All authors read and approved the final manuscript.

Ethics approval and consent to participate

The animals were maintained according to the National Standards for Laboratory Animals of China (GB 14925-2010). This study was approved by the Animal Experiments Ethics Committee at Yangzhou University (SYXK(Su) IACUC 2017-0045).

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.
2018-11-30T04:37:46.932Z
2018-11-29T00:00:00.000
{ "year": 2018, "sha1": "3c53573764e8565f5972a1a23ee9217a92d832b5", "oa_license": "CCBY", "oa_url": "https://bmcvetres.biomedcentral.com/track/pdf/10.1186/s12917-018-1698-3", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3c53573764e8565f5972a1a23ee9217a92d832b5", "s2fieldsofstudy": [ "Medicine", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
9596011
pes2o/s2orc
v3-fos-license
The HAC Trial (Harmonic for Acute Cholecystitis) Study. Randomized, double-blind, controlled trial of Harmonic(H) versus Monopolar Diathermy (M) for laparoscopic cholecystectomy (LC) for acute cholecystitis (AC) in adults Background In the developmental stage of laparoscopic cholecystectomy (LC) it was considered 'unsafe' or 'technically difficult' to perform laparoscopic cholecystectomy for acute cholecystitis (AC). With increasing experience in laparoscopic surgery, a number of centers have reported on the use of laparoscopic cholecystectomy for acute cholecystitis, suggesting that it is technically feasible but at the expense of a high conversion rate, which can be up to 35 per cent and common bile duct lesions. The HARMONIC SCALPEL(R) (H) is the leading ultrasonic cutting and coagulating surgical device, offering surgeons important benefits including: minimal lateral thermal tissue damage, minimal charring and desiccation. Harmonic Scalpel technology reduces the need for ligatures with simultaneous cutting and coagulation: moreover there is not electricity to or through the patient Harmonic Scalpel has a greater precision near vital structures and it produces minimal smoke with improved visibility in the surgical field. In retrospective series LC performed with H was demonstrated feasible and effective with minimal operating time and blood loss: it was reported also a low conversion rate (3.9%). However there are not prospective randomized controlled trials showing the advantages of H compared to MD (the commonly used electrical scalpel) in LC. Methods/Design Aim of this RCT is to demonstrate that H can decrease the conversion rate compared to MD in LC for AC, without a significant increase of morbidity. The patients will be allocated in two groups: in the first group the patient will be submitted to early LC within 72 hours after the diagnosis with H while in the second group will be submitted to early LC within 72 hours with MD. Trial Registration ClinicalTrials.gov Identifier: NCT00746850 Background In the developmental stage of laparoscopic cholecystectomy it was considered 'unsafe' or 'technically difficult' to perform laparoscopic cholecystectomy for acute cholecystitis [1,2]. With increasing experience in laparoscopic surgery, a number of centers have reported on the use of laparoscopic cholecystectomy for acute cholecystitis, suggesting that it is technically feasible but at the expense of a high conversion rate, which can be up to 35 per cent [3,4] and common bile duct lesions [5]. The HARMONIC SCALPEL ® (H) is the leading ultrasonic cutting and coagulating surgical device, offering surgeons important benefits including: minimal lateral thermal tissue damage, minimal charring and desiccation. H technology reduces the need for ligatures with simultaneous cutting and coagulation: moreover there is not electricity to or through the patient H has a greater precision near vital structures and it produces minimal smoke with improved visibility in the surgical field. [6] In retrospective series LC performed with H was demonstrated feasible and effective with minimal operating time and blood loss: it was reported also a low conversion rate (3.9%). [6] However there are not prospective randomized controlled trials showing the advantages of H compared to Monopolar Diathermy (MD -the most commonly used electrical scalpel) in LC. Aim of this RCT is to demonstrate that H can decrease conversion rate compared to MD in LC for AC, without a significant increase of morbidity. 
Methods

The study is a prospective, randomized investigation. The study will be performed in the Department of Emergency Surgery of St Orsola-Malpighi University Hospital (Bologna, Italy), a large teaching institution, with the participation of all surgeons who agree to be involved. The Ethics Committee of St. Orsola-Malpighi University Hospital approved the study protocol on June 24th, 2008. The study and its Informed Consent form have been judged by the Committee to be ethically and scientifically satisfactory, as well as correct and adequate to the aims. The patients will be allocated to two groups: in the first group the patient will be submitted to early LC within 72 hours after the diagnosis with H, while in the second group the patient will be submitted to early LC within 72 hours with MD. The randomization will be obtained through a computer-generated schedule. Blocked randomization is used to ensure close balance of the numbers in each group at any time during the study. The results of this randomization will be sealed in numbered envelopes, with a cardboard sheet inserted inside to ensure that they are opaque. After the diagnosis of cholecystitis, if the patient fulfils the inclusion criteria, the responsible surgeon will ask the patient to participate in the study. If the patient agrees, he/she will sign the informed consent. After the patient's consent, the randomization will be carried out. The responsible surgeon will record the patient's name (and number) [7-9]. All eligible patients will be recorded [10].

Statistical Methods and Power Calculations

Using the StatCalc module of the Epi Info 2000 software package (Centers for Disease Control and Prevention, Atlanta, GA, USA), a sample size of 21 patients for each group (42 patients for the whole study) has been calculated to reach a confidence level of 95% with a power of 80%, supposing that with H the conversion rate for LC can be reduced from 35% to 3%. Analysis of data is by intention-to-treat. It will be specified when and how any deviations from randomised allocation, false inclusions, or missing outcomes have been handled. Repeated measures of the outcome will be analysed as complete case analyses, potentially excluding individuals with only partial follow-up data. Any missing outcomes will be imputed using baseline values or by assuming that all missing participants have the same risk as the observed participants in the control group. Data are expressed as numbers (%) and means (SD). The results of the two groups in comparison are analyzed using Pearson's chi-square test and Fisher's exact test, as appropriate, for proportions in the case of discrete data. Fisher's exact test is used when the data are very unequally distributed among the cells of the table, the expected frequency of any cell is less than 5, or the total N is less than 50. For means in the case of continuous numerical data, the independent-samples t test and the Mann-Whitney U-test are used, respectively, for normally and non-normally distributed data. The data are previously tested for normality by the Kolmogorov-Smirnov test. For multivariate analysis, stepwise logistic regression is applied. A p-value of < 0.005 is in any case considered to be statistically significant. Forward selection is used as the variable selection method; it includes all the baseline variables, and all baseline variables are assessed for imbalance. The final model only contains variables with a p-value less than 0.05.
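The sample size quoted above comes from the StatCalc module of Epi Info; a comparable calculation for two independent proportions can be sketched with the usual normal-approximation formula, as below. The exact number obtained (around 21-23 per arm for 35% vs. 3%, 80% power, two-sided alpha = 0.05) depends on the approximation and continuity correction used, so small differences from the StatCalc output are expected.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for comparing two independent proportions."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(num / (p1 - p2) ** 2)

print(n_per_group(0.35, 0.03))   # expected conversion rates: 35% with MD vs. 3% with H
```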
Imbalances of baseline variables are defined comparing the groups using t tests for continuous variables and χ 2 tests for categorical variables. The adjustments are based upon the stratified logrank test or analysis of covariance. [11][12][13]The primary outcome is the conversion rate. Secondary outcomes are: intra-operative and post-operative morbidity and mortality, operative time and intraoperative blood losses. All the secondary outcomes, including morbidity and mortality, operative time and blood losses, will be controlled as well for imbalances, (i.e. age, previous comorbidities, anticoagulant therapies, previous abdominal surgery). • Patients with an intra-operative findings of different pathology will be excluded from the study • Apache II score > 10 Intervention Preoperative data collected will include patient demographics and comorbidity conditions (genitourinary, cardiac, pulmonary, gastrointestinal, renal, or rheumatologic) and a detailed history of symptom onset. The procedure was performed by a surgeon that had performed at least 50 LCs. On admission, the patients were started on cefotaxime, 2 g IV every 12 h, which was continued postoperatively according to NNISS score. The standard four-trocar operative technique is used for LC for acute cholecystitis. When the gallbladder is distended it will be first aspirated. To allow a good hold on the gallbladder larger graspers will be inserted through a 5 mm right lower port. The cystic artery and duct are clip-ligated in the MD group whereas in the H group cystic artery and duct are closed by H. In the H group the surgeon will use only H, whereas in the MD group the surgeon will use only MD. The gallbladder and intraperitoneal "dropped" stones are collected in an endoscopic bag and extracted through the umbilical cannula site, which can be extended. A closed system suction drain is left. Fascial closure is attempted only at the umbilical cannula site. The skin at all the cannula sites are closed with staples. Conversion to laparotomy will be decided by the operating surgeon and each conversion will be motivated. Data Collection Patients' data sheets are generated containing demographic data and preoperative, operative, and postoperative information. Pre-operative notes concern the history of gallbladder stones, the presence of associated diseases (cardiac, hypertension, diabetes, malignancy), duration of gallbladder complaints (as an indication for the onset of the disease), finding of a palpable gallbladder, temperature, and laboratory results of WBC count, serum bilirubin, gamma GT, CRP, IL-6 and alkaline phosphatase. Ultrasound findings are also reported. Operative data of concern are macroscopic findings (of acute cholecystitis, gangrenous cholecystitis, hydrops, and empyema of the gallbladder), the presence of small stones (< 1 cm diameter) or large bile stones (> 1 cm diameter), information regarding perforation of the gallbladder and intraperitoneally "lost" stones, reasons for conversion, and duration of surgery. Postoperative notes of interest included the use of nasogastric tubes and drains, the amount of analgesics used, (evaluation of pain with VAS score), complications, and length of hospital stay. Complications are classified as surgical infections (wound infection, subphrenic or subhepatic abscess); noninfectious surgical problems (e.g., bile duct injury, hemorrhage); remote infections (urinary or respiratory); and miscellaneous problems (e.g., atelectasis, deep vein thrombosis, AMI, CVA, etc). 
The collected information is entered into a database as either continuous or categorical variables for statistical analysis. Following the operative procedure, a normal sterile dressing will be applied to cover the abdomen. A second surgical team, aware of the operative findings but not of the surgical dissection instrument, will then assume the care of the patient. Postoperative care and the ability to be discharged from the hospital will be determined by the second surgical team. The primary operative team will be available at all times for emergent consultation. Patient discharge will be based on good medical practice criteria: 1) apyrexia; 2) absence of diseases requiring hospitalisation; 3) return of bowel function; 4) patient's compliance. No placebo drugs are used in this study. Every patient will be asked to sign the Informed Consent. In the informed consent form, patients will receive all the information about the study protocol and the confidential nature of personal data, and will fill out a questionnaire before signing or refusing. There will be no inconvenience caused to the patients. No incentives are planned for the patients regarding the operation or the follow-up. All the medical information obtained from the patients will be kept confidential among the research scientists conducting the study. The patients will be free to withdraw from the study whenever they want, without any obligation. The study will be stopped in case of newly discovered statistically significant advantages in one group. The aim of the study is to demonstrate that H can reduce the conversion rate compared to MD in LC for AC, but differences in terms of morbidity, mortality, operation time, hospital stay, postoperative pain, and return to normal activity will also be evaluated. The primary endpoint of our study will be the conversion rate; secondary endpoints will include intra-operative and post-operative morbidity and mortality, operative time, intra-operative blood losses, hospital stay, postoperative pain, and return to normal activity. The onset of any other complications will be recorded intraoperatively, postoperatively, at discharge, and at 7 days, 1 month, and 6 months. The side effects that could be observed are not expected to be substantially different between the arms of the study. All the above-mentioned data will be recorded in the Case Report Form and later stored in a computer database. At the end of the study the final statistical examination will be carried out. An interim statistical examination of the data will be done every 3 months during the period of patients' inclusion in the study, and then at the end of every completed follow-up period (1 month, 6 months). The statistical analysis will be carried out using Epi Info software. No incentives are planned for the patients regarding the operation or the follow-up. The study will take approximately 6 months to 1 year for the inclusion period. Depending on the number of AC cases managed monthly, the duration of the inclusion period may be approximately 1 year to reach the target of about 42 enrolled patients. An interim report is planned at the end of every completed follow-up period.

Discussion

AC is a common disease. Any improvement in this field will benefit many patients by reducing morbidity, mortality, conversion rate, operation time, hospital stay, and postoperative pain, and by improving return to normal activity and aesthetic results. All our patients will be informed about the study and an informed consent will be obtained. There will be no inconvenience caused to the patients. All the medical information obtained from the patients will be kept confidential among the research scientists conducting the study. The patients will be free to withdraw from the study whenever they want, without any obligation.
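For the primary endpoint, the analysis plan above calls for Fisher's exact test whenever expected cell counts are small, which will almost certainly be the case with 21 patients per arm. The snippet below shows that comparison on purely hypothetical conversion counts (not trial results), using SciPy in place of Epi Info.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = treatment arm, columns = (converted, not converted)
harmonic = [1, 20]    # 1/21 conversions in the H arm (illustrative only)
diathermy = [7, 14]   # 7/21 conversions in the MD arm (illustrative only)

odds_ratio, p_value = fisher_exact([harmonic, diathermy], alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.3f}")
```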
2018-04-03T03:50:48.595Z
2009-05-26T00:00:00.000
{ "year": 2009, "sha1": "163bbb9636830cccbaead0821a3716fcde1984dd", "oa_license": "CCBY", "oa_url": "https://trialsjournal.biomedcentral.com/track/pdf/10.1186/1745-6215-10-34", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f0507b6d81d79818a9174d5b9f0eff570b24a012", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
117086402
pes2o/s2orc
v3-fos-license
A lower bound on the orbit growth of a regular self-map of affine space We show that if $f : \mathbb{A}_{\bar{\mathbb{Q}}}^r \to \mathbb{A}_{\bar{\mathbb{Q}}}^r$ is a regular self-map and $P \in \mathbb{A}^r(\bar{\mathbb{Q}})$ has $\limsup_{n \in \mathbb{N}} \frac{\log{h_{\mathrm{aff}}(f^nP)}}{\log{n}}<1/r$, where $h_{\textrm{aff}}$ is the affine Weil height, then $\mathbb{N}$ partitions into a finite set and finitely many full arithmetic progressions, on each of which the coordinates of $f^nP$ are polynomials in $n$. In particular, if $(f^nP)_{n \in \mathbb{N}}$ is a Zariski-dense orbit, then either $n = 1$ and $f$ is of the shape $t \mapsto \zeta t + c$, $\zeta \in \mu_{\infty}$, or else $\limsup_{n \in \mathbb{N}} \frac{\log{h_{\mathrm{aff}}(f^nP)}}{\log{n}} \geq 1/r$. This inequality is the exponential improvement of the trivial lower bound obtained from counting the points of bounded height in $\mathbb{A}^r(K)$. 1. Introduction 1.1. In the appendix to our preprint [5] we formulated a precise conjectural criterion for the algebraicity of a formal function on a projective curve over a global field. For the case of the projective line, this criterion generalizes simultaneously the classical Pólya-Bertrandias criterion (cf. Amice [1], Ch. 5), on the one hand, and a conjecture of I. Ruzsa [7,6,10,4], on the other hand. In [5] (3.2 in loc. cit.) we proved a weak variant of this conjecture with the purpose of applying it to a case of a generalization of the "Hadamard quotient theorem" to higher genus and positive equicharacteristic (Theorem 1.7 in loc. cit.). In the present note we prove another weak variant, and apply it to obtain a lower bound on the growth of an orbit under a regular iteration of affine space. 1.2. We need to settle some notation before we can state our result. In what follows we denote by h(·) :Q × → R ≥0 the absolute logarithmic Weil height, whose definition we recall next. For p a finite rational prime let | · | p by the p-adic absolute value on Q normalized by |p| p = 1/p, while for p = ∞ we consider the ordinary (archimedean) absolute value. For v a place a number field K lying over the place p of Q, and for x ∈ K, let This definition is independent of the choice of the number field K. Viewing A r (Q) as the affine piece [1 : x 1 : . . . : x r ] in P r (Q), we consider the affine height h aff (α 1 , . . . , α r ) := h([1 : α 1 : · · · : α r ]). Finally, we write as usual For a polynomial F ∈ K[x] in several variables over K, we write h(F ) for the height of its set of coefficients, viewed as a point in a projective space. For F = n≥0 a n t n ∈ K[[t]], let F /N := N n=0 a n t n ∈ K[t] be the polynomial truncation modulo t N +1 . 1.3. To motivate our result we make the following trivial observation. If a point P ∈ Z r has infinite orbit under a set-theoretic mapping f : Z r → Z r , then H n := max 0≤i≤n exp(h(f i P )) satisfies (2H n + 1) r > n, whence, in the limit, lim sup n∈N h aff (f n P ) log n ≥ 1/r. The result of this note is that when the mapping f has an algebraic structure, and outside of a degenerate situation, this trivial inequality can be improved exponentially. Then there is a d ∈ N and polynomials p 0 , . . . , The following is an immediate corollary, obtained by taking λ to be the coordinate projections. Corollary 1.5. 
In the setup of Theorem 1.3, either the Zariski closure of the orbit (f n P ) n∈N is a union of rational curves in A r , or else Another corollary arises in taking f to be a recurrence of the form and λ the projection onto the last coordinate. We will derive Theorem 1.4 from a criterion for the rationality of a formal function on the projective line, which we formulate and prove in the next section. Results of this type date back to the short paper [9] by U. Zannier. A rationality criterion In what follows, K is a number field and S a finite set of places of K including all archimedean places. The following is a particular case of the conjecture formulated in the appendix to [5]. then the power series F ∈ K(t) is rational. This conjecture is sharp, in the sense that there are uncountably many power series F for which the quantity on the left-hand side of (1) is zero. It extends an old conjecture of I. Ruzsa [7], and appears intractable to our current techniques. In this section we prove the following crude variant. Proposition 2.2. Let η > 0. In the setup of Conjecture 2.1, assume instead that the inequality holds for all n ≫ 0. Then F ∈ K(t) is rational. Proof. The proof is a variant of that presented to the algebraicity criterion 3.2 in [5]. Without loss of generality we may assume η to be rational. Letting L be a large integer parameter such that ηL ∈ Z, Siegel's lemma (see 3.1 in [5] and the references therein) for M := L equations in N := (1 + η)L unknowns yields polynomials P ∈ K[t] and Q ∈ K[t] \ {0} of degrees less than (1 + η)L such that and It follows from (2) and (4) that there is an L < ∞ such that all n ≥ (2 + η)L satisfy 1 We claim that then F = P/Q, hence F is rational. Assuming otherwise, let n be the minimum integer such that Q(t)F (t) − P (t) ≡ 0 mod t n ; by construction, n > (2+η)L. Consider a prime s of K with h s < n/(2+η), and write F s := A s /B s the reduction at s, with A s , B s ∈ k(s)[t], deg A s , deg B s < n/(2 + η). Then, denoting by a tilde the reduction at s, we have A s (t) Q(t) − B s (t) P (t) ≡ 0 mod t n−1 . The degree of this polynomial is less than (1 + η)L + n/(2 + η), which by our assumption n > (2 + η)L does not exceed n − 1. It follows that the polynomial is identically zero, hence the coefficients of Q(t)F (t) − P (t) all reduce to zero at s. On the other hand, we have assumed that t n appears with a non-zero (This is just the coefficient of t n in Q(t)F (t).) Thus the product formula yields At the places corresponding to the primes s with h s < n/(2+η), the previous paragraph shows that the contribution to (6) To estimate h(c) from above, we use the easy bound applied to the sum defining c as the coefficient of t n in Q(t)F (t). We obtain: Taken together with (7) this contradicts (5), thus forcing F = P/Q as claimed. Proof of Theorem 1.4 There is a number field K and a finite set S of its places, including all archimedean places, such that the triple (f, λ, P ) has a model over O K,S . We will apply Proposition 2.2 to the formal power series Φ := n≥0 λ(f n P )t n ∈ O K,S [[t]]. If the power series Φ is rational, the conclusion of Theorem 1.4 follows from the explicit descriptions of coefficients of rational power series as confluent power sums. If Φ is not rational, Proposition 2.2 with η := 1 implies the lower bound inequality for infinitely many n ∈ N. 
On the other hand, for s ∉ S a prime of K, the iteration f : A^r_{O_{K,S}} → A^r_{O_{K,S}} reduces mod s to an iteration f mod s : A^r_{k(s)} → A^r_{k(s)} over a set with |k(s)|^r elements, hence h_s(Φ) ≤ 2|k(s)|^r. Consequently, by the prime number theorem, (9) yields h(Φ_{/n}) ≥ (1/3)(n/6)^{1/r} for arbitrarily large n ∈ N. We have h(Φ_{/n}) ≤ |S| max_{j ≤ n} h(λ(f^j P)), and the conclusion of the theorem follows from (10).

Two conjectures

We end this note by recording two conjectures related to the setup of Theorem 1.4. Problems of this type have been posed by J.H. Silverman in [8].

Conjecture 4.1. Let X be a complex projective variety, f : X ⇢ X a rational self-map, λ : X ⇢ P^1 a non-constant rational function, and P ∈ X(C) a point with well-defined forward orbit. Then the set {n | λ(f^n P) = 0} ⊂ N_0 is the union of a finite set with finitely many full arithmetic progressions.

Conjecture 4.2. Let A/Q be an abelian variety and λ : A ⇢ P^1 a non-constant rational function. Consider a point P ∈ A(Q). If h(λ([n]P)) = o(n^2) then there is a surjective homomorphism A ↠ B to an abelian variety mapping P to a torsion point.
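The quantity studied in Theorem 1.4, lim sup_n log h_aff(f^n P)/log n, is easy to experiment with numerically for orbits of integer points, where the affine Weil height reduces to log max(1, |x_1|, ..., |x_r|). The Python sketch below is only an illustration of that quantity on two toy self-maps of A^2 (a degree-2 map, for which the ratio grows without bound, and a translation, for which it tends to 0 and the coordinates of f^n P are indeed polynomials in n); it plays no role in the arguments of this note.

```python
from math import log

def h_aff(point):
    """Affine Weil height of an integer point: log max(1, |x_1|, ..., |x_r|)."""
    return log(max(1, max(abs(x) for x in point)))

def orbit_ratios(f, p, n_max):
    """Pairs (n, log h_aff(f^n P) / log n), skipping values where the ratio is undefined."""
    out = []
    for n in range(1, n_max + 1):
        p = f(p)
        h = h_aff(p)
        if n > 1 and h > 0:
            out.append((n, log(h) / log(n)))
    return out

quadratic = lambda p: (p[1], p[0] + p[1] ** 2)   # degree-2 regular self-map of A^2
shift = lambda p: (p[0] + 1, p[1] + 1)           # translation: coordinates of f^n P are polynomials in n

print(orbit_ratios(quadratic, (2, 3), 20)[-1])   # ratio keeps growing, far above 1/r = 1/2
print(orbit_ratios(shift, (2, 3), 10 ** 5)[-1])  # ratio drifts toward 0, below 1/r, as Theorem 1.4 permits
```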
2013-11-17T09:16:18.000Z
2013-11-17T00:00:00.000
{ "year": 2013, "sha1": "f887b96f4ccced920e39ed0d650a5074ff501a0d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "f887b96f4ccced920e39ed0d650a5074ff501a0d", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
55898534
pes2o/s2orc
v3-fos-license
Feasibility Study on Solar Power Plant Utility Grid under Malaysia Feed-in Tariff : In Perlis, Northern Malaysia, a solar power plant with an energy capacity of 5 MWp began selling energy to Tenaga Nasional Berhad in January 2013. Upon obtaining Feed-in Tariff approval from the Sustainable Energy Department Authority of Malaysia, the power plant will produce energy with a Feed-in Tariff of RM 0.874 for every kWh for 21 years according to the Renewable Energy Power Purchase Agreement. However, the output of solar plants is unpredictable. Investors commonly estimate the output of solar PV power generation from simulation results based on irradiation data proposed in simulations. However, estimates of potential solar generation from simulated analyses may not be accurate and thus exert negative financial impacts to investors. Therefore, comparing estimated output results from simulations with the actual output from solar PV power generation is important. The aim of the present study is to identify the error between the simulation and actual performance of solar PV power generation over the twelve months of 2013. Sensitivity analysis of Feed-in Tariff degression was also performed to study the impact the performance error on the economic aspect of energy generation. Introduction The environment and energy crises are major issues worldwide.The release of greenhouse gases, especially CO2 from power generation using coal, gas and steam, depletes the ozone layer and creates more pollution.Shortages in fossil fuels and increasing energy demands also contribute to the energy crisis.The world depends on fossil fuels as primary sources of energy.Some projections indicate that the global energy demand will triple by 2050 (MP, 2011).Globally in 2010, 86.9% of the total energy usage involved fossil fuels, where 33.5% of the energy usage was from crude oil, 23.8% was from natural gas and the remaining was from coal.On the other hand, 5.22% was from nuclear energy, 6.46% was from hydroelectric energy and 1.3% was from renewable energy (BP, 2011). In Malaysia, the Primary Energy Demand (PED) is represented by natural gas 60.3%, coal 30.4%, hydroelectric power 5.4% and crude oil 1.1% (Daud, 2010).Major concerns in using fossil fuels as primary energy have been expressed.GHG emissions and fossil fuel depletion, for example, are among the major issues in fossil fuel usage.In Malaysia, researchers estimate that 339 million tons of CO2 will be produced by 2030.According to the APEC (2009), the energy sector is the biggest contributor of CO2 emissions (GHG, 42%) to the atmosphere at, followed by the transportation sector (28%) and the industrial sector (20%).The Malaysian government has grown increasingly concerned about GHG emissions and fossil fuel depletion.When the energy demand increases, GHG emissions and the demand for fossil fuels increase accordingly.In 2009, Malaysia introduced the National Green Technology Policy, which is designed to mitigate issues of security, energy efficiency and environmental impact while addressing the rising energy demand. Given that environmental protection concerns are increasing worldwide, both new energy and clean fuel technologies are being intensively pursued and investigated.Most renewable energy sources, including wind, microhydro, tidal, geothermal, biomass and solar, are converted into electrical energy for delivery either directly to the utility grid or to isolated loads (Canale et al., 2009;Mercure and Salas, 2012). 
Solar energy technologies, including solar heating, solar Photovoltaic (PV) cells, solar thermal electricity and solar architecture can make significant contributions toward solving some of the most pressing energy problems currently faced by the world.PV technology has been proven to generate electricity from solar energy easily.Malaysia is a country in which renewable energy is widely promoted because of its tropical location.The country receives an average solar irradiation of 400-600 MJ/m 2 per month and thus has potential for establishing large-scale solar power generation plants (Mekhilef et al., 2012). The solar potential and Feed-in Tariff (FiT) rate offered by the Malaysian government to energy producers is expected to attract more investors to the renewable power generation sector, which in turn will help the government achieve its objective of promoting clean technology.The Malaysian government is keen to develop solar energy as a significant source of energy in the country.According to the 10th National RE Goals, a large allocation of funds has been allocated to the implementation of solar PV systems, as shown in Fig. 1. The size of a power supply system exerts an important function when generating electricity in farflung areas at a reasonable price.PV systems and other renewable energy systems are excellent options for producing energy at low-to-medium power levels in remote areas because of easy scaling of the input power source (Chamboulegron, 1986;Enslin, 1990).The main feature of PV systems is that they can produce electric power without harming the environment.PV systems achieve this by directly transforming a free inexhaustible source of energy, namely, solar energy, into electricity. Continuing decreases in the cost of PV arrays and increases in their efficiency also promote the use of PV generating systems (Zweibel, 1990;Hussein et al., 1995). PV systems may be categorized as a grid-connected PV systems, stand-alone PV systems, or building systems.In grid-connected applications, DC power from solar cells runs through an inverter, which feeds the power back into the distribution system.Grid-connected systems have proven their worth in natural disasters by providing emergency power capabilities during utility power interruptions.While the PV power generated by this type of system is generally more expensive than utility-provided power, the use of grid-connected systems is increasing (Penick and Louk, 1998;Quesada et al., 2011).Various technologies, such as silicon PV (crystalline silicon and nanocrystalline), thin film solar cells (amorphous silicon, cadmium telluride, gallium arsenide and copper indium gallium deselenide) and concentrated PV (multifunction cells), are used to produce PV cells.Of these technologies, researchers have found that multicrystalline outputs and monocrystalline cells have the highest output range (between 12 and 17%) (Sick and Erge, 1996). 
PV systems are known to present varying seasonal patterns that depend on temperature and solar irradiation.PV system outputs vary because of the different temperature coefficients of the voltage and current output.To simplify the work of manufacturers, PV modules are rated at standard test conditions of a solar irradiation of 1000 Wm 2 , fixed spectrum and sun-spectrum at air mass of 1.5 (AM = 1.5).The electric power generated by a solar PV array fluctuates depending on the operating conditions and a number of field factors, such as the sun's geometric location, irradiation levels and ambient temperature (KeTTHA, 2011).The study initially reviewed the performance of previous solar PV power generation systems but found that a large number of studies on this topic and a detailed review are beyond the scope of this research.Mondal and Sadrul Islam (2011) conducted a case study in Bangladesh where they identified the potential location of grid-connected solar PV in 14 districts.The study analyzed the feasibility of 1 MWp solar PV using HOMER optimization software.However, this study was discontinued because of very high investment costs.Thus, a comparison based on actual performance and the results of simulation analysis could not be performed to confirm findings. Another solar PV performance analysis was conducted by Sharma and Chandel (2013).Performance analysis of a 190 kWp solar PV power plant installed in Khatkar-Kalan, India, was carried out and simulation estimations were found to be in close agreement with the actual measured results with an uncertainty of 1.4%.This estimation was performed using PVSYST software.Detailed analysis of the plant's economic feasibility, however, was not performed. To promote green energy, many countries have introduced the concept of Feed-in Tariff (FiT) for renewable energy.The FiT scheme has been proven successful in accelerating RE deployment, reducing carbon emissions and creating jobs in different countries, such as Germany, Italy, Spain and Thailand (Chua et al., 2011).FiTs are RE payments of electricity in kilowatthour (kWh).FiT promotes exportation of electricity as a form of investment.The concept of FiT, according to the Ministry of Energy, Green Technology and Water, obliges Distribution Licensees (DLs) to buy the electricity produced from renewable sources from feedin approval holders and sets a feed-in rate.DLs thus pay for each unit of renewable electricity supplied to the grid for a specific duration (GPS, 2012). FiT was introduced to encourage more people to invest in renewable energy sources.The introduction of FiT is set to change Malaysia's electricity production through RE.Domestic and industrial users will be able to generate renewable electricity (i.e., solar power) through RE and sell it back to the national power grid at a premium rate (NEB, 2008).Solar PV power generation is still new in Malaysia and thus requires a large initial investment. 
Therefore, feasibility studies focusing on the economic impact of the initial costs of solar PV power are necessary.The first solar power plant under the FiT scheme was inaugurated on March 28, 2012 by Cypark Resources Berhad.The power plant had a total investment of RM 100 million and a capacity output of 8 MWp (GPS, 2012).With the successful construction of this solar power plant, the Malaysian government achieved a significant milestone in promoting clean energy in the country.Given that the development of large solar power sources in Malaysia requires large investments and that solar generation power plants under the FiT scheme are a new concept, prospective investors were initially doubtful.The fact that case planning-based simulations were not completely parallel with actual expectations also caused doubt.In the present study, we analyzed differences between simulation analysis results and actual production to identify sources of discrepancies.By identifying the difference percentage, investors are likely to feel more confident about investing in the renewable energy generation sector of Malaysia.The study objective is to determine the technical and economic feasibility of solar power plant generation in Malaysia. Methodology In this study, the solar PV power plant, located in Perlis, Malaysia (latitude 6°24' North and longitude 100°8' East), uses poly-crystalline panels, as shown in Fig. 2.An install capacity of 5 MWp was used as the case study.Information for economic analysis, such management planning stage and price of materials (e.g., solar PV modules, inverters and land acquisition), was gathered from a consulting solar PV company (Mega Jati Consult Sdn.Bhd).The FiT rates in Malaysia are shown in Table 1. Several design tools to model and simulate the performance of PV power generation systems are available in the market.The present study was implemented so that actual results can be compared with the analysis of simulation software.The study also presents an overview of the extent of GHG emissions and economic impact of a degression of 8% per year of FiT, as offered by Malaysia.By studying FiT degression, an indicator of future investments in RES in Malaysia may be obtained to ensure that the objectives of the country are achieved. Simulation software has been developed to estimate the performance of a solar power plant and assist system designers and installers (Henrion et al., 2013).HOMER is a computer model that simplifies the task of evaluating design options for both off-grid and grid-connected power systems for remote, stand-alone and distributed generation applications (Homer Energy).Three main tasks can be performed by HOMER: Simulation, optimization and sensitivity analysis.HOMER was developed by the United States National Renewable Energy Laboratory and is specifically designed to analyze and optimize renewable energy industry systems, as applied by (Mondal and Sadrul Islam, 2011;Bhattacharjee and Dey, 2014). HOMER is mostly used for studying, sizing and analyzing the performance of PV grid-connected systems.It is useful for determining several aspects in terms of finance, such as net present value and cost of energy.The variation of input parameters, such as FiT or solar radiation can be performed to identify the different outputs for each input.In this study, HOMER was used to predict the total monthly and yearly energy output of a 5 MWp grid interactive component of the PV plant. 
The study is divided into two parts: The technical analysis and the economic analysis.The difference between the actual and simulated PV outputs of the solar plant was examined and analyzed to identify the error.Cost analysis refers to the total system cost, which includes initial and maintenance costs.The annual cost was calculated to determine the life cycle cost and payback period.All costs were based on real prices used in the development of the solar power plant. Measurement of Actual Data The actual data, such as solar radiation and energy production, were collected based on site measurement to compare the actual performance of the system against HOMER simulation.The solar PV power plant was equipped with the latest Supervisory Control and Data Acquisition System (SCADA), as shown in Fig. 3. Parameters such solar radiation and electrical energy production were recorded and averaged at 15 min intervals by the data logger.The recorded data were stored in a computer system using an RS232/RS485 peripheral interface with a SCADA communicator through a computer. Simulation Data Solar Radiation Solar radiation is radiant energy that can be converted into electrical energy by solar PV panels.This parameter input was important to ensure the success of solar PV power generation because every location on earth receives different amounts of solar radiation.In HOMER, the data were generated by adding the latitude and longitude of the solar plant location. Modeling of the Solar PV Power Generation In this study, the SEDA-approved grid connected to the solar PV system was 5 MWp.All of the energy produced by the solar PV power plant was injected into the grid system without any limit from the utility provider (Tenaga Nasional Berhad).HOMER simulation was based on the maximum output and all electrical instruments, such as the solar PV and inverter, were calibrated accordingly to meet the maximum output (Fig. 4). Solar PV Array The kilowatt (kW) capacity was determined using HOMER.Based on a maximum output of 5 MWp, the size of the solar PV was 5 MW (Fig. 5).In terms of economic analysis, the capital cost for the solar PV was RM 38,550,048.00.The projected cost covered all components except for the inverter.Other parameters considered in this study were slope degree, derating factor, ground reflector and solar PV lifetime; these parameters were proposed by HOMER. The cost of replacement was assumed to be the same as the initial costs.Another parameter to be considered is Operation and Maintenance (O& M), which is reflected by the salary of one Electrical Charge man amounting to RM 48,000.00 per year. Inverter Model Given that the output signal from solar PV is DC current, installing the inverter to change the DC input to AC output before entering the grid system was necessary.Based on the maximum output from solar PV (5 MW), the inverter capacity was expected to be the same.The proposed lifetime of the inverter is 15 years.Another parameter proposed by HOMER was efficiency, which is 90%, as shown in Fig. 6.The cost for this inverter was RM 3,029,400.00. Grid Model HOMER can be used to model the system used in grid-connected or stand-alone systems.The capacity of this plant was above 1 MW and could reach 10 MW.The power plant began to sell energy on January 2013.The FiT approved by SEDA was RM 0.874, as per Table 2. To secure investments, all of the energy produced by the solar PV plant was allowed to enter the grid without any limit to maximize the income from solar generation. 
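As a rough plausibility check on the modeled array described above (5 MWp, with a derating factor and tilted-plane irradiation among the HOMER inputs), the annual energy can be approximated with the usual peak-sun-hours shortcut shown below. This is not how HOMER's hourly simulation works, and the derating factor and daily irradiation values used here are illustrative assumptions rather than the study's actual inputs.

```python
# Back-of-envelope annual yield estimate for a fixed-tilt PV array (not HOMER's hourly model)
capacity_kwp = 5000       # installed capacity (kWp)
peak_sun_hours = 5.3      # assumed average daily irradiation on the array plane (kWh/m2/day)
derating = 0.72           # assumed overall derating (temperature, soiling, wiring, inverter losses)

annual_kwh = capacity_kwp * peak_sun_hours * derating * 365
print(f"~{annual_kwh / 1e6:.2f} GWh/year, ~{annual_kwh / capacity_kwp:.0f} kWh/kWp")
```

With these assumptions the estimate lands near 7 GWh/year (about 1390 kWh/kWp), which is the right order of magnitude for the simulated and measured yields reported in the results below.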
Analysis of GHG Reduction

HOMER can simulate the GHG reductions of the solar PV plant. In this study, the emission settings proposed by HOMER were used. Emission factors for grid power were set by HOMER as 632 g/kWh for CO2, 2.74 g/kWh for SO2, and 1.34 g/kWh for N2O. Based on the GHG simulation results, these values were analyzed against the CO2 reduction target proposed by the Malaysian government, which is 145,000,000 tonnes by the year 2030 (Mendoca et al., 2009).

Economic Analysis

Economics plays a crucial role in the development of large-scale solar power plants because of the complex interplay of incentives and various supply-demand factors that affect the owner. The major components of the system cost include the price of the solar panels, the inverter, and the infrastructure work. To analyze the economic implications, the total investment required to create a solar PV power plant must be determined. To ensure that our economic analysis reflects Malaysian economic conditions, actual costs must be obtained and real figures must be presented to investors. In the construction of solar power plants, the material cost of solar panels and inverters is not the only expense. Costs involved in mounting the structure, the transformer and high-tension switchgear, as well as land acquisition or management costs, must also be considered. The Malaysian Ringgit (1 MYR = 0.30 US Dollar) was used as the main currency for this study. The total actual cost obtained from the investors was RM 41,579,448.10. Detailed prices are shown in Table 3 and Fig. 7. The costs and benefits of the proposed solar PV system throughout its lifetime were analyzed and assessed by HOMER using the following financial indicators.

Net Present Value (NPV) represents the life cycle cost of the system. The calculation assesses all costs occurring within the project lifetime, including initial set-up costs, component replacements within the project lifetime, maintenance, and fuel. Future cash flows are discounted to the present. HOMER calculated the NPV according to the following Equation 1:

NPV = TAC / CRF(i, N)    (1)

where TAC is the total annualized cost ($) (the sum of the annual costs of each system component). The Capital Recovery Factor (CRF) was calculated as:

CRF(i, N) = i(1 + i)^N / ((1 + i)^N − 1)    (2)

where N is the number of years and i is the annual real interest rate (%). Cost of Energy (CoE) is the cost of generating electricity. The Life Cycle Cost (LCC), unit cost, and payback period criteria were used, and the LCC was obtained using the formula below:

LCC = Ccapital + CO&M + Creplacement − Csalvage    (3)

The capital cost (Ccapital) of a project includes the initial capital for the equipment, system design, and installation. Ccapital is usually considered as a single payment made in the first year of the project. The maintenance cost (CO&M) is the sum of all O&M costs incurred yearly. Examples of O&M costs include operator salary, inspections, insurance, and property tax. The replacement cost (Creplacement) is the sum of the equipment replacement costs incurred over the lifetime of the system. A good example of a Creplacement expense is the battery, which requires replacement once or twice during the entire lifetime of the system.
These costs normally occur at specific predicted years, and the entire cost is often covered by the predicted yearly expenses. Finally, the salvage value (Csalvage) of a system refers to its net worth in the final year of the life-cycle period. Assigning a Csalvage of 20% of the original cost for mechanical equipment that can be moved is a common practice. The Csalvage can be modified depending on other factors, such as obsolescence and equipment condition [46]. After calculating the LCC, the unit CoE produced by the system can be calculated as follows:

CoE = (LCC × CRF(i, N)) / Eannual    (4)

where Eannual is the annual electrical energy delivered (kWh). The material cost of solar panels and inverters is not the only expense encountered during the construction of solar power plants. Costs involved in construction, such as electrical equipment (step-up transformers, high-tension switchgears, or cables), land acquisition, and management costs must be considered.

Simple Payback Period (SPP) represents the number of years required for the cash flow to equal the total investment. The basic assumption of the SPP method is that the more quickly the cost of an investment can be recovered, the more desirable the investment is. The equation for the SPP is given below:

SPP = Total initial investment / Net cash flow per period    (5)

Degression Analysis of FiT

In Malaysia, the tariff degression per annum set by SEDA is 8%. The FiT rate is therefore different for each year. In this study, the degression of the FiT was estimated by HOMER for two years: for 2014, the FiT is projected at RM 0.804, and for 2015, at RM 0.739. The degressions of the FiT were set as sensitivity values. The analysis using the FiT rates of the following two years, 2014 and 2015, was performed for comparative purposes; the FiT rates of 2014 and 2015 and the degression of the FiT were shown to investors to spark interest in investing in solar PV generation over the next two years. The degression analysis used constant values of capital cost because of the uncertainty in future market prices.

Technical Analysis

The technical aspects of our study are discussed in this section. Technical indicators were analyzed, and solar irradiance, technical performance, energy production, and GHG emissions are discussed. The variation in the monthly average solar radiation on the tilted PV modules indicates that solar radiation is at its minimum during July (4.070 kWh/m2), which reflects a 22.10% error. The maximum actual solar radiation measurement, in March (6.335 kWh/m2), was compared with the maximum in the simulation results, in February (6.134 kWh/m2). The predicted yearly irradiance was 5.27 kWh/m2 and the measured value was 6.66 kWh/m2. Comparison of the solar resource inputs from HOMER and the measured irradiance shows that solar radiation is at its maximum from February to April and at its minimum from May to July. The average yearly solar irradiance error percentage was 20.88%, as determined by the HOMER output. All comparisons of solar radiation are summarized in Table 4 and Fig. 8.
To compare the measured and predicted outputs of the solar plant, the normalized energy yield (kWh/kWp) was calculated. The measured total normalized energy output for the year 2013 was 1358.11 kWh/kWp, whereas the predicted yearly output was 1390.22 kWh/kWp. The estimates of energy yield obtained with the HOMER software were in close agreement with the actual measured results; an uncertainty of only 2.31% was obtained. Comparisons of technical performance are summarized in Table 5. The measured monthly normalized yield was highest for March (135.25 kWh/kWp), and the predicted normalized yield was also at its maximum for March (135.95 kWh/kWp); therefore, an error of 0.518% was obtained. The plant supplied 6.79 GWh of energy during the year 2013, whereas the total annual energy yield predicted by HOMER was 6.95 GWh, which indicates an error of 2.35%. All comparisons of energy production are summarized in Table 6 and Fig. 9. Calculations of the annual reduction in estimated GHG emissions yielded values of 4,393,080 kg/year for CO2, 19,046 kg/year for SO2, and 9,314 kg/year for N2O. Based on this yearly reduction in GHG emissions over the twenty-one-year period, the CO2 reduction target proposed by the Malaysian government for the year 2030, corresponding to 57.17% of 145,000,000 tonnes, will not be achievable by the time this plant reaches 2030.

Economic Analysis

The economic aspects of our study are discussed in this section. Analyzing the economic aspects of power generation is important because it gives investors the necessary information for gauging the success of renewable energy investments in Malaysia. Here, the economic indicators, including the NPV, the CoE, and the SPP, are discussed.

Net Present Value and Cost of Energy

The NPV for this project is RM 26,628,810.00 and its CoE is RM 0.329/kWh, as shown in Fig. 10. A negative NPV is an indicator of the total profit of this project. Based on our analysis, we believe that the project will be profitable and will attract investors to invest in solar PV generation. A negative CoE indicates net income for every kWh of renewable energy produced.

Simple Payback Period

As described previously, the payback period for the solar power generation is 10 years, as indicated in Fig. 11 and 12 and Table 7. The cash inflow is negative for the first nine years while the initial capital cost is recovered. The investment begins to turn a profit from the tenth year through the twenty-fifth year.

Sensitivity Analysis

To explore possible discrepancies in the results caused by variations in key parameters, sensitivity analysis was performed for important parameters such as the FiT degression, the initial investment cost, the project lifetime, and the electricity export cost. The CoE varied between 0.194 and 0.329 RM/kWh with a FiT degression rate of 8% per year. The results of the sensitivity analysis of the NPV varied significantly with the FiT rate (e.g., RM 26,628,810.00, RM 20,958,466.00, and RM 15,693,144.00), as shown in Fig. 13.
Calculations of the annual reduction in estimated GHG emissions yielded values of 4,393,080 kg/year for CO2, 19,046 kg/year for SO2 and 9,314 kg/year for N2O. Based on this yearly reduction in GHG emissions over the twenty-one-year project period, Malaysia's 2030 target of cutting CO2 emissions by 57.17% of 145,000,000 tons will not be achievable from this plant alone by the time it reaches 2030.

Economic Analysis
The economic aspect of our study is discussed in this section. Analyzing the economic aspect of power generation is important because it gives investors the necessary information for gauging the success of renewable energy investments in Malaysia. Here, the economic indicators, including the NPV, CoE and SPP, are discussed.

Net Present Value and Cost of Energy
The NPV for this project is RM 26,628,810.00 and its CoE is RM 0.329/kWh, as shown in Fig. 10. A negative NPV is an indicator of the total profit of this project. Based on our analysis, we believe that the project will be profitable and attract investors to invest in solar PV generation. A negative CoE indicates income from renewable energy production for every kWh.

Simple Payback Period
As described previously, the payback period for solar power generation is 10 years, as indicated in Figs. 11 and 12 and Table 7. The cash inflow is negative for the first nine years while the initial capital cost is recovered. The investment begins to turn a profit from the tenth year through to the twenty-fifth year.

Sensitivity Analysis
To explore possible discrepancies in the results caused by key parameter variations, sensitivity analysis was performed for important parameters, such as FiT degression, initial investment cost, project lifetime and electricity export cost. CoE production varied between 0.194 and 0.329 RM/kWh with a FiT degression rate of 8% per year. Results of the sensitivity analysis of NPV varied significantly with the FiT rate (e.g., RM 26,628,810.00, RM 20,958,466.00 and RM 15,693,144.00), as shown in Fig. 13. The FiT degression rate implies that investor profits will eventually decline. This decrease in profit will surely cause anxiety to prospective investors in solar PV generation in the years following 2013. To acquire a better understanding of solar power plant performance, the performance parameters evaluated for the Perlis solar power plant were compared with the reported performance parameters of solar power plants at various locations (Table 8). The annual average energy yield of the solar plant after twelve months of continuous operation is 1390.22 kWh/kWp. This annual average yield is comparable with the annual average energy yields of solar power plant systems at other locations, which confirms the suitability of solar power generation in Perlis. A brief analysis of such studies is summarized in Table 8. Based on our comparison, we conclude that Malaysia has the best potential for solar PV power generation among the four countries studied (Greece, Ireland, India and Poland).

Conclusion
In this study, the performance and economic effects of a 5 MWp solar PV plant installed at Perlis, Malaysia, were evaluated. Comparison of the energy yield of solar power plant systems in Malaysia with other systems installed at different locations worldwide shows that the energy yield of the solar PV plant in Malaysia (1358.11 kWh/kWp) is higher than those of the other countries. The comparative study confirms the suitability of Malaysia as a solar power plant location. Estimations of energy yield using HOMER software are in close agreement with the actual measured results and showed an error of 2.31%. The FiT degression rate (8%) will reduce income from solar power generation according to our sensitivity analysis. This predicted income reduction may worry investors and discourage them from investing in solar power generation. The lack of investment would result in failure to achieve the objectives of the Malaysian government, particularly in the renewable energy sector.

Fig. 1. National renewable energy goals of the Sustainable Energy Development Authority (SEDA) (Source: Sustainable Energy Development Authority Malaysia)
Fig. 3. SCADA system used for monitoring and recording purposes
Fig. 7. Percentage of costs in the development of the 5 MWp solar power generation
Fig. 10. Net present value analysis and cost of energy (Malaysian Ringgit, RM)
Table 1. FiT rates
Table 3. Detailed cost for 5 MWp solar power generation
Table 4. Summary of solar radiation
Table 8. Performance comparison of PV systems in different countries
2019-04-11T13:16:26.605Z
2015-05-27T00:00:00.000
{ "year": 2015, "sha1": "b14a1c36de46266eb9f0a5d39c443fdd4614b327", "oa_license": "CCBY", "oa_url": "https://thescipub.com/pdf/ajeassp.2015.210.222.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "080162c467ee13d89936831ad6f65e61374af716", "s2fieldsofstudy": [ "Environmental Science", "Engineering", "Economics" ], "extfieldsofstudy": [ "Environmental Science" ] }
2378536
pes2o/s2orc
v3-fos-license
Prostate Cancer: Current Treatment and Prevention Strategies Abstract Prostate cancer is one of the life threatening disorders of male. Although, over the last two decades, a high rate of overdiagnosis, and overtreatment has lowered the incidence rate of prostate cancer, the treatment or prevention strategies are not enough to control the high rate of disease related mortality. Current medical treatment approaches include surgery, radiation therapy, chemotherapy, hormonal therapy, cryosurgery and other methods. These approaches are more or less effective either as monotherapy or in multimodal approach. However, many adverse or side effects exist with these strategies. Researches are ongoing to find out the way or better strategies to eliminate the adverse effects. Dietary modifications may also contribute to decrease prostate cancer risk. Several nutraceuticals against prostate cancer have also been identified. This review article summarizes some of the current treatment, and prevention strategies with the protection of prostate cancer, which may be helpful to control and prevent this highly frequent life threatening disease. Context Prostate cancer is one of the most common cancers in male. However rates of detection of prostate cancers vary widely across the world, with Europe and the United States detecting higher frequency than South and East Asia. In China, the incidence rate is 1.6 cases per 100000, while 119.9 cases per 100000 in the USA (1). Prostate cancer tends to develop after the age of fifty in men, but unfortunately many patients do not have symptoms, they do not take treatment, and eventually die. The reasons behind this may be the slow growing cases of prostate cancer, and since older people may die of other causes such as heart/circulatory disease, pneumonia, other unconnected cancers, or old age. Although two-third cases of prostate cancers are slow growing, there are some cases of aggressive prostate cancers. Recent evidences from the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial (PLCO), and the European Randomized Study of Screening for Prostate Cancer suggested a high rate of overdiagnosis, and overtreatment of prostate cancer, which causes a low mortality rate relative to the incidence rate over the last two decades (2,3). Despite these results of the foregoing trials and the success, the high intervention rate of prostate cancer continues. Unfortunately, prevention may have little effect on disease-related mortality. Primarily, surgery, radiation therapy, and proton beam therapy are the current treatment options of prostate cancer. However, chemotherapy, hormonal therapy, cryosurgery, and high intensity focused ultrasound (HIFU) are also belonging to the treatment strategies, depending on clinical conditions, and outcomes. Also, the choice of treatment depends on the stages of the disease progression, the level of prostate specific anti-gen (PSA), the Gleason score among others. Patient's age, general health conditions, his interest about treatments, and their possible side-effects may also influence choosing among different treatment options. Any of the treatments may have significant side-effects, so the treatment discussions often focus on balancing the goals of therapy with the risks of lifestyle alterations. Dietary management, and other lifestyle modification of patients with prostate cancer have also shown some positive results to control, and prevent prostate cancer. 
Patients with prostate cancer are strongly recommended to work closely with their physicians, and use a combination of the treatment options when managing their prostate cancer (4,5). The optimal management of prostate cancer still remains controversial. This review article summarizes the current treatment and prevention strategies with the protection of prostate cancer, which may be helpful to control and prevent this highly frequent life threatening disease. Surgery Surgery is not regarded as monotherapy in men with prostate cancer; rather it is a part of the multimodality approaches. Surgery is mainly suggested for high-risk locally advanced prostate carcinoma (6). Radical prostatectomy (7) and pelvic lymphadenectomy (PLDN) are mostly applicable surgery types in prostate cancer. Traditionally, RP for high-risk prostate cancer has been discouraged because of concerns regarding the side effects such as high rates of positive surgical margins, risk of lymph node metastasis, and high rates of PSA recurrence. However, surgery has been shown to be more beneficial than watchful waiting for mortality, risk of local progression, and risk of metastasis (8). Montie suggested that initial RP may have a role for treating high risk localized prostate cancer (9). After 8-10 years of following up, Bill-Axelson et al. (10) suggested that RP reduces disease-specific mortality, overall mortality, and the risks of metastasis, and local progression of prostate cancer. According to their study, the absolute reduction in the risk of death after 10 years was small, but the reductions in the risks of metastasis, and local tumor progression were substantial. Patients most likely to benefit from surgery include those with a biopsy Gleason score ≤ 8, the serum PSA level < 20 ng/ml, and the tumor ≤ cT3a (11); these criteria are currently recommended by the European Urology Association (5) for surgery in locally advanced prostate cancer (12). PLND is commonly suggested to perform during RP for high-risk prostate cancer (8) because 15% to 40% of nodes would have positive results (13). To detect the lymph node metastases in prostate cancer, PLND is the most reliable strategy, but its therapeutic benefit in prostate cancer manage-ment is still debatable (14). Zorn et al. (15) described PLND technique during robot-assisted RP in a cohort study to evaluate the nodal yield and perioperative outcomes, and they demonstrated the feasibility and low complication rate of robotic standard-template PLND with lymph node yields comparable to those with open PLND. Radiation Therapy After RP, radiotherapies are considered as the second major therapeutic modalities for localized high-risk prostate cancers. External-beam radiotherapy (EBRT), and brachytherapy are widely used treatment strategies for prostate cancer, which have a significant clinical and technological development in recent decades (16). Lowdose rate brachytherapy (LDRB) involves the permanent insertion of radioactive seeds with the half-life of 60 days under the ultrasound guidance (17). A small randomized trial comparing RP and LDRB for low-risk prostate cancer demonstrated equivalent outcomes, with a 5-year biochemical progression-free survival of 91.7% by brachytherapy versus 91.0% by surgery, however they produce different short-term sequelae of urinary disorders, and erective functions (18). Also these two strategies (brachytherapy and surgery) have similar cost profile for prostate cancer treatment in France (19). 
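Referring back to the surgery-eligibility criteria cited above (biopsy Gleason score ≤ 8, serum PSA < 20 ng/ml, tumor stage ≤ cT3a), the following sketch simply encodes those three thresholds as a didactic illustration. It is not a clinical decision tool, and the ordering of clinical T-stages used here is an assumption made only for the example.

```python
# Simplified illustration of the surgery-candidacy criteria cited in the text:
# biopsy Gleason score <= 8, serum PSA < 20 ng/mL, clinical stage <= cT3a.
# Didactic sketch only; the T-stage ordering below is assumed for illustration.

T_STAGE_ORDER = ["cT1", "cT2a", "cT2b", "cT2c", "cT3a", "cT3b", "cT4"]

def likely_surgery_candidate(gleason_score: int, psa_ng_ml: float, t_stage: str) -> bool:
    """Return True if all three criteria described in the text are met."""
    stage_ok = T_STAGE_ORDER.index(t_stage) <= T_STAGE_ORDER.index("cT3a")
    return gleason_score <= 8 and psa_ng_ml < 20.0 and stage_ok

# Example: Gleason 7, PSA 12 ng/mL, stage cT2b meets all three criteria.
print(likely_surgery_candidate(7, 12.0, "cT2b"))   # True
print(likely_surgery_candidate(9, 12.0, "cT2b"))   # False (Gleason > 8)
```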
In high-dose rate brachytherapy (HDRB), there is a temporary insertion of applicators into the prostate to ensure a feeding of high energy source by different positions of prostate. This ensures a high dose of radiation to prostate gland with the minimized dose to bladder and bowel. HDRB can be used as monotherapy or in combinations with EBRT. Usually, HDRB offers a good treatment strategy for patients with more locally advanced disease (17). EBRT may be effective to every patient without distant metastases, and a life expectancy of at least 5-10 years (20). The advantage of a dose escalation up to the total doses of 76-78 Gy concerning biochemical tumor control has been showed by some randomized trials, which additionally concerns the disease-specific survival for high risk patients. Other randomized trials demonstrated the benefits of an additional adjuvant antiandrogen therapy to EBRT for patients with locally advanced cancers. A radiation dose of at least 74 Gy should be the standard of care for all men with localized prostate cancer who choose treatment with EBRT (16). However, the optimal dose of EBRT has not yet been established for these patients, and an argument can be made for additional dose escalation. For reducing metastases risk and increasing survival, Pinkawa (20) recommended an adjuvant postprostatectomy EBRT of the prostatic fossa with doses in the range of 60-66 Gy. Proton Beam Therapy Proton beam therapy (PBT) is one the types of EBRT which use ionizing radiation. The main advantage of pro-ton beam therapy is its ability to localize the radiation dosage more precisely when compared to other types of radiation therapy. A particle accelerator is used to target the tumor with a beam of protons during the treating process. PBT allows an excellent dose distribution, with the additional benefit of no exit dose. These characteristics make PBT as an excellent choice for the treatment of prostate cancer (21). In a phase III trial by Shipley et al. (22), an increased dose with an external beam of 12.5% to 75.6% CGE (Cobalt Gray Equivalent) by a conformal proton boost compared to a conventional dose, significantly improved local control of cancer in patients with poorly differentiated prostate tumors. With an emphasis on the biochemical freedom from relapse, Slater et al. (23) analyzed the results of conformal proton radiation therapy for localized prostate cancer, and reported that conformal proton radiation therapy yields disease-free survival rates with a minimal rate of morbidity. Over the last ten years, proton beam therapy has increased the survival rate among patients with prostate cancer (24). Cryosurgery Cryosurgery is a treatment strategy where extreme cold is applied to destroy abnormal or diseased tissue, including prostate tumors. In this strategy, the supercooled liquid is sprayed on the diseased tissue by using liquid nitrogen as the cooling solution. For the treatment of localized low-risk prostate cancer, focal cryotherapy has emerged as a less morbid option, and obviously an interesting concept (25) Bahn et al. (26) retrospectively reviewed the efficacy and safety of the long-term experience with targeted cryoablation of prostate cancer (TCAP) in a series of 590 consecutive patients, who experienced TCAP as primary treatment for localized or locally advanced prostate cancer for 7 years at a community hospital. 
The outcome provided the compelling validation of TCAP as an effective treatment strategy for the locally confined, and locally advanced prostatic carcinoma (26). Hubosky et al. (27) critically examined patients at a single institution, who were receiving the third-generation cryosurgical treatment for localized prostate cancer. They reported that treatment success with cryosurgery varies with treatment outcome, morbidity profile, and quality of life parameters definition, but their results were comparable to other series in the regard of short-term cancer control. In that series of patients undergone third-generation cryosurgery, the complication rates were low; quality of life parameters of third-generation cryoablation were similar to second-generation series. Compared to brachytherapy, cryotherapy was found as less irritative, and obstructive voiding symptoms in the early post-treatment period, and it improved the urinary function after treatment (27). In a randomized, noninferiority trial to compare cryoablation with EBRT in patients with prostate cancer, after a long-term follow up the trend favored cryoablation/ cryosurgery (7). Hormonal Therapy Androgens are regarded as the fuel for hungry prostate tumor (28). Testosterone accounts for more than 90% of the systemic androgen function, and dihydrotestosterone (DHT) is its important variant (cytosolic) (29). The androgen receptor (AR) is a ligand-dependent transcription factor which acts in the nucleus of cells (30). The AR binds to testosterone and DHT with similar affinity, although DHT is a more potent androgen for structural and biochemical reasons (31). At normal concentrations, adrenal androgens have little effect on the prostate. Although activation of the AR by androgens is the most direct means of promoting prostatic growth, there are several surrogate pathways in prostate cancer. These pathways permit the AR to be activated, amplified, enhanced or bypassed without androgen stimulation, thus leading to the development of prostate cancer (32). Androgen deprivation therapy (ADT) with either medical or surgical approach is regarded as the initial treatment for metastatic prostate cancer. The beneficial clinical effects of ADT in men with symptomatic metastatic prostate cancer are rapid and dramatic (33). Huggins et al. (34) reported the dramatic clinical effects of suppressing serum testosterone levels in men with advanced prostate cancer. Inhibition of various hormones, receptors, or enzymes along the androgen production pathway is the basis of treatment. ADT is frequently used as the primary treatment for prostate cancer, particularly locally advanced and metastatic disease. It is also used as neoadjuvant, and adjuvant therapy, in combination with surgical or radiation therapy. ADT does not cure prostate cancer when used alone but is often the treatment modality of choice for palliative therapy. Although the emphasis of cancer treatment is typically focused on the cancer cells directly, an emerging concept in the treatment of prostate cancer is inhibition of prostatic stroma in addition to the tumor. The prostatic stroma has been shown to have a supportive role in prostate cancer, and may play a role in driving cells into a tumorigenic or invasive phenotype (29,32). Initially, diethylstilbestrol was used for achieving androgen deprivation, but was replaced by luteinizing hormonereleasing hormone (LHRH) (33). 
Presently, medications used for ADT include estrogens, GnRH (Gonadotropin releasing hormone) agonist, GnRH antagonist, androgen receptor blockers, 5-alpha reductase inhibitors, adrenal androgen inhibitors and some others (29). Although ADT is very widely used, the role of ADT in the management of prostate cancer is highly controversial. Adverse events associated with LHRH agonists include the flare phenomenon, hot flashes, loss of libido, erectile dysfunction, depression, muscle wasting, anemia, and osteoporosis (33). Also, ADT reduced insulin sensitivity and increased body weight, serum cholesterol and triglyceride levels. A significant cardiac risk has also been shown, as neoadjuvant hormonal therapy given with radiation therapy has been shown to increase all-cause mortality in men with a his-tory of coronary artery induced congestive heart failure or myocardial infarction; this effect was not seen in men with up to a single coronary artery disease risk factor (35). Fortunately, bilateral orchiectomy (an out-patient surgical procedure for the removal of the testicals) is associated with fewer side effects than medical ADT. Bilateral orchiectomy does increase the risk of diabetes, similar to GnRH agonists; however it does not appear to have the same increase in myocardial infarction, coronary heart disease, and cardiac death (36, 37). Chemotherapy Generally, chemotherapy is not regarded as the very effective way against prostate cancer. In fact, before midnineties of last century, it was thought that chemotherapy is not beneficial for prostate cancer. However, after that time, the use of chemotherapy in patients with hormone refractory prostate cancer (HRPC) has shown significant improvements in pain and quality of life, as well as decreases in PSA level (38). The common chemotherapeutic drugs used as the treatments of advanced prostate cancer include mitoxantrone, doxorubicin, vinblastine, paclitaxel, docetaxel, and some others. Mitoxantrone is an anthracenedione antineoplastic agent. Mitoxantrone plus prednisone (a pro drug) reduce pain and improve the quality of life in patients with advanced HRPC, but do not improve the survival rate. For metastatic HRPC, the combination of mitoxantrone, and prednisone is now approved as a second-line treatment. However this combination was regarded as the first line of treatment previously until the recent development of treatment strategy with the combination of docetaxel and prednisone, which has been shown to improve survival and disease-free period (39). A recent study confirmed that the survival rate of men with metastatic HRPC is significantly higher after the treatment with docetaxel and prednisone than that with mitoxantrone and prednisone (40). Docetaxelis is a clinically well-established antimitotic chemotherapeutic medication. This drug interferes with cell cycle by binding with the microtubules. It has also been found to influence the phosphorylation of oncoprotein bcl-2, blocks the apoptosis (41). The monotherapy with anthracyclines, doxorubicin or epirubicin, or their combination with other agents, have been used extensively in the treatment of HRPC, but the outcomes were controversial (42). Dietary Strategies Like many other disorders, the interactions between individual genetic susceptibility, and the life style background, including the diet, are responsible for cancer causation. 
Dietary modification is an important way to prevent cancer, because some dietary factors may contribute to a decrease in risk while others could cause an increase. Avoiding high fat and cholesterol may help to control or prevent prostate cancer, because dietary fat and cholesterol play an important role in the development of prostate cancer (43). Shirai et al. reported ω-6 polyunsaturated fatty acids to exert promotional effects in prostate carcinogenesis, and ω-3 polyunsaturated fatty acid-rich oils to suppress tumorigenesis (44). Freedland et al. reported that no-carbohydrate ketogenic diet could significantly reduce prostate cancer growth, and prolonged survival in xenograft model mice injected with LAPC-4 cells. This activity was associated with favorable changes in serum insulin and insulin-like growth factors (IGF) axis hormones relative to low-fat or Western diet (45). In laboratory studies, nutraceutical compounds most commonly show antioxidant properties combined with other antineoplastic actions. Because oxidative stress with androgen exposure and age-factor increase prostate cancer risk, dietary materials with antioxidants should be effective against prostate cancer (46)(47)(48). Reports from several studies reviewed by Shirai et al. (44), have suggested that isoflavones, carotenoids, and in particular lycopene, could be prostate cancer-preventive agents. However Peters et al. (49) suggested that lycopene is not effective for prostate cancer prevention. Dietary intake of selenium, which is present in a wide range of foods such as fish, meat, poultry, eggs, dairy products, grains, and some others, has been suggested to have a protective effect against prostate cancer (50). A Meta-analysis performed by Brinkman et al. suggested that men with low selenium levels are at increased risk of prostate cancer. Redman et al. (51) reported the inhibitory effect of selenomethionine on DU-145 prostate cancer cells through inducing apoptosis. However the effect of selenium on human trial is controversial. Meyer et al. (52) suggested that nutritional doses of antioxidant vitamins (like vitamin E), and minerals (like selenium) may help to the chemoprevention of prostate cancer. But, in clinical trials, vitamin E and selenium were not so effective. The Selenium and Vitamin E Cancer Prevention Trial (SELECT) was designed on 35,533 healthy men from American and African-American origin by Dunn et al. (53), and they observed that neither selenium nor vitamin E, alone or together, prevented prostate cancer in this heterogeneous population. However there are evidences that vitamin D improves the survival of patients with prostate cancer, and vitamin D appeared to be important in reducing the risk of prostate cancer over many years (54). A high vitamin B-6 intake has also been suggested to improve prostate cancer survival among men with a diagnosis of localized-stage disease (55). The American Dietetic Association and Dieticians of Canada reported a decreased incidence of prostate cancer for the vegetarians (56). Discussion As prostate cancer is one of the life threatening and most frequent case of disorders, proper treatment and other control strategies are of specific goals to many biomedical researchers. An integrated treatment strategy, which combines the local and systemic therapies, can be beneficial in the management of prostate cancer. However, the choice of treatment strategy is dependent on many factors, like patient preference, and quality of life aspects. 
It is expected that within a near future, the treatment approaches like surgery, radiation therapy, hormonal, and chemotherapy would be much more developed without minimal side effects. And most importantly, proper dietary management may keep away a person from prostate cancer risk.
2016-05-12T22:15:10.714Z
2013-04-01T00:00:00.000
{ "year": 2013, "sha1": "973bc53011aa27a1e2c5d79c144dd851eacdc592", "oa_license": "CCBY", "oa_url": "http://cdn.neoscriber.org/cdn/serve/313ea/973bc53011aa27a1e2c5d79c144dd851eacdc592/15862-pdf.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "973bc53011aa27a1e2c5d79c144dd851eacdc592", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
21435253
pes2o/s2orc
v3-fos-license
Severe acute caffeine poisoning due to intradermal injections : mesotherapy hazard Introduction. Caffeine is indicated in the treatment of migraine headaches, as well as neonatal apnea and bradycardia syndrome. In mild poisoning, the most prevalent symptoms are nausea, vomiting, diarrhea, tremor, anxiety and headache. In more severe cases, symptoms consist of heart rythym abnormalities, myocardial infarction and seizures. Due to its common lipolytic effect, caffeine is used in mesotherapy, usually in combination with drugs of similar effect. We presented a patient with acute iatrogenic caffeine poisoning. Case report. A 51-year-old woman, with preexisting hypertension and hypertensive cardiomyopathy was subjected to cosmetic treatment in order to remove fat by intradermal caffeine injections. During the treatment the patient felt sickness, an urge to vomit, and a pronounced deterioration of general condition. Upon examination, the patient exhibited somnolence, hypotension and nonsustained ventricular tachycardia, which was sufficient enough evidence for further hospitalization. On admission to the intensive care unit the patient was anxious with increased heart rate, normotensive, with cold, damp skin, and visible traces of injection sites with surrounding hematomas on the anterior abdominal wall. Paroxysmal supraventricular tachycardia (PSVT) on electrocardiographic monitoring was found. The laboratory analysis determined a lowered potassium level of 2.1 mmol/L (normal range 3,5 – 5.2 mmol/L), and a toxicological analysis (liquid chromatography with ultraviolet detection) proved a toxic concentration of caffeine in plasma – 85.03 mg/L (toxic concentration over 25 mg/L). On application of intensive therapy, antiarrhythmics, and substitution of potassium, as well as both symptomatic and supportive therapy, there was a significant recovery. The patient was discharged without any sequele within four days. Conclusion. A presented rare iatrogenic acute caffeine poisoning occured due to massive absorption of caffeine from the subcutaneous adipose tissue into the circulation when injected directly into the tiny blood vessels, as evidenced by hematoma formation. Poisoning manifestations were registered in gastrointestinal, CNS (anxiety, somnolence) and cardiovascular (hypotension, ventricular tachycardia and nonsustained PSVT) system. 
In this era of mesotherapeutic treatment promotion, one should keep in mind toxic prevention, with application being carried out exclusively in a specialized institution. Key words: caffeine; poisoning; cosmetic techniques; risk assessment.

Introduction
The biochemical structure of caffeine is 1,3,7-trimethylxanthine. This compound belongs to the same class as theophylline, with the chemical structure 1,3-dimethylxanthine, and theobromine, 3,7-dimethylxanthine. Being an ingredient that is found in coffee, tea, cocoa and various drinks, caffeine is used routinely. The therapeutic use of caffeine in adults is as an adjuvant in combined analgesics for the treatment of migraine headaches and, in children, for the treatment of neonatal apnea and bradycardia syndrome. Caffeine, theophylline and theobromine belong to the group of methylxanthines, which cause the release of endogenous catecholamines, leading to the stimulation of adrenergic receptors. They are structural analogues of adenosine and pharmacologically function as adenosine antagonists. In higher doses, methylxanthines inhibit phosphodiesterase, the enzyme responsible for degradation of intracellular cyclic adenosine monophosphate (cAMP). The increase in cAMP leads to the clinical effects of adrenergic stimulation, muscle relaxation, stimulation of the myocardium, peripheral vasodilatation, stimulation of the respiratory center and excitation of the central nervous system (CNS). Caffeine is bioavailable after oral, intravenous, subcutaneous, intramuscular and rectal application 1,2. Caffeine metabolism occurs via hepatic cytochrome P450 oxidase, the main processes being demethylation and hydroxylation, with the metabolic by-products theobromine (3,7-dimethylxanthine) and theophylline (1,3-dimethylxanthine). For this reason, in patients with caffeine poisoning, serum concentrations of theophylline must be determined 1,3. Methylxanthines have a positive chronotropic and inotropic effect on the myocardium, leading to supraventricular tachycardia, atrial fibrillation, atrial flutter, multifocal atrial tachycardia, ventricular tachycardia and ventricular fibrillation. Electrolyte imbalances may be a factor in enhancing the development of arrhythmias. Caffeine and theophylline stimulate the respiratory center and increase the respiratory rate (frequency of breathing), and are therefore used to treat sleep neonatal apnea
syndromes.Effects on the CNS are manifested as headache, anxiety, agitation, insomnia, tremor, irritability, hallucinations and seizures.The effects exhibited on the musculoskeletal system result in an increase of intracellular calcium, muscle excitation, tremors, fasciculations and rhabdomyolysis 1 .The most common and mild clinical effects of caffeine toxicity are sinus tachycardia, hypertension, nausea, vomiting, anxiety, CNS agitation and palpitations.Severe clinical effects, fortunately less common, are seizure, dysrhythmias, myocardial infarction, hypertensive crisis, hyperthermia and delirium 3 . Treatment of patients with severe caffeine intoxication includes admission to the intensive care unit, electrocardiographic (ECG) monitoring, intensive therapy with isotonic solutions, as well as other forms of symptomatic and supportive therapy.In methylxantine severe poisoning, charcoal and hemodialysis are used in order to counteract caffeine's resistance.Indications for hemoperfusion through activated charcoal and hemodialysis are: serum levels of caffeine which are greater than 90 mg/L, severe poisoning with convulsions, hypotension resistant to parenteral infusion therapy, and heart rhythm disorders 1,4 . Mesotherapy was discovered in Europe as a medical and cosmetic method for intradermal injection of a mixture of specific substances.Although traditionally used in the treatment of pain, it has recently been used for cosmetic purposes, especially in the treatment of cellulitis, as well as in the local reduction of fatty deposits 5 .In these procedures, the process of inhibition of phosphodiesterase contributes to its overall lipolytic effect. Case report A 51-year old woman, underwent aesthetic treatment of excess adipose tissue through lipolysis.The treatment took about sixty minutes and was performed in a beauty salon, under the control of a plastic surgeon specialist.It consisted of 20 intradermal injections of caffeine solution.The patient felt discomfort after the first two applications, and soon felt ill with anxiety, nausea and the urge to vomit.Because of a sudden disturbance of general condition, the patient was further examined in the Emergency Center, Clinical Center of Serbia, ascertaining suffering from somnolence and hypotension.Electrocardiographic (ECG) examination registered sinus rhythm and occasional nonsustained ventricular tachycardia.Due to suspicion of underlying systemic toxic effects during the treatment the patient was admitted to the Poison Control Center, Military Medical Academy. 
On admission the patient complained of nausea, vomiting, and chest palpitations. The patient was anxious, afebrile, hyperventilating, with cold/moist skin, and mydriatic pupils. The auscultatory findings in the lungs were normal. The patient's heart rate was 150 beats per min, heart tones clear without additional sounds. Blood pressure on admission was 130/80 mmHg. Injection marks on the anterior abdominal wall were present with surrounding hematomas (Figure 1). The personal history of the patient indicated treatment of pre-existing hypertension with nifedipine. ECG recorded paroxysmal supraventricular tachycardia (PSVT) with a frequency of 146/min, changes in repolarization, and ST segment depression of 6 mm in the left-sided leads (V4-V6), D1 and AVL (Figure 2). The patient was admitted to the intensive care unit with continuous ECG monitoring and parenteral therapy. The first 6 h of parenteral therapy included 5 mg of verapamil, 20 mg of diazepam, 10 mg of metoclopramide, infusion therapy with 3,000 mL of isotonic solution, and substitution with 100 mEq of potassium chloride. Diuresis following this therapy amounted to 1,800 mL. With this therapy the heart rate slowed to 90/min, confirmed by the ECG finding (Figure 3), with a repeated attack of PSVT at a frequency of 170/min, which was the reason for inclusion of beta blockers in the standard therapy. On the second day of hospitalization the patient complained of nausea, warranting removal of metoclopramide. From that day until hospital discharge, the patient's ECG rhythm was at a normal frequency. The serum potassium level was 2.3 mmol/L. Blood caffeine levels of 57.73 mg/L and a theophylline concentration of 6.59 mg/L were obtained. On the third day, the patient complained of a headache, as well as pains in the neck area. Hypertension was recorded at 170/90 mmHg. Laboratory analysis determined hypokalemia (serum potassium level 2.8 mmol/L). The concentration of caffeine in blood was 27.43 mg/L, which was still within the range of concentrations considered to be toxic. The concentration of theophylline was 7.43 mg/L. Treatment was started with an angiotensin converting enzyme (ACE) inhibitor, administered both parenterally and orally, resulting in correction of the hypertension, as well as the hypokalemia. On the fourth day of hospitalization the patient was in good condition and discharged. The ECG at discharge showed a sinus rhythm at a frequency of 67/min (Figure 4), with signs of hypertrophy and left ventricular overload. Normal values for biochemical parameters were recorded. The concentration of caffeine in blood was 8.39 mg/L and that of theophylline 1.86 mg/L, which was at the therapeutic level. On echocardiographic examination, there was an enlarged left atrium of 4.3 cm, normal left ventricular dimensions, with concentric hypertrophy and a wall thickness of 13 mm. There was no failure of segmental contractility; the ejection fraction was 65%. Diastolic function was altered by delayed relaxation.
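The serial plasma concentrations reported in this case (85.03 mg/L at admission, 57.73 mg/L on the second day, 27.43 mg/L on the third day, and 8.39 mg/L at discharge) allow a rough estimate of the apparent elimination half-life. The sketch below is illustrative only: it assumes first-order elimination and approximately 24-hour intervals between measurements, neither of which is stated in the report.

```python
import math

# Rough, illustrative estimate of the apparent caffeine elimination half-life from
# the serial plasma concentrations reported in the case. Assumptions (not stated in
# the report): first-order elimination and ~24 h between successive measurements.

concentrations_mg_per_l = [85.03, 57.73, 27.43, 8.39]  # admission, day 2, day 3, day 4
interval_h = 24.0                                       # assumed sampling interval

for c0, c1 in zip(concentrations_mg_per_l, concentrations_mg_per_l[1:]):
    k = math.log(c0 / c1) / interval_h       # first-order elimination rate constant (1/h)
    half_life_h = math.log(2) / k            # t1/2 = ln(2) / k
    print(f"{c0:6.2f} -> {c1:6.2f} mg/L : apparent t1/2 ~ {half_life_h:.1f} h")
```

Under these assumptions the apparent half-life shortens over successive intervals, which would be consistent with the ongoing absorption from the injection sites discussed below, but the numbers are only indicative.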
Intravenous caffeine in neonates has been presented with a number of severe toxic effects such as hypertension, tachycardia, tachypnea, tremor, opisthotonus, tonic-clonic convulsions, cardiac failure, pulmonary edema and metabolic acidosis with confirmation of toxic concentrations of caffeine.The symptoms gradually retreated after 7 days, and the concentration of caffeine was then between 60-70 mg/L 6,7 .Ingestion of large amounts of caffeine can cause significant agitation, severe hypotension, tachycardia, ventricular arrhythmia, cardiac arrest, myocardial infarction, hypokalemia, rhabdomyolysis, seizures and acute renal impairment 8,9 .Waring et al. 10 showed clinical data of 38 patients with caffeine ingested at an average dose of 1,040 mg (600 to 1,500 mg), which is equivalent to the amount found in about 10 cups of coffee.Out of them, 28 (73.7%)patients attempted suicide by deliberate self-poisoning, 8 (21.1%) patients ingested caffeine in order to enjoy (energy drinks), and 2 (5.3%) patients did it for weight loss. We reported acute poisoning caused by intradermal caffeine intake by intentional injections for aesthetic purpose. So, that differs from accidental poisoning with caffeine, usual entry of the toxin. So far, it has not been proven that consuming caffeine from coffee increases the risk of cardiovascular disease.Also, there is no clear evidence that drinking moderate amounts of coffee, 3 to 4 cups per day (about 300 to 400 mg per day), poses a risk to health.However, certain groups of people, including people with hypertension, children and adolescents may be more sensitive to caffeine in terms of side effects 11 .Before the treatment, the presented patient had already had hypertension, but this higher risk to adverse effects of caffeine was not taken into account. The contraindications to mesotherapy are 12,13 : pregnant and lactating females, insulin dependent diabetes mellitus, history of bleeding disorders, history of strokes, history of thromboembolic phenomena, patients on medication for cardiac arrythmias, aspirin, warfarin, heparin, history of recent cancer, severe heart disease, renal disease, any severe chronic systemic disease. The main clinical effects at both, caffeine therapeutic doses as well as in case of poisonings with proven toxic concentrations, are from adenosine antagonism, beta adrenergic receptor stimulation and phosphodiesterase inhibition.On admission to the hospital, the presented patient's ECG showed repeated attacks of PSVT, with evidence of hypertensive cardiomyopathy and hypoklemia. 
Severe caffeine poisoning is relatively rare and accompanied by unwanted hemodynamic complications, including a high mortality rate.Among complications are the most severe forms of cardiac abnormalities: sinus tachycardia, ventricular tachycardia and ventricular fibrillation, generalized convulsions, multiple organ failure (MOF) and cardiac arrest 1,8,9,14 .The presented patient was initially observed to have hypotension, nonsustained ventricular tachycardia, and PSVT, fortunately with a favorable outcome.Sinus tachycardia is a common sign of poisoning, and is most likely benign in people with no previous cardiac disease.However, sinus tachycardia in methylxanthine poisoning, can progress to severe arrhythmias.Atrial fibrillation, atrial flutter, multifocal atrial tachycardia, ventricular tachycardia and ventricular fibrillation may result from methylxanthine poisoning 1 .Caffeine stimulates the respiratory center in the CNS, increasing the frequency of breathing, causing hyperventilation, respiratory alkalosis, respiratory failure, respiratory arrest and acute lung injury (ALI).On admission to the intensive care unit, arterial blood gas values described in the presented patient indicated respiratory alkalosis and hypocapnea, which was correlated with increased respiration and tachypnea. In the article of Scottish authors 10 , 24 (63.2%)patients showed only gastrointestinal symptoms, nausea and vomiting.In the first 6 hours of the treatment, the presented patient showed gastrointestinal symptoms, nausea and vomiting, which responded favorably to the use of metoclopramide. It is known that caffeine causes psychiatric disorders under certain circumstances.Caffeine, which is widely used especially in younger population, is also found in many energy drinks, and can cause marked anxiety in otherwise healthy individuals.This is particularly true in sensitive persons with existing anxiety disorders.Caffeine may be associated with symptoms of depression, sleep disorders, and worsening of psychotic disorders in people with schizophrenia 15 .The presented patient was anxious upon admission, and later complained of a headache.According to the Scottish Poison Centre (for the period 2000 -2008) dizziness, headache, tremor and agitation were much less common symptoms of caffeine poisoning in comparison to those with gastrointestinal symptoms 10 . 
Hypokalemia is a common manifestation of acute poisoning with methylxanthines resulting in beta adrenergic agonism and stimulation of Na + /K + ATP-ase, which leads to a shift of potassium from the extracellular to intracellular space.This can be accelerated by vomiting and loss of potassium through the kidneys.In patients with theophylline intoxication, hyperkalemia occurs early and is independent of the initial laboratory analysis with vomiting 16 .The presented patient had a potassium concentration of 2.1 mmol/L, which was interpreted as a loss of potassium due to vomiting.After the first analysis of toxic concentrations of caffeine, hyperkalemia was interpreted as caffeine toxic effect.We performed a parenteral and oral potassium replacement, which exhibited parallel falls in toxic concentrations of caffeine, but not to completely normal levels.The concentration of caffeine in blood of the patient before mezotherapy remains unknown, but there are recommendations that coffee or caffeine-containing beverages must not be used for at least 12 h before the treatment 17 .In the presented patient, the concentration of caffeine was 85 mg/L immediately after mezotherapy, and 57.73 mg/L on the second day. The immediate cause of death in severe caffeine poisoning is ventricular fibrillation, as has been shown by an experimental work 18 .Generally speaking, a concentration of caffeine in the blood of more than 100 mg/L is considered lethal 19,20 . A case of sudden death has been documented involving a 25-year-old woman previously diagnosed with mitral valve prolapse.Cardiac arrest occurred immediately after drinking energy beverage.At autopsy screening, the presence of 19 mg/L caffeine was indicated in the aortic blood.The caffeine concentration was 10 g/L 21 upon further analysis.Swedish forensic experts during a year period witnessed four fatalities, demonstrating caffeine concentration of 80 to 100 mg/L 22 in post-mortem toxicological analysis. Fatal caffeine overdose in adults is rare and involves more than 5 g of a drug containing caffeine to cause death.American toxicologists 23 from New Mexico, during a one year follow-up documented accidental caffeine poisoning as a cause of death in two patients: in a 39-year-old woman with the history of intravenous drug abuse with the caffeine concentration 192 mg/L in femoral blood and in a 29-yearold man with the disease history of obesity and diabetes mellitus, with caffeine concentration of 567 mg/L in femoral blood. In both patients, the cause of death was accidental caffeine poisoning.At the begining of the 80's, articles were published about the cosmetic application of phosphodiesterase inhibitors and cAMP in the treatment of lipodystrophy 5 .In animal models, after subcutaneous application, the efficacy of methylxanthines themselves was tested, usually incorporating caffeine and theophylline, methylxanthines, or in combination with other substances that have a lipolytic effect.Their effect on the rate of absorption was monitored in accordance with artificially induced granulomas in adipose tissue.A better effect was achieved by combining preparations 5,24 . 
Adverse effects of cosmetic treatments for cellulitis occur with intradermal injection of lipolytic substances, and can be presented as pain and erythema at the puncture site, vagal reactions, injury to nerves and blood vessels, skin necrosis and hematoma formation.Hematomas, which should not follow this type of treatment, are usually the most common side effect, and are a consequence of the effects of the applied substances interfering with the process of coagulation.According to the previously published data there are numerous local side effects associated with therapy 5 .Abdominal wall hematomas in the presented patient are shown as local side effects caused by substances in deeply applied injection (Figure 3).After application there was a massive absorption into the circulation as proved by the elevated concentration of caffeine 85.03 mg/L in the blood. Few papers describe systemic toxicity of substances applied during and after mesotherapy.Brazilian authors 30 describe the first case of systemic toxicity in a young woman presenting with thyrotoxicosis caused by mesotherapy with triiodthyroacetate acid.Alster and Tanzi 31 reported that in 2003, mesotherapy was banned by the Brazilian National Agency of Health due to its unwanted side effects.There are also systemic complications."Systemic complications are allergic reactions, vagal syndromes, lipothymia, infections (HIV, hepatitis, etc) and liver toxicity with demyelination of nerves due to large doses of phosphatidylcholine" 24,25 . In addition to supportive and symptomatic treatment for severe poisoning and systemic toxic effects, hemoperfusion and hemodialysis are strongly recommended 14,32 .Fortunately, the presented female patient responded favorably to the treatment and did not require the action of extracorporeal detoxification.Cardiopulmonary resuscitation was needed in the worst case scenario.There were documented cases of survival in patients with cardiotoxicity induced by caffeine in which cardiopulmonary support was applied percutaneously 9 . Conclusion The severe systemic toxic effects of caffeine applied intradermally in mesotherapy were seen in the presented patient.The cause of massive caffeine absorption from the subcutaneous tissue into the systemic circulation of the patient was partly due to the tiny blood vessels in the skin, as indicated by hematomas in the abdominal wall.The clinical picture showed mild gastrointestinal symptoms (nausea, vomiting), CNS disorders (somnolence, anxiety), and cardiovascular disturbances (hypotension, ventricular tachycardia and nonsustained PSVT).In this era of increasing popularity of mesotherapeutic aesthetic treatment, one should keep in mind the possibility of a significant absorption of the applied substances into circulation and their potential systemic side effects.Proper methods of intervention need to be applied in specialized institutions for cosmetic surgery that are staffed and equipped to respond in case of complications, as well as in poisonings. Fig. 1 - Fig.1 -Injection marks on the anterior abdominal wall with surrounding hematomas. Fig. 3 -Fig. 4 - Fig. 3 -The ECG after beginning the treatment showed heart rate slowing to 90/min with changes in repolarization (the first day)
2017-06-15T22:09:39.250Z
2012-01-01T00:00:00.000
{ "year": 2012, "sha1": "ccac5aaf38500a391fab30068e2d362bdebee489", "oa_license": "CCBYSA", "oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=0042-84501208707P", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ccac5aaf38500a391fab30068e2d362bdebee489", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
251110418
pes2o/s2orc
v3-fos-license
Mobile Health Applications for Depression in China: A Systematic Review Mobile health (mHealth) applications (apps) have the potential to increase access to mental health care. In China, there is growing interest in mHealth apps for depression. Our objective was to systematically review research on mHealth for depression in China to identify benefits and challenges. A systematic literature search was conducted using Chinese and English databases in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Randomized and nonrandomized clinical studies on mHealth apps and depression in China were included. Study quality was assessed using the Cochrane Risk of Bias tool. Seven studies met the inclusion criteria with three randomized trials, two quasi-randomized trials, one clinical trial with an uncertain grouping method, and one study with a single-group design. All studies used the WeChat platform and included activities such as psychoeducation, self-management, supervised group chats, and/or remote contact with a healthcare team, in comparison to usual care. All studies reported significant and large benefits for outcomes, but the risk of bias was high. There are few rigorous evaluations of mHealth apps for depression in China, with all included studies involving WeChat programs and most using WeChat to extend nursing discharge care for inpatients with depression. While these studies showed significant improvement in health outcomes as compared to usual care, the results remain inconclusive because of the high risk of bias. mHealth holds promise for increasing access to mental health care in China, but issues such as efficacy, scalability, patient and clinician acceptability, and data privacy must be addressed. Introduction And Background Major depressive disorder (MDD) is a common mental disorder affecting more than 280 million people worldwide, representing about 4% of the population [1]. In China, over 56 million people are estimated to suffer from MDD [2][3]. Depression was estimated to result in the loss of 5.8 million disability-adjusted life years (DALYs), an increase of 36.5% from 1990 to 2017, ranking it the 10th leading medical cause of DALYs in China [2]. Many efforts have been made to improve depression treatment in the past decades. As computer and Internet technology matured, physicians started to apply electronic health (eHealth) to mental health services, and eHealth practices are proving to be successful and cost-effective [4]. Recently, mobile phone use has dramatically increased and smartphones have become ubiquitous. In 2020, there were over 6 billion smartphone users globally [5]. Mobile health (mHealth) is defined as the practice of medicine supported by applications, or apps, for mobile devices such as smartphones and tablets [6]. mHealth differs from the broader term, eHealth (which includes telemedicine and internet-based resources), by incorporating mobile device capabilities such as multimedia technologies and on-device notifications. Hence, mHealth is more than simply a web browser or video conferencing program that is launched from a mobile device. mHealth services and apps for mental illness have shown evidence of effectiveness in many countries [7][8][9]. In China, mobile use has also skyrocketed. In 2021, China had over 1.03 billion internet users and 1.64 billion mobile phone subscriptions [10]; almost all internet users in China now go online using mobile devices. 
A recent Chinese survey on e-Mental Health found that 83% (n=133/184) of patients and family members, 92% (n=208/225) of mental health professionals, and 92% (n=162/177) of respondents from the general population reported high or relatively frequent use of mobile devices [11]. WeChat, the widely popular social media platform with over 900 million users [12], is embedded into the lives of most Chinese people, from booking appointments to paying for services and products. However, the application of mHealth interventions in China, especially in mental health service delivery, is nascent but seeing rapid growth. In 2016, a systematic review found 234 Chinese mHealth apps, but only three apps were used by psychiatrists [13]. By 2020, there were 843 apps about psychology, and 121,560 apps on psychological counseling available on online app stores [11]. Thus, there are considerable opportunities for mHealth in China. Existing systematic reviews on mHealth focus on Western countries, Asian regions other than China, or on patients with other diseases [7,14]. Very few systematic reviews are relevant to mHealth for mental health in China, and none have focused specifically on depression. To address this knowledge gap, we conducted a systematic review of mHealth interventions for depression in China, with an aim to identify the benefits of mHealth and opportunities and challenges for future research.

Search Method
We used methods consistent with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and checklist [15], but the protocol was not registered.

Selection Criteria
We included randomized or non-randomized studies based on the criteria in Table 1. Study selection was conducted independently by two reviewers; disagreements were resolved by consensus with a third reviewer.

Table 1. Inclusion and exclusion criteria
Inclusion: (1) Involved an mHealth app or intervention; (2) Targeted major depressive disorder or depressive symptoms; (3) Conducted in mainland China or had data on users in China.
Exclusion: (1) Involved other illnesses that were primary (e.g., epilepsy with depressive symptoms); (2) Only used the phone feature (e.g., telephone-delivered psychotherapy); (3) Included only populations in Hong Kong or Chinese immigrants in other countries (because of differences in health care systems); (4) Involved only online applications (e.g., internet-delivered psychotherapy); even though these can be viewed on a smartphone browser, our focus was on mHealth apps.

Assessment of Risk of Bias
We summarized the risk of bias of individual randomized studies using the Cochrane Risk of Bias tool [16]. The risk of bias domains assessed were random sequence generation, allocation concealment, blinding of participants and study personnel, blinding of outcome assessment, selective reporting, and other bias. The assessment was conducted independently by two assessors with disagreements resolved by consensus with a third assessor.

Results
Screening and Selection
Figure 1 shows the PRISMA flow diagram. After screening for inclusion/exclusion criteria and removing duplicates, we found 12 papers (9 written in Chinese and 3 in English) that appeared suitable for inclusion. Upon detailed review, we excluded one study of a telephone-only intervention and four papers on internet-only applications, such as online discussion groups and online data reporting, leaving seven eligible papers addressing mHealth for depression in China. Table 2 summarizes the key features of the included studies.
All seven included studies used the WeChat platform and were published in Chinese journals. There were three randomized controlled trials (RCTs), two quasi-randomized trials, one clinical trial with an uncertain grouping method, and one study with a singlegroup design. The period of research spanned from 2014 to 2019 and research sites covered the coastal and inland provinces of China, including the cities of Qingdao (Shandong province), Zhenjiang (Jiangsu), Guangzhou (Guangdong), and Luohe (Henan). The included studies were conducted primarily in psychiatric hospitals (which are named mental health centers in China) and general hospitals. For diagnostic criteria, four studies used the International Classification of Diseases (ICD-10), two studies used the Chinese Classification and Diagnostic Criteria for Mental Disorders, 3rd edition (CCMD-3), which has diagnostic categories and criteria similar to the ICD-10, and one study did not report the diagnostic criteria. The participants in the seven studies were primarily hospitalized patients who had been recently discharged; outpatients were included in only one study. The sample size of individual studies ranged from 60 to 212. Outcome assessments included various clinician-administered symptom scales and patient-rated scales. Because of the considerable heterogeneity in patient populations, study methods, and outcomes, quantitative synthesis was not possible. Instead, the studies are summarized in a narrative review. Narrative Review of Studies Ai and colleagues conducted a quasi-randomized study on newly discharged patients with depression by ICD-10 criteria [17]. They assigned patients by order of discharge to an intervention group (n=79) that received support via an intensive WeChat-based program or to a comparison condition of usual care (n=77). They did not indicate if any blinding took place. Two components were included in the WeChat program: regular information sessions three times a week focused on psychoeducation and medication guidance, and group chats with two nurses and an unspecified number of other patients. The group chats included practical problem solving of daily challenges, medication adherence, social skills and vocational coaching, and family environment optimization. Additionally, those in the WeChat group were taught distress tolerance and mindfulness skills, encouraged to practice healthy diet and exercise habits, and instructed to avoid substances including caffeine and alcohol. Outcomes included the Self-rating Depression Scale (SDS) and Self-rating Anxiety Scale (SAS), both validated for and normed in China, and medication adherence. The SAS and SDS scores were similar between groups at the time of discharge, but by the three-month follow-up and continuing through to one year, the WeChat group had significantly lower SDS and SAS scores compared to the care as usual group (p<0.01). Both groups had comparable and high medication adherence rates at baseline, but by three months, the WeChat group outperformed the usual care group significantly, with similar rates persisting to the one-year follow-up. At one year, five of 79 (11%) were nonadherent with medications in the WeChat group compared to 35 of 77 (45%) in the comparison group (p<0.01). Huang et al. conducted an RCT on patients with depression and anxiety who were recently discharged from the hospital. 
Patients were randomized to the WeChat intervention group (n=50), which was supervised by a multidisciplinary team of physicians, nurses, and psychological counselors, or to the comparison group (n=50) that received routine nursing care. The intensive intervention was available via WeChat and telephone and included psychoeducation, medication adherence guidance, family involvement and follow-up, nursing guidance, and twice-weekly supervised group chats. The SDS and SAS were used for inclusion and as outcomes. Additional outcomes included the World Health Organization Quality of Life scale (WHO-QOL) and a study-specific medication adherence scale. After six months, SDS and SAS scores in the WeChat group were significantly lower compared to the usual care group (p<0.05). The medication adherence (p<0.001) and quality of life (p<0.05) of patients were also significantly better in the WeChat group. There were no significant differences between groups in relapse or rehospitalization rates during the six-month follow-up. Of note, statistical analysis was conducted on completers only (n=74), and there were more dropouts in the WeChat group than in the comparison group (10 vs 6, respectively). Wang and colleagues conducted a quasi-randomized study in which 80 outpatients with major depressive disorder (MDD) and a 17-item Hamilton Depression Rating Scale (HAM-D) score >17 were assigned into two groups according to visit order [20]. HAM-D scores at baseline were not reported. The intervention group received a WeChat program with psychoeducation (medication management, identifying and managing negative cognitions, stress management) delivered one to two times a week via text/video/graphics, and a daily chat group supervised by nurses and psychologists. The comparison group received routine health education. After 12 weeks, the WeChat intervention group had superior rates of HAM-D recovery/improvement (90% vs 68%, p<0.05) and medication adherence (88% vs 60%, p<0.01) compared to the usual treatment group. In addition, the quality of life was significantly better in the WeChat group compared to the usual treatment. Another clinical trial involved two groups with 30 participants per group, but the group allocation methods were unclear [21]. Furthermore, the diagnostic criteria, inclusion criteria, and outcome scores at baseline were not reported. The intervention group received a WeChat program with psychoeducation and self-management and a group chat for at least 0.5 hours per week, supervised by a multidisciplinary team. The comparison group received routine discharge nursing. Outcomes were assessed with the 17-item HAM-D and the Morisky Medication Adherence Scale (MMAS). After six months, the authors reported that the WeChat group had significantly lower HAM-D scores (t-test, p<0.05) than the comparison group; however, we could not replicate that result using the reported means and standard deviations (7.21 ± 3.73 versus 7.63 ± 4.12, respectively, p=0.68); a recomputation from these summary statistics is sketched below. The WeChat group also had significantly better medication adherence (higher MMAS scores, p<0.05) and higher overall nursing satisfaction than the comparison routine discharge nursing group. Xie et al. conducted two studies: the first was an RCT [22] that randomized 212 patients with MDD, aged 18-55, who were discharged from the hospital and recovered, into two groups (although data on only 210 patients were reported). The intervention group (n=104) participated in an intensive program with daily activities over WeChat managed by the treatment team (psychiatrists, nurses, counselors). 
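As a side note on the trial with unclear allocation [21] described above, the discrepancy between the reported significance and the reported endpoint values can be checked directly from the summary statistics. The sketch below is illustrative only: it assumes 30 completers per group (as stated) and an equal-variance two-sample t-test, whereas the exact test and data handling used by the original authors are not reported; the function is SciPy's ttest_ind_from_stats.

```python
from scipy import stats

# Endpoint HAM-D scores reported in [21]: mean +/- SD, with an assumed n = 30 per group
mean_wechat, sd_wechat, n_wechat = 7.21, 3.73, 30
mean_control, sd_control, n_control = 7.63, 4.12, 30

# Two-sample t-test computed from summary statistics alone
t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=mean_wechat, std1=sd_wechat, nobs1=n_wechat,
    mean2=mean_control, std2=sd_control, nobs2=n_control,
    equal_var=True,
)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")  # approximately t = -0.41, p = 0.68
```

Under these assumptions the recomputed two-sided p-value is approximately 0.68, consistent with the value quoted above and with the conclusion that the reported between-group difference could not be reproduced from the published means and standard deviations.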
The WeChat program was tailored according to an individual assessment to include psychoeducation (focused on controlling symptoms and encouraging medication adherence), personalized life and social skills guidance via text messaging, video games, and a weekly group chat with nurses. The comparison group (n=106) received usual discharge care. Outcomes evaluated were the General Self-Efficacy Scale (GSAS), knowledge of depression (assessed by the Insight and Treatment Attitudes Questionnaire, ITAQ), and the MMAS. At the three-month study endpoint, the scores on all the outcome scales were significantly improved from baseline in the WeChat group, but not in the usual care group. The WeChat group also had significantly better scores than the comparison group at the endpoint. The second study by Xie et al. was a single-group study [23] in patients with MDD (n=108), recently discharged and in remission (24-item HAM-D score ≤8), using a WeChat program similar to that of the first study [22]. Outcome assessments again included the ITAQ and MMAS, with scores reported at the one-month and six-month time points (but not at baseline). ITAQ and MMAS scores of patients at six months were significantly improved from those of the one-month time point. Discussion To our knowledge, this systematic review is the first to examine the strength of evidence for mHealth for depression in China. Our literature search found only seven studies of mHealth, all involving the WeChat platform. However, as assessed by the Cochrane Risk of Bias tool, the quality and reporting of studies were poor, with important information lacking and weaknesses in the study methodology. Randomization method, allocation concealment, and blinding of participants and assessors were often not reported or addressed. The study methodology was heterogeneous, with intensive, multifaceted, study-specific WeChat interventions that were difficult to compare across trials. All the studies used usual care as the comparison condition, which does not control for the intensity of professional contact and attention provided by all the WeChat interventions. Of the studies that included group chats, there is little information on the format (WeChat supports video chat, telephone calls, voice messaging, and texting) or intensity of clinician-to-patient interaction, and whether this was standardized. The outcome assessments also varied from study to study. With those caveats, the seven studies from China reported that mHealth services provided via WeChat appeared highly effective for reducing depressive symptoms, increasing medication adherence, and improving quality of life. One potential benefit of mHealth is its scalability for delivering services to more patients in China. There are shortages in professional mental health staff, with only 2.19 psychiatrists and 5.51 registered psychiatric nurses per 100,000 people [24]. These resources are centralized in major institutions with scant outpatient coverage. In 2018, 97% of psychiatric services were provided in hospitals [25]. The studies reviewed here all delivered similar elements such as psychoeducation and self-management, supervised group chats, and contact with a multidisciplinary team and/or nursing staff. In this regard, the WeChat platform became an extension of virtual outpatient care. While Chinese patients prefer in-person medical interactions, they also have positive attitudes towards mHealth for depression treatment [11]. 
The advantages of these WeChat programs for patients could include greater accessibility (especially outside urban centers), more discreet delivery, lower costs, empowerment of patients and their families, and, especially important in the context of the global COVID-19 pandemic, reduced need for in-person interactions. However, there are also limitations and barriers to the wider use of mHealth in China and elsewhere. For example, access to the internet in rural China is still lagging. By the end of 2021, 57.6% of people in rural areas were using the internet, compared to 81.3% of people in urban areas [10]. Despite rural usage rapidly increasing from 35.4% in 2017 [10], there remains a large digital divide that compounds barriers to mental healthcare access. There are several gaps in the available data representing areas for further research. Almost all the studies focused on patients recently discharged from hospitals or mental health centers, overrepresenting seriously ill patients from urban tertiary centers. Only one study involved acute treatment of outpatients at a hospital clinic, also based in an urban location. Data for patients in underserved regions, especially rural China, remain lacking. Workload burden is also a concern. Every study implemented WeChat platforms that required the involvement of healthcare professionals. A recent study surveying 225 Chinese mental health professionals about eHealth showed that roughly half are concerned about the increased workload, and two-thirds feel they would have inadequate time and energy [11]. Future studies will need to consider the implementation of mHealth in ways that are effective for patients and acceptable to healthcare providers. Additionally, privacy and security issues were not explicitly discussed in the included studies. Data privacy in mHealth is a major issue globally, and China has unique challenges in this regard. Some health data, such as genetic data, are already considered and regulated as property of the state [26]. China has introduced recent regulations, including the Personal Information Protection Law applying to any online data collection [27], but effective mHealth privacy regulation remains an ongoing area of development in China, as it does in many other countries [28,29]. Future mHealth studies should address privacy and security safeguards within the context of evolving regulatory changes. Research into mHealth is challenged by the large number but low quality of apps. Even in North America, where markets are more mature and more diverse options are available, the majority of apps are not evidence-based and some are occasionally harmful [30]. In China, the total number of mental health-related apps available on online storefronts is growing rapidly, and most of them are also not evidence-based [31]. With an often-overwhelming number of options, there is a great need for future research to refine and identify the best mHealth implementations and strategies to guide patients and practitioners. Conclusions In summary, compared with the expanding use of mHealth applications in other countries, mHealth in China is still in the early stages. There is preliminary positive evidence for using WeChat to improve outcomes for patients with depression discharged from the hospital. However, attention to major gaps in the evidence base and increased rigor in study methodology are required to adequately demonstrate the efficacy, safety, acceptability, and accessibility of mHealth applications in China.
2022-07-28T15:08:44.196Z
2022-07-01T00:00:00.000
{ "year": 2022, "sha1": "8ba32ef091d4cf98354d1a3832bf176d670f550b", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/101554-mobile-health-applications-for-depression-in-china-a-systematic-review.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9a840b455a09c70279c871e0fa161469b1f2f419", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
31505559
pes2o/s2orc
v3-fos-license
Non-local modulation of the energy cascade in broad-band forced turbulence Classically, large-scale forced turbulence is characterized by a transfer of energy from large to small scales via nonlinear interactions. We have investigated the changes in this energy transfer process in broad-band forced turbulence where an additional perturbation of flow at smaller scales is introduced. The modulation of the energy dynamics via the introduction of forcing at smaller scales occurs not only in the forced region but also in a broad range of length-scales outside the forced bands due to non-local triad interactions. Broad-band forcing changes the energy distribution and energy transfer function in a characteristic manner leading to a significant modulation of the turbulence. We studied the changes in this transfer of energy when changing the strength and location of the small-scale forcing support. The energy content in the larger scales was observed to decrease, while the energy transport power for scales in between the large and small scale forcing regions was enhanced. This was investigated further in terms of the detailed transfer function between the triad contributions and observing the long-time statistics of the flow. The energy is transferred toward smaller scales not only by wavenumbers of similar size as in the case of large-scale forced turbulence, but by a much wider extent of scales that can be externally controlled. I. INTRODUCTION The dynamics of kinetic energy plays a central role in turbulent flows. The nonlinear term in the Navier-Stokes equations is responsible for the transfer of energy between any three wavevectors that form a triad in spectral space [1]. Along with the viscous and forcing terms this controls the production, transfer and dissipation of energy in the system. The triadic interactions have been studied for decaying and forced turbulence by many authors (for a review see [2]). Throughout the years various types of large-scale forcing methods [3,4,5,6,7,8,9,10,11] have been proposed to sustain quasi-stationarity in numerical turbulence as an idealized form of turbulent flow. The aim of such numerical experiments was to investigate the basic concept of the Kolmogorov (K41) theory [12] that proposes an inertial range in the kinetic energy spectrum and local transfer of energy within this range. The turbulent kinetic energy is on average transferred locally from larger to neighboring smaller scales. The purpose of this paper is to numerically investigate the processes associated with the flow of energy in a turbulent flow. Specifically, we consider modulated turbulence in which the modifications involve the supplementary forcing in a wide range of modes located in an inertial range of the flow. In the literature, mainly turbulence with forcing restricted to the large scales has been examined in detail [2]. The small scale behavior was found to be energetically quite insensitive to the type of forcing and at sufficiently high Reynolds numbers a well-developed inertial range was observed [13]. Against this background, we extend the use of forcing methods and investigate their application directly in the inertial range, thereby focusing particularly on the competition between transfer and forcing. 
We quantify the dominant alterations due to the broad-band forcing in terms of changes in the energy cascading processes. We pay attention to the energy transfer function and consider changes that arise in the contributions from 'local', 'non-local' and 'distant' triadic interactions. Compared to traditional large-scale forced turbulence, we observe a strengthening of the contributions of non-local interactions, leading to a modification of the inertial range spectrum. High-resolution direct numerical simulations of turbulence that measure the influence of individual terms in the Navier-Stokes equations on the triadic interactions have been reported [14,15,16,17,18,19,20]. It was found that the energetically dominant triadic interactions involve sets of three modes in which the magnitude of the wavevector of one of the modes differs considerably from the other two. This suggest that statistics of smaller scales may be affected by larger scales. These dominant processes are not in contradiction with the Kolmogorov theory because the energy is mainly exchanged between the two modes of quite similar wavevector-size [19]. Only a small net energy transfer toward larger wavenumbers arises that involves a detailed cancellation between many individual triad transfers [20]. The spectral space dynamics is characterized by a multitude of separate transfer-processes among various modes. These contributions can be collected in pairs with opposite sign and almost the same magnitude. In total, this leads to a large number of 'near-cancellations' and hence only a comparably small net effect remains that constitutes the well-known 'downward cascading' toward higher wavenumbers in spectral space. This was confirmed with the use of helical mode decomposition in [20]. The dynamics of actual turbulent flows seen in nature is usually characterized by an enormous number of interacting scales, often perturbed by geometrically complex boundaries and influenced by additional forces such as rotation and buoyancy. This can lead to inhomogeneity and anisotropy, which are not covered directly in the classical view of the Kolmogorov energy cascade and may express themselves in non-local interactions of various particular scales of motion. The complexity of such systems motivated us to study in more detail forcing methods that simultaneously perturb a prescribed range of scales [21]. Such 'broad-band' agitation of various scales of motion is observed experimentally in turbulent drag reduction by fibre suspension [22,23], flows through porous media [24] and over tree canopies [25]. In these cases the energy is transferred abruptly to small scales when the flow reaches an obstruction. Various other types of flows also exhibit turbulent motions that coexist at different scales [26]. To explore the possibilities of a broader application of forcing methods in turbulence modeling and concurrently examine the energy dynamics in flows that do not directly follow the classical Kolmogorov −5/3 scaling we employ numerical simulations of broad-band forced turbulence. The forcing studied in this paper represents a continual addition/removal of energy from a broad range of scales in the system, thereby providing the possibility of altering the characteristic −5/3 slope in the kinetic energy spectrum as predicted by the K41 theory. Specifically, as indicated in Fig. 1, we apply the forcing to two regions. 
The large-scale forcing k ≤ k 0 classically agitates the largest scales in a flow while the additional band k 1 < k ≤ k 2 is located in a region of the inertial regime, to allow a direct competition with the nonlinear transfer term. For inertial-range scales broad-band forcing introduces explicit energy injection next to the transfer-term. We varied the spectral support and strength of the high-k band to investigate the modulation of the turbulence that develops. This distinguishes it from the classical forcing of large scales only. In this paper we compute changes in the energy distribution associated with the broad-band forcing and ob- serve a characteristic alteration in the spectral energy transfer compared to the classical Kolmogorov cascading. This alteration expresses itself by additional local minima and maxima in the transfer function. It is well known that in cases with large-scale forcing only, negative values are found for the transfer at the smallest wavenumbers indicating the energy injection at these scales. The positive values for the transfer that arise for all other wavenumbers indicates the energy cascading process to smaller scales. In our case of broad-band forcing in the inertial range, additional negative regions appear in the transfer function. These coincide with the additional local injection of energy. Such a negative region is bordered by nearby additional maxima in the transfer. These characterize the associated increased energy transfer to scales just larger or just smaller than the broad-band forced region. Forcing applied to different spatial scales simultaneously allows a non-local modulation of the energy distribution compared to the reference Kolmogorov case. To quantify the alterations in the energy transfer we use a decomposition of the velocity field closely following [16] and investigate the magnitude of the contributions from various spatial scales to the overall energy transfer. The main finding of this study pertains to the role of broad-band inertial range forcing in modifying the natural energy cascading process. This is understood explicitly in terms of changes in the detailed non-local energy transfer. In addition, we illustrate and quantify the mechanism of enhancement of the total energy transfer to smaller scales arising from broad-band forcing and the depletion of the energy-content in the large scales. Agitation of certain high wavenumbers can affect well separated low wavenumber components in a flow. These findings may be relevant for problems that involve the control of turbulent flow in complex geometries in which various scales of motions are simultaneously agitated, e.g., in compact heat-exchangers [24]. Further applications of such broad-band forcing may be connected with the observed modulations of transport properties in physical space leading to an enhanced scalar dispersion rate [21]. The organization of this paper is as follows. The mathematical formulation of the problem is given in Sec. II where the computational method and the energy transfer terms are also described. The energy spectra of broadband forced turbulence and the modulation of the energy transfer are investigated in Sec. III. In Sec. IV we present a more detailed view of the energy transfer processes by computing partitioned energy transfer function over various spatial scales. The paper closes with a summary in Sec. V. II. COMPUTATIONAL FLOW MODEL An overview of the computational model is given in this section (II A). 
The forcing method in the broad-band context is described subsequently (II B). In addition, the energy transfer between different triads is partitioned in a number of contributions (II C) that will be studied numerically in Sec. IV. Finally, simulation details are given (II D). A. Equations of motion The incompressible Navier-Stokes equations in spectral (Fourier) representation can be written as where u α (k, t) is the velocity field coefficient at wavevector k (k = |k|) and time t [1]. The non-dimensional kinematic viscosity ν is the inverse of the computational Reynolds number (Re = 1/ν). The nonlinear term reads and the forcing term F α (k, t) is specified in section II B. The tensor M αβγ in (2) accounts for the pressure and incompressibility effects: in which Taking the inner product of (1) and u * α (k, t), where the asterisk denotes the complex conjugate, we obtain the energy equation The spectral energy density is denoted by E(k, t) = 1 2 u * α (k, t)u α (k, t). The rate of energy exchanged at wavevector k with all other modes in the system is characterized by the energy transfer function The rate of energy provided by the forcing term is and the energy dissipation rate present in (5) reads The three terms T (k, t), T F (k, t) and ε(k, t) represent the energy dynamics in the system that each typically act in distinct wavenumber regions. The forcing term T F (k, t) is non-zero in the forced modes only. In this paper the collection of forced modes will always contain a low wavenumber band corresponding to large-scale forcing of the flow. In addition, higher wavenumber contributions will be included in T F (k, t). In contrast, the energy dissipation rate ε(k, t) is defined in the entire spectral space, but it is dynamically important primarily for the high wavenumber range. Finally, the transfer term T (k, t) is basic to the development of an energy cascade and is a dominant contribution for wavenumbers in an inertial range [1]. The change of the total energy E in the system is connected with its viscous dissipation and the total effect of the forcing. In fact, introducing we find where ε(t) = k ε(k, t) and T F (t) = k T F (k, t). We used the fact that the total energy transfer T (t) = k T (k, t) = 0. The injection of energy occurs only in the forced region. This keeps the whole system in a quasistationary state. Normally, the forced region is restricted to the largest scales in a flow represented by the smallest wavenumbers [5,10]. The energy introduced in the large scales is transferred to smaller scales and dissipated primarily in very localized flow-features of viscous lengthscales. By the introduction of an additional source of energy in the inertial range we will study the perturbation of the energy cascading process by the forcing. The forcing method adopted here will be presented next. B. Forcing method Forcing is achieved by applying an additional driving F α (k, t) to the velocity field in Fourier space, cf. (1). Conventionally, the turbulent cascade develops as a statistical equilibrium is reached, characterized by the balance between the input of kinetic energy through the forcing and its removal through viscous dissipation. In literature ( [3,4,5,6,7,8,9,10,11]), we may distinguish several numerical approaches to forced turbulence that all refer to the agitation of the largest scales of motion. Here, we modify such classical forcing procedures by allowing for the simultaneous agitation of a broader range of intermediate-k modes as depicted in Fig. 1. 
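To make the shell-summed quantities of Sec. II A concrete, the following NumPy sketch computes the energy spectrum E(k) and the dissipation spectrum from a spectral velocity field on an N^3 grid. It is not the authors' code: the unit-width spherical shell binning, the normalization of the Fourier coefficients, and the conventions E(k,t) = ½|u(k,t)|² per mode and ε(k,t) = 2νk²E(k,t) are assumptions consistent with the definitions given above.

```python
import numpy as np

def shell_spectra(u_hat, nu, L_b=1.0):
    """Shell-summed energy and dissipation spectra from a spectral velocity field.

    u_hat : complex array of shape (3, N, N, N); Fourier coefficients of velocity,
            normalized so that 0.5 * sum_k |u_hat|^2 equals the total kinetic energy.
    Returns shell wavenumbers, E(k), and eps(k) = 2 * nu * k^2 * E(k).
    """
    N = u_hat.shape[1]
    k1d = 2.0 * np.pi / L_b * np.fft.fftfreq(N, d=1.0 / N)    # k_alpha = 2*pi*n_alpha / L_b
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k_mag = np.sqrt(kx**2 + ky**2 + kz**2)

    e_density = 0.5 * np.sum(np.abs(u_hat) ** 2, axis=0)      # E(k,t) per mode
    shell = np.rint(k_mag / (2.0 * np.pi / L_b)).astype(int)  # unit-width shells S_n

    n_shells = shell.max() + 1
    E_k = np.bincount(shell.ravel(), weights=e_density.ravel(), minlength=n_shells)
    k_centers = 2.0 * np.pi / L_b * np.arange(n_shells)
    eps_k = 2.0 * nu * k_centers**2 * E_k                     # dissipation spectrum
    return k_centers, E_k, eps_k
```

Summing E_k and eps_k over all shells gives the total energy E and the dissipation rate ε that enter the global balance between forcing input and viscous dissipation described above.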
We study two ranges of forcing: the classical largescale forcing (k ≤ k 0 ) and small-scale forcing localized in the spectral region where the transfer of energy T (k, t) is important (k 1 < k ≤ k 2 ). By narrowing or widening the width of the forced bands, along with a change in their location in spectral space we can control several aspects of the energy-dynamics. The strength of forcing is controlled by the amount of energy introduced to various regions in spectral space. We expect the small-scale forcing band to influence the inter-scale energy transfer process not only between scales of similar size but at a wider spectrum of scales. This may be understood globally as follows. The process of energy cascading is mainly interpreted via the resulting local transfer of energy in spectral space [19]. However, this total energy transfer results from many nonlocal contributions and these may be directly altered by the additional small-scale forcing. Correspondingly an influence on the overall energy cascading process may occur over an extended wavenumber range. We quantify this effect by evaluating the nonlinear interactions among the various modes while they are being perturbed by the broad-band forcing. In this paper we adopt the recently proposed fractal forcing [27], which involves a power-law dependence of F α on the wavenumber: where the coefficient β = D f − 2 is connected with the fractal dimension D f of the stirrer and ε w (k) is the energy input rate at mode k. The set of forced modes K is composed of bands K m,p (m ≤ p) which consist of in terms of the size of the computational domain denoted by L b . In the simulations we always force the first shell S 1 and a single high-k band K m,p , if not stated otherwise. The classical large-scale forcing of the first shell S 1 has a constant energy injection rate ε w,1 in (11) while K m,p has a constant strength ε w,2 and a support in spectral space controlled by m and p: The vector e in (11) is given by [27]: This vector consists of two parts, either parallel or perpendicular to the vector u(k, t). In this forcing procedure, we have control over the energy input rate, the range of forced modes and the effective geometrical complexity of the stirrer represented by the fractal dimension. The summation over all forced modes of u * α (k, t)F α (k, t) yields a total energy input rate given by: where ε w = ε w,1 + ε w,2 . The energy input leads to a quasi-stationary state described by the energy equation This characterizes the energy dynamics in the system at the most global level. We observe that this forcing implies a constant energy injection rate that results in a fluctuating total energy E and a fluctuating total energy dissipation rate with mean ε w . In the next subsection a more detailed description of the energy dynamics will be given based on the processes that are responsible for its transfer. C. Energy transfer A detailed investigation of the energy transfer in largescale forced turbulence [19] shows that the dominant triadic interactions occur between wavevectors of quite different lengths. Hence, large-scale forcing may be directly involved in the dynamics of much smaller scales [28]. The interactions are roughly classified as "local" when the sizes of all wavevectors in a triad are similar, "non-local" when the scale separation is about a factor 10-15 and "distant" when the separation is much larger [29]. 
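A rough illustration of the band-restricted, power-law forcing of Sec. II B is sketched below. The amplitude scales as k^β with β = D_f − 2 and is rescaled so that the total energy input rate in the band equals the prescribed ε_w. For simplicity the forcing is taken parallel to the local velocity, which is only part of the parallel/perpendicular construction of the vector e in [27]; the per-mode energy distribution here is therefore an assumption rather than the published scheme.

```python
import numpy as np

def fractal_forcing(u_hat, k_mag, band_mask, eps_w, D_f=2.6):
    """Power-law ("fractal") forcing restricted to a band of wavenumbers.

    u_hat     : (3, N, N, N) spectral velocity field
    k_mag     : (N, N, N) array of wavenumber magnitudes |k|
    band_mask : boolean (N, N, N) array selecting the forced modes, e.g. k1 < |k| <= k2
    eps_w     : prescribed total energy input rate for this band
    Returns F_hat such that sum_k Re(conj(u_hat) . F_hat) = eps_w.
    """
    beta = D_f - 2.0                                              # exponent from the fractal dimension
    safe_k = np.where(k_mag > 0.0, k_mag, 1.0)                    # guard the k = 0 mode
    weight = np.where(band_mask, safe_k**beta, 0.0)

    # Simplification: force along the local velocity, so each forced mode gains energy.
    u_mag = np.sqrt(np.sum(np.abs(u_hat) ** 2, axis=0))
    e_par = u_hat / np.where(u_mag > 0.0, u_mag, 1.0)

    F_hat = weight * e_par
    injection = np.sum(np.real(np.conj(u_hat) * F_hat))           # current energy input rate
    return F_hat * (eps_w / injection)
```

In a simulation, such a term would be evaluated each time step for the low-k shell S_1 (with rate ε_w,1) and for the high-k band K_{m,p} (with rate ε_w,2) and added to the right-hand side of the spectral momentum equation.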
It was shown that the transfer of energy reaches maximum values for triads with two wavevectors of similar size and one with quite different length [19]. Although, the interactions between triads can be seen mainly as non-local, the dominant net energy transfer is local, i.e., occurring between similar scales [13,30,31]. The interactions produce forward and backward energy transfer that combined result in a small net forward energy transfer because of the detailed balance between contributions that virtually cancel each other [20]. The forward cascade in the inertial range was found to be dominated by local and non-local interactions, while the distant interactions do not significantly transfer energy [29]. All these findings concern the classical turbulence forced at the largest scales. Against this background, we ask what the turbulence response will be to a broad-band perturbation of the energy transfer processes? In recent literature a somewhat related study was reported in [32]. Decaying turbulence that starts from an initial condition with an energetically strongly enhanced small-scale band of modes was studied. The presence of the extra small-scale band was found to reduce the intensity of the developing turbulence by enhancing the non-local energy cascade directly towards smaller scales. This removes the kinetic energy more efficiently. The energy feeding mechanisms and energy transfer also attract much attention in transitional and turbulent flows with an active control [33]. The modulation induced by the broad-band forcing has its consequences not only in the spectral space dynamics of a flow but also in its physical space transport properties [21]. To analyze the response of turbulence to the additional broad-band perturbation in more detail we apply previously developed methods used in the examination of energy transfer in large-scale forced turbulence [16]. Referring to Fig. 2, the energy transfer between a wavevector k = (k 1 , k 2 , k 3 ) and all pairs of wavevectors p and q = k − p with p, q chosen in some prescribed regions P and Q will be investigated. Such a decomposition allows measuring the contribution of separate scales to the transfer function T (k, t). The precise specification requires a few steps that are presented next. First, we define the truncated velocity field as Based on this truncated velocity field we may compute the energy transfer involving the wavevector k and all wavevectors p and q: where The nonlinear term Ψ is defined by the convolution of the truncated fields: (19) where the sum is over all triads with p ∈ P and q ∈ Q such that p + q = k. For a statistically isotropic, homogeneous turbulence it is convenient to average over spherical shells in wavevector space. In addition, in view of the considerable computational effort involved in computing all interactions between the very large number of scales present in the flow, we introduced a slight coarse-graining in terms of the regions P and Q as shown in Fig. 2. Specifically, it was found adequate to group together contributions from four adjacent shells. Other more coarse 'groupings' of wavenumbers have been considered in the literature with the aim of extracting the dominant interaction processes at a reasonable computational effort. As an example a 'logarithmic' grouping was adopted in [16] combining contributions from bands with a width of 2 k . In this paper we will look at the interactions of four shells P at distance k p (cf. Fig. 
2) with four shells Q at distance k q that contribute to the nonlinear energy transfer to shell S k characterized by the wavenumber k. In terms of the transfer function T PQ (k, t) we may now define the required spectral transfer functions. The energy transfer term (17) gives the exchange of energy by the triad (k, p, q) where the latter two wavevectors are specified by the sets P and Q and the triangle constraint. Summing over all modes k in shell S k we obtain the exact exchange of energy in the k-th shell between k, k p and k q : We refer to T pq as the 'three-mode' transfer. The total energy transfer function T (k, t) can be computed directly from (6) or as sum of the contributions from (20): in which the 'two-mode' transfer T p is given by: The individual transfer-terms T (k, t), T p (k, k p , t) and T pq (k, k p , k q , t) give respectively more detailed characteristics of the energy transfer. The total transfer T (k, t) expresses the amount of energy transferred from (negative) or to (positive) shell S k . All three transfer-terms T , T p and T pq will be used to investigate the transfer of energy in the sequel. D. Simulation details The numerical integration of the Navier-Stokes equations (1) is done via a four-stage, second-order, compactstorage, Runge-Kutta method [34]. To fully remove the aliasing error we applied a method that employs two shifted grids and spherical truncation [35]. We consider the canonical problem of forced turbulence in a cubic box of side L b with periodic boundary conditions. Direct numerical simulations are characterized by N 3 computational points, where N is the number of gridpoints used in each direction. A detailed description of the simulation setup and the validation of the numerical procedure can be found in [21]. The components of the wavevector k are k α = (2π/L b )n α where n α = 0, ±1, ±2, . . . , ±(N/2 − 1), −N/2 for α = 1, 2, 3. The numerical simulations are defined further by the size of the domain (L b =1), the computational Reynolds number Re and the energy injection rates to the two distinct bands (ε w,1 , ε w,2 ). We will study this homogeneous turbulent flow at two different computational Reynolds numbers, i.e., Re = 1061 and Re = 4243. In case of homogeneous, decaying turbulence these Reynolds numbers correspond to R λ = 50 or 100, in terms of the initial Taylor-Reynolds number [21]. The large-scale forcing of S 1 has an energy injection rate ε w,1 = 0.15 that is used as reference case. For all simulations the fractal dimension was kept constant and equal to D f = 2.6 [27]. The smallest lengthscale that should be accurately resolved depends on the size of the box, viscous dissipation and energy injection rate. Usually it is required that k max η > 1 [5,36,37] in terms of the Kolmogorov length-scale η and the maximal magnitude of the wavevector k max = πN/L b that enters the computations. In our simulations k max η 2 indicating that the small scales are well resolved. We consider time-averaged properties of the turbulent flow. For a function h these are defined by where T is sufficiently large. We start the averaging at t 0 = 5 which corresponds to about 10 eddy-turnover times for the simulated cases. The final time was taken equal to T = 30, so all results are averaged over approximately 50 eddy-turnover times. The accuracy of this approximation to the long-time average, measured as the ratio of the standard deviation and the mean signal is less than 5% for all investigated quantities. 
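The banded transfer terms of Sec. II C can be estimated numerically by band-pass filtering the velocity field in spectral space and re-evaluating the nonlinear term pseudospectrally. The sketch below follows one common convention for shell-to-shell transfer (energy received by shell S_k from band P, mediated by the full field); it is not a literal transcription of Eqs. (17)-(22), it omits dealiasing, and FFT normalization factors are glossed over. The pressure contribution drops out of the contraction because u is divergence-free.

```python
import numpy as np

def band_filter(u_hat, k_mag, k_lo, k_hi):
    """Truncated field u^P: keep only modes with k_lo < |k| <= k_hi."""
    return u_hat * ((k_mag > k_lo) & (k_mag <= k_hi))

def transfer_from_band(u_hat, k_vec, k_mag, k_lo, k_hi, L_b=1.0):
    """Energy received by each shell S_k from the band P = (k_lo, k_hi].

    Evaluates T_P(k) = sum_{k in S_k} Re[ conj(u_hat(k)) . N_P(k) ] with
    N_P = -FFT[ (u . grad) u^P ], a pseudospectral estimate of the banded transfer.
    u_hat : (3, N, N, N) spectral velocity;  k_vec : (3, N, N, N) wavevector components.
    """
    uP_hat = band_filter(u_hat, k_mag, k_lo, k_hi)
    u_phys = np.real(np.fft.ifftn(u_hat, axes=(1, 2, 3)))

    adv = np.zeros_like(u_phys)                          # (u . grad) u^P in physical space
    for beta in range(3):
        duP = np.real(np.fft.ifftn(1j * k_vec[beta] * uP_hat, axes=(1, 2, 3)))
        adv += u_phys[beta] * duP
    N_hat = -np.fft.fftn(adv, axes=(1, 2, 3))

    density = np.real(np.sum(np.conj(u_hat) * N_hat, axis=0))
    shell = np.rint(k_mag / (2.0 * np.pi / L_b)).astype(int)
    return np.bincount(shell.ravel(), weights=density.ravel())
```

Summing this quantity over a complete set of bands P recovers the total transfer T(k), and additionally restricting the advecting field to a second band Q gives a quantity analogous to the three-mode terms T_pq discussed above.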
The energy spectra presented in this paper are shelland time-averaged. Moreover, we focus on compensated spectra E c in which we use non-dimensional Kolmogorov units: The compensation of the spectrum is not strictly required to observe the characteristic changes in the energy distribution, but as it gives more information about the dominant scales present in a flow it will be used throughout. III. BROAD-BAND FORCED TURBULENCE To investigate the energy dynamics in broad-band forced turbulence, first the influence of variation of the strength and location of the second forced band K m,p on the global characteristics and spectrum of the flow is presented (III A). Then we examine in more detail the effects of these variations on the energy distribution and transfer characteristics in the system (III B). A. Energy distribution in forced turbulence We first concentrate on the application of the high-k forcing band at different locations in spectral space. We apply a constant energy input rate ε w,2 = 0.15 to this band. Simultaneously, the large-scale forcing to the first shell S 1 is ε w,1 = 0.15. The computational Reynolds number is Re = 1061. We forced the bands K p,p+3 for p = 5, 9, 17, 25. The parameters of these simulations with some of the statistics are further presented in the Appendix (Runs 1 and 14 − 17 in Table I and II are concerned here). The total kinetic energy, energy dissipation rate and Taylor-Reynolds number are shown in Fig. 3 as a function of the location of the left-boundary p of the high-k forced band K p,p+3 . The first data point refers to the classical large-scale forcing only (Run 1). Application of broadband forcing in the different bands changes the characteristics of the flow modifying primarily the amount of small scales. This forcing in the second band is seen to increase the energy dissipation in the system. The Kolmogorov dissipation-scale and the Taylor-Reynolds number decrease, suggesting that the characteristic scale at which dissipation plays an important role is shifted to smaller scales. We notice that the total energy in the system is only slightly affected by the introduction of forcing. Moving the broad-band forcing to very small scales implies that there is no longer a strong influence on the flow because the energy injected in the small scales appears to be also dissipated immediately. The compensated, shell-and time-averaged, energy spectrum for different locations of the forced region K p,p+3 is shown in Fig. 4. We may observe that the forcing causes a non-local depletion in the energy spectrum for the larger scales while the tail of the spectrum is less affected. The pile up in the energy spectrum near the forcing region is characteristic of the explicit high-k forcing and is suggestive of a 'blocking' or reverse cascading. If the separation between S 1 and the high-k band is reduced, then the interaction is stronger and a considerable depletion of the energy levels in the largest scales arises. This is in agreement with the large-scale forced turbulence results, where the local and non-local interactions were found to be energetically dominant while the distant interactions were mainly responsible for transferring structural information [29]. An effective modulation of turbulent quantities is possible not only by a change in the range of forced modes but also via a change in the energy input rate. 
To investigate this we adopted an energy injection rate ε w,1 = 0.15 for the large-scale forcing in S 1 and we vary the intensity of forcing in the second band by changing ε w,2 . We adopted the following values for ε w,2 : 0.07, 0.15, 0.30, 0.45, 0.60, 0.75 or 0.90 and considered forcing of four or eight shells in K 17,20 or K 17,24 , respectively. The parameters and characteristic quantities can be found in Table I and II as Runs 2 − 7 and 8 − 13. The total energy in the system is only slightly affected by the forcing-strength in the second band as shown in Fig 5. An increased forcing strength introduces additional energy into the flow at small scales that is dissipated very efficiently. This is expressed by the linear increase in ε t . In Fig. 6 we present the compensated energy spectrum for various strengths of the forcing ε w,2 . The energy in the forced region reaches higher values with increasing ε w,2 . This may be further observed in terms of the energy maximum E max = max k (E c (k)) in Fig. 7. Changing the strength of the broad-band forcing induces a characteristic depletion in the larger scales. This suggests that the additional forcing term enhances the nonlinear interactions, which influence various scales quite far away from the forced region. The energy that is injected at the larger scales is transferred even more effectively through the cascade as ε w,2 increases. This effect appears similar to the so-called spectral short-cut observed in nature and experiments [25]. In case of such a short-cut the energy from larger scales is diverted quite directly to fine scales largely by-passing the traditional cascading. This mechanism was explained in the case of flow over forest-canopies in [25]. We will investigate it in more detail in the next section. While the energy-maximum E max shifts to higher values with increasing strength of the high-k forcing, the location of the peak moves towards larger scales. This may be seen from the value of (kη) max at which the maximum of E c (k) is attained (cf. Fig. 8). If we inject the same amount of energy to K 17,24 that is two times wider than K 17,20 the maximum of the response decreases (cf. Fig. 7) while the location of the peak moves towards smaller scales (cf. Fig. 8). These numerical experiments are in agreement with observations for decaying turbulence in which initially additional energy is assigned to small scales [32]. The non-local energy cascade toward the small scales was found to increase remarkably during the initial period of decay. This is quite similar to our observed increase of the dissipation rate arising from an increased forcing of the high-k band. A final quantification of the non-local effect on the spectrum that arises from the high-k forcing is collected in Fig. 9. Here we displayed the normalized accumulated energy in the consecutive shells. As pointed out, varying the properties of a flow in a specified spectral region can change the behavior of a flow well outside this region. In terms of S E (k) we notice that close to 90% of the energy is present in the first 10 shells (Fig. 9) when only the large-scale forcing is applied. Influencing the flow at smaller scales in K 17,20 is seen to remove most of the energy from these larger scales while there is only a slight impact on the dynamics of small scales. This effect becomes more pronounced with increasing ε w,2 . The underlying changes in the energy transfer will be considered in more detail in Sec. IV. B. 
Energy transfer spectra The transfer of energy in turbulence can be described in spectral space as interactions of triads of wavevectors (k, p, q) that form triangles, i.e., k = p + q. Direct numerical simulation with large-scale forcing shows that non-local interactions between wavevectors combine into a local energy flow [13,30]. By applying forcing that is located in a high-k range of spectral space we perturb the 'natural' cascading process. The associated changes in the transfer of energy will be investigated in more detail in this subsection. Specifically, we focus on the energy transfer and energy transport power spectra. In large-scale forced turbulence energy is injected into the first shell and removed by the transfer term. This gives rise to negative values for the energy transfer in the forced region. In the higher shells the transfer function takes on positive values which illustrates the transfer of energy through the cascade toward higher k. By invoking the broad-band forcing we influence this basic energy cascade. This is clearly seen in the energy transfer spectrum which develops distinctive regions where T (k) = T (k, t) t is negative. In Fig. 10 the effect of variations in the spectral support of the forcing is shown while Fig. 11 characterizes changes in the transfer function due to an increased forcing strength of the high-k band. The transfer function reaches lower values between the low-and high-k forcing regions compared to the largescale forced case. The reverse situation appears near the high-k forced band where the transfer increases with an increase of the forcing intensity. This is in agreement with the energy spectra presented earlier, where we observed the depletion of energy between the forced regions. This effect can be observed more directly from spectra of energy transport power that will be presented next. The energy transport power gives the rate at which energy is transferred from shells k ′ < k to those with k ′ > k: where k max = πN/L b is the cut-off wavenumber. We present the time-averaged transport power spectrum Π(k) = Π(k, t) t in Fig. 12 for forcing with various strengths in the K 17,24 band. In case of large-scale forcing only, the transport power is positive for all k as the energy is transferred toward smaller scales and reaches zero for large k indicating the general property of the total transfer function T (t) = 0. The application of high-k forcing for k 1 < k ≤ k 2 changes this well-known picture. First, we note that the values of the transport power are all similar in the largest scales, where the flow is governed by the same energy input. The transport power for 0 < k ≤ k 1 becomes larger at higher ε w,2 . A striking change of the behavior arises for k near and inside the high-k forced region. The transport power spectrum even assumes negative values for k ≈ k 1 . The observed behavior of the transport power in Fig. 12 is partly due to the relatively low Reynolds number that was used. At sufficiently high Reynolds numbers, the dissipation scales are much more separated from the high-k forced scales. In this case a plateau of Π will arise at low wavenumbers: Π(k, t) ≈ ε w,1 for k low enough [1]. This property is not observed at the computational Reynolds number considered so far. In cases specified by Runs 18 − 19 we consider the flow at a four times higher computational Reynolds number. The overall results for the energy spectra and energy transfer were found to be qualitatively the same as in the lower Reynolds number cases. 
However, a plateau may now be observed in Fig. 13, where we present the transport power for the higher Reynolds number. In this case the transport power does not decrease below zero in the forced region. The second forcing band is well separated from the dissipation region and the transport power in this band is much larger, approaching a maximum 0.23 that is near the energy injection rate ε w,2 . In this section we have looked at the effect of high-k modulation of the energy cascading process that leads to an increased energy dissipation in small scales. This process is supported by an increased energy transfer to smaller scales via nonlocal triad interactions. The effect of increased energy rate by the application of broad-band forcing is seen in the energy transfer and transport power spectra. In the next section we will look more closely at the interactions of various scales of motion under the influence of broad-band forcing by considering the twoand three-mode transfers T p and T pq introduced in (22) and (20). IV. TWO-AND THREE-MODE INTERACTION OF SCALES The energy dynamics of turbulent flow is generally discussed in terms of the transfer of kinetic energy from larger to smaller scales through nonlinear interactions. The statistical properties of turbulence are determined by these interactions. In the previous section we have shown how additional broad-band forcing of inertial range scales can modify the classical picture of the Kolmogorov cas- cade. To investigate the observed turbulence modulation effects in more detail we consider the underlying two-and three-mode energy transfer terms in this section. This will clarify to some extent the changes in the various nonlinear interactions that give rise to the observed alterations in the spectra and energy transfer. We start with the three-mode transfer that is averaged in time T pq (k, k p , k q ) = T pq (k, k p , k q , t) t and split this term into its positive and negative parts: in which with a similar definition for the negative part: In terms of these contributions we examine the normalized triad energy transfer where T min (k) = − min kp,kq Through the scaling of T − pq and T + pq with T min and T max respectively, the normalized transfer is well suited to characterize the overall structure of the three-mode transfer function, even in cases in which the order of magnitude of T pq varies considerably. The normalized energy transfer T pq (k, k p , k q ) is plotted in Fig. 14 for three different wavenumbers k/(2π) = 14, 42, 82, based on Run 19 in which R λ ∼ = 75. The three k-values that are selected correspond to wavenumbers below the forced region (k/(2π) = 14) or to wavenumbers that are considerably larger. Such contour maps for T pq can also be found in [19] for the case of large-scale forced turbulence. For completeness, we also presented the results from such large-scale forced turbulence (Run 18) comparing these directly to the broad-band forced turbulence (Run 19). This contour map is shown in Fig. 15 for k/(2π) = 46. The strongest interactions are observed for modes with wavenumbers between the largest forced scales and the high-k forced region as can be seen in Fig. 14(a). As in the case of large-scale forcing only we observe very strong interactions between Fourier modes of considerably different scales. These are located in the corners of the rectangular domains in the k p -k q plane. Distant interactions are well separated from the origin in these figures. 
Their contribution to the transfer is seen to be very small, as also noticed earlier in the literature [19]. The change of sign in the transfer function that occurs at k p = k and k q = k respectively on the k p -k q planes indicates that in this region the energy is mainly transferred to higher k. The most efficient transfer takes place between two wavevectors of similar size and one of quite different size as seen in the corners of the rectangular area in Fig. 14. This is in agreement with previous numerical experiments reported by various authors [2,16,19]. However, compared to the case of large-scale forcing only, we now observe quite extended, highly energetic interactions with the high-k forced region. The second forced band causes regions with high intensity of interactions to be much wider compared to the case of large-scale forcing only. This is visible directly in Fig. 15. The regions with positive and negative transfer are extended from the corners to the wavenumber regions where the actual application of forcing in the second band occurs. The energy is exchanged predominantly between scales that are more separated than in case of the large-scale forced flow where the dominant interactions occur only in the corners. This is a clear indication of the stronger non-local interactions, mentioned earlier. For further clarification of energy transfer processes we turn to the time-averaged two-mode energy transfer T p (k, k p ) = T p (k, k p ) t , which gives information about the interactions involving a sum over all k q wavenumbers at fixed k and k p . The sum involves all k q wavenumbers that are constrained by the triadic interactions, i.e., their length may vary between |k − p| and |k + p|. We normalized the two-mode transfer function T p (k, k p ) in a similar manner as T p (k, k p , k q ): where T ± p , T min and T max are defined in terms of T p in a manner analogous to the definitions in (26), (27), (28) and (30). In Fig. 16 we plotted the contour map of T p (k, k p ). For larger wavenumbers this quantity was found to look quite similar to the case of large-scale forced turbulence. The two-mode transfer function changes sign from negative to positive at k = k p indicating a downward energy flow. Comparing this to the largescale forced turbulence we observe (i) strong influence of forcing in the regions where it is applied (denoted with dashed lines), (ii) extended negative energy transfer region with comparatively high magnitude above the k = k p line, (iii) amplification of the backward energy transfer indicated by the positive region for small k and large k p . This region is separated from the intense negative energy transfer region by the indicated accumulation of contour lines above the k = k p line appearing as the curved black line. A more quantitative overview is plotted in Fig. 17 displaying the two-mode transfer function in the range k p /(2π) = 30, 34, . . . , 94. This clearly shows the cascading character of the energy flow from larger to smaller scales in the system. The modification due to the highk forcing expresses itself by the sequence of one slightly positive, two quite negative and one quite positive local extrema. The intensity of the energy transfer decreases with increasing wavenumbers as less energy needs to be transferred. This corresponds directly to the magnitude of T min (k) and T max (k) used in the normalization of T p (k, k p ) (31). 
The part in which the transfer is negative is much wider in the broad-band forced case compared to the large-scale forced turbulence results. We conclude by considering the effect of varying the forcing strength ε w,2 at a characteristic wavenumber k p /(2π) = 30 on the two-mode energy transfer function T p (k, k p ). This is shown in Fig. 18. In the large-scale forced case at ε w,2 = 0 the transfer is very small compared to the cases in which the high-k forcing is active. In addition, the effect is very localized (solid line in Fig. 18). The forcing in the high-k band completely changes this behavior. The intensity of the energy transfer is directly related to the value of ε w,2 . Additional extrema appear in the two-mode transfer function. The high-k forced cases display two pairs in which a negative minimum is combined with a positive maximum, while large scale forcing only yields one such combination. Correspondingly, the min-max pair at high k is associated with the large scale forcing in S 1 while the min-max pair at lower k originates from the additional forcing in the second band. We also investigated three-band forcing and observed further peaks in the energy transfer spectra. V. CONCLUDING REMARKS We performed direct numerical simulations of broadband forced turbulence to explore accumulated effects on the time-averaged energy transfer in isotropic homogeneous turbulence. Using broad-band forcing based on a recently proposed mathematical model for a fractal stirrer [27] we have shown how the application of such forcing modulates turbulence both qualitatively and quantitatively. The modulation is similar to that observed in experiments based on flows through porous media or canopies. Specifically the perturbation of a flow arising from the contact with complex physical boundaries enhances the dissipation and causes an abrupt energy drain from large to small scales. This aspect of simultaneous perturbation of a flow on a spectrum of length-scales is retained in the cases studied here. We found that broad-band forcing that perturbs a turbulent flow at smaller scales enhances non-local triad interactions and alters the detailed cancellation processes that occur in the traditional large-scale forced flows. This leads to non-local modifications in the energy transfer spectrum and the energy distribution among scales. We verified this by partitioning the nonlinear term in the Navier-Stokes equations in terms of different triad contributions to the total transfer function. The energy transport power is found to be enhanced in the spectral region in between the large-scale and the high-k forced bands. This characteristic may be influenced via the control parameters of the applied forcing, i.e., its strength and extent of agitated scales, and allows optimizing transport processes of turbulent flows. Future study will involve the examination of the consequences of forcing in the physical space context. We will investigate the geometrical statistics of broad-band forced turbulence looking at the interactions of strain and vorticity and their modulation by the applied forcing. This may help understanding which physical processes are responsible for the observed modulations and how to exploit this to enhance physical space mixing. The cases with large-scale forcing only are denoted by ⋆. In this table εw denotes the energy input-rate in the high-k band, except Run 1 and 18 in which it corresponds to the energy input-rate in S1. 
Moreover, m and p characterize the spectral support of the high-k band Km,p. APPENDIX The main parameters of the simulations are collected in Table I. The corresponding statistics of the velocity fields are summarized in Table II. The quantities compiled in Table II are the Kolmogorov dissipation wavenumber k d which is the inverse of the Kolmogorov length-scale η, the product k max η, the Taylor microscale λ = (5 E/ k k 2 E(k, t)) 1/2 , the Taylormicroscale Reynolds number R λ = λu ′ /ν, the integral length-scale L = 3π/(4 E) k k −1 E(k, t), the integral Reynolds number R L = Lu ′ /ν, the r.m.s velocity u ′ = (2 E/3) 1/2 , the energy dissipation rate ε = k 2νk 2 E(k, t), the eddy-turnover time τ = L/u ′ and the skewness S = 2/35 (λ/u ′ ) 3 k k 2 T (k, t). All these quantities in Table II are time-averaged · t as described in Sec. II D. We also checked that the alteration of the cascading process caused by the high-k forcing does not influence the isotropy of the flow field. A measure of isotropy was suggested in [38] given by: I 2 (t) = ψ 1 (t)/ψ 2 (t) where ψ 1 (t) = |e 1 (k)u(k, t)| 2 , ψ 2 (k, t) = |e 2 (k)u(k, t)| 2 are the kinetic energy along the components of two orthogonal solenoidal unit vectors e 1 (k) = k × z(k)/|k × z(k)|, e 2 (k) = k × e 1 (k)/|k × e 1 (k)| where z(k) is a randomly oriented unit vector. The operator · denotes averaging over these random unit vectors. For isotropic turbulence one can expect to find I = 1, i.e., ψ 1 = ψ 2 which was confirmed to close approximation in all simulations. Deviations from the expected value for I were found to be of the order of 1%.
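The appendix formulas translate directly into a small post-processing routine. The sketch below computes the tabulated statistics from shell-summed spectra E(k) and T(k); it assumes unit-width shells indexed by their wavenumber and the standard Kolmogorov scale η = (ν³/ε)^{1/4}, and the skewness prefactor is written exactly as it appears in the appendix text (which may have lost a square root in extraction). The transport power Π(k) of Sec. III B is included as the partial sum of T over shells above k.

```python
import numpy as np

def flow_statistics(k, E_k, T_k, nu):
    """Snapshot statistics from shell spectra, following the appendix definitions.

    k   : shell wavenumbers (k[0] = 0 is skipped where 1/k is needed)
    E_k : shell-summed energy spectrum E(k)
    T_k : shell-summed transfer spectrum T(k)
    """
    E = np.sum(E_k)                                     # total kinetic energy
    u_rms = np.sqrt(2.0 * E / 3.0)                      # r.m.s. velocity u'
    eps = np.sum(2.0 * nu * k**2 * E_k)                 # energy dissipation rate
    eta = (nu**3 / eps) ** 0.25                         # Kolmogorov length-scale
    k_d = 1.0 / eta                                     # dissipation wavenumber
    lam = np.sqrt(5.0 * E / np.sum(k**2 * E_k))         # Taylor microscale
    R_lambda = lam * u_rms / nu                         # Taylor-microscale Reynolds number
    L_int = 3.0 * np.pi / (4.0 * E) * np.sum(E_k[1:] / k[1:])  # integral length-scale
    R_L = L_int * u_rms / nu                            # integral Reynolds number
    tau = L_int / u_rms                                 # eddy-turnover time
    skew = (2.0 / 35.0) * (lam / u_rms) ** 3 * np.sum(k**2 * T_k)  # prefactor as in the text
    Pi_k = np.cumsum(T_k[::-1])[::-1] - T_k             # Pi(k) = sum of T over shells k' > k
    return {"E": E, "u_rms": u_rms, "eps": eps, "k_d": k_d, "lambda": lam,
            "R_lambda": R_lambda, "L": L_int, "R_L": R_L, "tau": tau,
            "skewness": skew, "Pi": Pi_k}
```

Whether obtained this way or directly from the velocity fields, these are the quantities reported in Tables I and II after time-averaging as described in Sec. II D.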
2018-04-03T04:13:02.659Z
2006-04-17T00:00:00.000
{ "year": 2006, "sha1": "93a2f13ebbd98550c35b777de2a0ec5da69151e7", "oa_license": null, "oa_url": "https://pure.tue.nl/ws/files/1691939/Metis212773.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "edf56bb2785f95720bc878ea4dc262c2c63201b8", "s2fieldsofstudy": [ "Environmental Science", "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
229280436
pes2o/s2orc
v3-fos-license
Did the Physical and Psychological States of Outpatients Receiving Rehabilitation at a Geriatric Health Services Facility Decline during the State of Emergency Caused by the COVID-19 Pandemic? Many Geriatric Health Services Facilities in Japan may have continued outpatient rehabilitation by taking measures against infection even during the state of emergency caused by Coronavirus disease 2019 (COVID-19). The present study aimed to determine differences in physical and psychological states in rehabilitation outpatients (age, 83.5 ± 8.4 years) at a Geriatric Health Services Facility between the pre- and post-nationwide state of emergency in Japan. Physical outcomes were assessed with gait speed (GS), timed up and go test (TUG), handgrip strength (HG), and maximum phonation time (MPT). We used the Japanese version of the five-level EuroQoL five-dimensional questionnaire (EQ-5D-5L) to assess patients’ quality of life (QoL) as the psychological state. The physical (GS, pre, 0.92, post, 0.92 m/s, p = 0.875; TUG, pre, 14.09, post, 14.14 s, p = 0.552; HG, pre, 19.42, post, 19.70 kgf, p = 0.807; MPT, pre, 13.6, post, 13.8 s, p = 0.861) and psychological (EQ-5D-5L, pre, 0.73, post, 0.81, p = 0.064) states of the participants did not change significantly between the pre- and post-nationwide state of emergency. This was likely due to the continuance of outpatient rehabilitation in accordance with the facility’s policy while taking adequate safety precautions against COVID-19 infection. Introduction The number of people infected with Coronavirus disease 2019 (COVID-19) has been increasing worldwide [1]. The World Health Organization declared COVID-19 a pandemic on 11 March 2020 [2]. The government in Japan declared a state of emergency on 7 April 2020, and lifted it on 25 May 2020 [3]. These policies played a role in inhibiting the expansion of the outbreak of infection [4], but health problems caused by restrictions on activity are a concern. It was previously reported that physical activity (PA) was significantly decreased due to the COVID-19 epidemic in community-dwelling elderly Japanese adults [5]. That study performed a subgroup analysis according to frailty categories, and PA was significantly decreased among the subjects in all categories [5]. However, quite a few Geriatric Health Services Facilities may have continued outpatient rehabilitation by taking measures against infection even during the state of emergency. Nevertheless, we hypothesized that the physical and psychological states would decrease in people receiving outpatient rehabilitation in Geriatric Health Services Facilities under the influence of the emergency restrictions. The purpose of the present study was to determine the differences in physical and psychological states of outpatients receiving rehabilitation at a Geriatric Health Services Facility between the pre- and post-nationwide state of emergency in Japan. Materials and Methods This was a longitudinal study of 20 consecutive elderly participants who received outpatient rehabilitation at the Geriatric Health Services Facility Elder Village in Kobe, Japan, from January 2020 to the end of May 2020. We excluded participants who did not give informed consent, who could not walk by themselves, or who could not attend outpatient rehabilitation during the state of emergency owing to the wishes and/or personal circumstances of the participant or the participant's family.
The intervention of outpatient rehabilitation at the Geriatric Health Services Facility was performed once or twice a week (20 min/session) during the state of emergency. Each session included a warm-up period, resistance training, aerobic exercise, and a cooldown period. In the first session, outpatients performed a series of upper and lower limb and body stretches before and after the exercise. Exercise intensity was adjusted to maintain the heart rate at a level corresponding to a rating of perceived exertion of 11-13 on the Borg scale, based on a previous method [6]. At that time, infection prevention measures were strictly implemented for the participants and staff, which included wearing masks, washing hands, and disinfecting equipment. Participant characteristics were obtained from the patients' medical records and included age, sex, body mass index (BMI), long-term care insurance level, living alone, diagnosis, and medications. Physical outcomes were assessed with gait speed [7,8], the timed up and go test [7], handgrip strength [8], and maximum phonation time [9], based on previous methods. We used the Japanese version of the five-level EuroQoL five-dimensional questionnaire (EQ-5D-5L) to assess the patients' quality of life (QoL) [10,11] as their psychological state. The EQ-5D-5L is a reliable and sensitive assessment in Japanese patients [10,11]. The EQ-5D-5L is a self-reported questionnaire in which patients report their own evaluations of their current health state [12]. The EQ-5D-5L consists of five items: mobility, self-care, usual activities, pain/discomfort, and anxiety/depression, and each item has five levels of description [12]. The QoL score is calculated using a value set determined beforehand to reflect the preferences of the general population. The EQ-5D-5L QoL score ranges from 0 (death) to 1 (full health). The physical and psychological states were assessed by physical and occupational therapists at two time points: before and after the nationwide state of emergency in Japan. We complied with the principles of the 1975 Declaration of Helsinki regarding investigations in human subjects, and we obtained informed consent from each participant. Characteristics of the participants and evaluated results are shown as numbers and as the mean ± standard deviation for the continuous variables. We analyzed longitudinal changes in the physical and psychological states from January 2020 (pre-nationwide state of emergency) to the end of May 2020 (post-nationwide state of emergency) using the Wilcoxon signed-rank test. The overall level of statistical significance was set at 0.05. Statistical analyses were performed with IBM SPSS Statistics 26 (IBM SPSS, Tokyo, Japan). Characteristics of the Participants Of the 20 patients, 5 were excluded because they could not attend outpatient rehabilitation during the state of emergency owing to the wishes and/or personal circumstances of the participant or the participant's family, and 2 patients could not walk by themselves. Thus, the final analysis comprised 13 patients (age, 83.5 ± 8.4 years; BMI, 24.6 ± 5.5 kg/m²; male, 6/13; long-term care insurance support level 2, 9/13, care level 1, 2/13, and care level 2, 2/13; and living alone, 2/13). Patient diagnoses were orthopedic disease in 7, internal disease in 4, and cerebrovascular disease in 2 patients. Main medications taken were angiotensin II receptor blocker in 3, β-blocker in 3, calcium antagonist in 7, diuretic in 3, statin in 5, and analgesics in 5 patients.
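As a side note on the statistical comparison described above, the following is a minimal sketch of a paired pre/post comparison with the Wilcoxon signed-rank test in Python (scipy) rather than SPSS; the paired values below are purely illustrative and are not the study data.

```python
from scipy.stats import wilcoxon

# Illustrative paired measurements (NOT the study data): one value per
# patient before and after the state of emergency, e.g. gait speed in m/s.
pre  = [0.95, 0.88, 1.02, 0.79, 0.91, 0.85, 0.97, 1.10, 0.76, 0.89, 0.93, 0.84, 1.01]
post = [0.94, 0.90, 1.00, 0.80, 0.92, 0.83, 0.98, 1.08, 0.78, 0.88, 0.95, 0.82, 1.03]

# Two-sided paired test; a p-value above 0.05 would be read as
# "no significant pre/post change", as reported in the study.
stat, p_value = wilcoxon(pre, post)
print(f"Wilcoxon statistic = {stat:.2f}, p = {p_value:.3f}")
```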
Results of Physical and Psychological Tests The physical and psychological test results of the participants did not change significantly between January 2020 (pre-nationwide state of emergency) and the end of May 2020 (post-nationwide state of emergency) (all p > 0.05) (Table 1). Discussion To the best of our knowledge, this is the first study to investigate differences in the physical and psychological states of outpatients undergoing rehabilitation at a Geriatric Health Services Facility between the pre- and post-nationwide state of emergency in Japan. We showed that neither physical nor psychological states had decreased significantly between the two time points (Table 1). In other words, both the physical and psychological states of the participants could be maintained during this period. A previous study reported that PA significantly decreased due to the COVID-19 epidemic in community-dwelling older adults in Japan [5]. Moreover, it was reported that stress, depression, anxiety, PA, and QoL worsened due to the COVID-19 epidemic in patients with Parkinson's disease [13]. Our results showed a different tendency from this previous research [5,13], suggesting that the policies enacted during the state of emergency in Japan might not have affected either state in rehabilitation outpatients. In terms of long-term care providers, in response to the state of emergency for COVID-19, the basic coping policy for Geriatric Health Services Facilities in Japan states that "businesses are required to continue their business when an emergency is declared" [14]. Even during the state of emergency, this facility continued operating in accordance with its policy. It is possible that this contributed to the maintenance of patient states during the study period. There are several limitations in the present study. First, this study investigated a single Geriatric Health Services Facility, and the sample size was very small; thus, the generalizability of the results may be limited. Second, there was no control group, so we do not know the outcomes of Geriatric Health Services Facility patients who did not participate in outpatient rehabilitation during the state of emergency. Finally, there were no data on differences in outcomes due to different outpatient rehabilitation frequencies. Thus, further studies are needed to clarify the long-term effects of the COVID-19 epidemic on the relationship between physical and psychological states in these patients. Conclusions In conclusion, our results showed that the policies related to the state of emergency in Japan did not appear to have affected the physical and psychological states of elderly Japanese patients participating in outpatient rehabilitation. One reason may be that outpatient rehabilitation continued in accordance with the facility's policy even during the state of emergency while taking adequate safety precautions against COVID-19 infection. Author Contributions: K.P.I. and M.O. conceptualized and designed the study, collected data, performed initial analyses, and drafted the initial manuscript. K.P.I. and M.O. conceptualized and performed initial analyses and reviewed and revised the manuscript. K.P.I., M.O., and K.O. provided statistical support and reviewed and revised the manuscript. All authors approved the final manuscript as submitted and agree to be accountable for all aspects of this work. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
2020-12-15T14:02:08.322Z
2020-12-01T00:00:00.000
{ "year": 2020, "sha1": "4755d691f545b07d3658ae76b80a6742d8a81aa9", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-9721/8/4/45/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8ced224a1ec78b4de2b0e3128125b910ad1070ef", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
212488285
pes2o/s2orc
v3-fos-license
SMS ENCRYPTION OF ANDROID MOBILE BY USING RSA AND CHAOTIC ALGORITHMS - At present, SMS messages are a very common way of communication. Although many applications and instant messaging services are available, SMS is still one of the most widespread means of communication because it does not require an Internet connection and sending SMS messages is inexpensive, fast and simple. When secret information is exchanged using SMS, it is hard to keep the information safe and to ensure that the message is sent only by authorized senders. This research describes a solution that transmits SMS securely and ensures secrecy and authentication. We have used the public-key cryptosystem RSA along with a chaotic algorithm for encrypting and decrypting data. RSA provides strong encryption, and secrecy is further increased when a chaotic algorithm is used, as chaotic algorithms are a well-suited method for message encryption; the combination of chaos theory and cryptography forms an important field of information security. This method was implemented in a mobile environment with the Android operating system. The system was implemented using the JAVA language and the proposed method was tested on various types of devices (such as the S3, Galaxy S7, Galaxy J7, Huawei Nova 2 Plus, HTC). ... secure it and encrypt it. Encryption has always been an essential task. The main purpose of all encryption work is to maintain data security. When messages are encrypted, only the sender and the recipient can understand them. An encryption algorithm is used to encrypt and decrypt the message [5]. B. Overview of RSA and Chaotic Algorithms In this part we explain the RSA and chaotic algorithms. RSA Encryption and Decryption: RSA is an algorithm that has been used on modern computers to encrypt and decrypt messages. RSA is a public-key encryption algorithm that uses a pair of keys, one public and one private. RSA stands for Ron Rivest, Adi Shamir, and Leonard Adleman, and the algorithm was first described publicly in 1978. The RSA algorithm is an asymmetric encryption system that uses two keys to encrypt and decrypt messages in order to ensure information security. The keys are generated through a complex mathematical computation. The two keys generated are the public and private keys. The public key is distributed to the sender of a message to encrypt the message, while the receiver of a message keeps the private key secret to decrypt the message encrypted with the public key. The steps below are the processes for generating public and private keys using RSA: 1. Choose two large prime numbers k and l, with k ≠ l; 2. Calculate q = k * l; 3. Calculate φ(q) = (k-1)(l-1); 4. Choose e such that gcd(e, φ(q)) = 1 and 1 < e < φ(q); 5. Calculate d such that d * e mod φ(q) = 1, i.e., d is the multiplicative inverse of e modulo φ(q); 6. The public key is Ku = {e, q}; 7. The private key is Kr = {d, q}. In 1978, a paper was published by R. Rivest, A. Shamir, and L. Adleman. This paper describes the public-key cryptography system, including key generation and the public key, whose security depends on the supposed difficulty of factoring integers into their prime factors. This encryption scheme, which became known by the abbreviation of the authors' names, has stood the test of time to the present day, where it is used in banking encryption applications, e-mail security and e-commerce on the global network [6].
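To make the key-generation steps above concrete, here is a minimal, hedged sketch in Python (the paper's own implementation is in Java and is not shown in the text); it follows the paper's notation (k, l, q, e, d), but the toy primes, the helper name and the fixed public exponent are illustrative assumptions only, and real deployments use primes of hundreds of digits plus padding schemes.

```python
from math import gcd

def rsa_keygen(k, l, e=65537):
    """Toy RSA key generation following steps 1-7 above.
    k and l are primes (tiny here for illustration only)."""
    assert k != l
    q = k * l                      # step 2: modulus
    phi = (k - 1) * (l - 1)        # step 3: totient of q
    assert gcd(e, phi) == 1        # step 4: e coprime to phi
    d = pow(e, -1, phi)            # step 5: modular inverse of e (Python 3.8+)
    return (e, q), (d, q)          # steps 6-7: public and private keys

public, private = rsa_keygen(61, 53)          # toy primes, NOT secure
m = 42                                        # message encoded as an integer < q
c = pow(m, public[0], public[1])              # encryption: c = m^e mod q
assert pow(c, private[0], private[1]) == m    # decryption recovers m
print("ciphertext:", c)
```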
Many applications use the RSA algorithm, but in practice it is most often used for [7]: encrypting small portions of data, especially for key transport; and digital signatures, in order to obtain digital certificates on the Internet. Chaotic Algorithm: Chaotic cryptology includes two integral, opposite parts: chaotic cryptography and chaotic cryptanalysis. Chaotic cryptography is an application of mathematical chaos theory to encryption, that is, to the study of techniques used to convey information securely in the presence of a third party or adversary. Entities looking for new methods to encrypt messages have long sought to use chaos or randomization in encryption. However, because of the lack of provable security features and low performance, chaotic encryption has encountered setbacks. The encryption and decryption algorithm is based on chaos [8,9]. The chaotic method is efficient, deals well with the problem of data-transfer speed, and is a very safe method. In recent times there has been much research on chaotic encryption systems, which have a number of important characteristics, such as dependence on initial conditions and system parameters, pseudo-random behaviour, aperiodicity, topological transitivity, etc. Most of these characteristics satisfy cryptographic requirements such as diffusion and confusion. Therefore, chaotic encryption systems have many practical applications. One-dimensional chaotic systems with the advantages of high efficiency and simplicity, such as the logistic map, are now widely used. The important questions in cryptography are how to design cryptographic algorithms with a high security level and good speed, while cryptanalysts attempt to analyze the cipher to find security holes in the algorithms for the purpose of attacking them; vulnerabilities such as the use of a short key should therefore be avoided. The encryption system is called a cipher (or a cryptosystem). In Figure 1 (operation of encryption and decryption), the text before encryption is called the plaintext, which is the message entered by the sender, and the encrypted message is called the ciphertext; these are referred to here as P and C, respectively. The encryption operation of the cipher can be described as C = E_Ke(P), where Ke is the encryption key and E(.) is the encryption function. Similarly, the decryption procedure is P = D_Kd(C), where Kd is the decryption key and D(.) is the decryption function. When Ke = Kd, the cipher is called a symmetric cipher; for private-key ciphers, the encryption-decryption key must be sent from the sender to the receiver via a separate confidential channel. When Ke ≠ Kd, the cipher is called an asymmetric cipher; for public-key ciphers there is an encryption key and a decryption key (Ke, Kd), where the public key is published and the private key is kept confidential, and there is no need for an extra secret channel to transfer the key [10]. C. Platform of programming for mobile 1. Android System: The Android operating system is one of the most famous terms today in the world of mobile. People have turned to Android phones and applications because of the secure environment created for users. Android is software for mobile devices based on the Linux kernel; it was founded in 2003 by Andy Rubin and a number of other developers in Palo Alto, California, and comprises an operating system, middleware and key applications [11].
Android is a free operating system and one of the most rapidly growing mobile platforms. It also provides a rich and fast platform for third-party developers to build applications with the available Application Programming Interfaces (APIs). The Android operating system provides an integrated platform for developers and a problem-solving mechanism for creating world-class software and services [11]. 2. Architecture of the Android System: The Android operating system can be divided into five main layers: applications, application framework, libraries, Android runtime and the Linux kernel. There are a number of components that make up an Android application, as shown in the block diagram in Figure 2 (Android operating system architecture). The Applications part is the highest layer in the Android system architecture. This part represents the basic applications found on devices, such as telephone calls, the email client, the SMS program, calendars, browsers, and others, which are written in the Java language and other languages [12]. The Application Framework is the next layer in the system structure. It is the framework the developer follows during application development; the developer has full access to the same framework used by the applications in the layer above [12,13]. The Libraries are a layer containing software libraries used for the development of Android applications. A variety of libraries, from the surface manager to libc, are written in different languages, and these libraries are available to the developer through the application framework [14]. The Android Runtime is the fourth layer in the Android system architecture. This layer contains the so-called Dalvik Virtual Machine, which is a type of JVM that has been improved and modified to suit the Android system [14]. The Linux kernel is the layer at the bottom of the structure and is responsible for handling the hardware. Android relies on Linux for basic system services such as memory management, power management and a number of other services [15]. D. General Structure of the Suggested Approach: The proposed approach attempts to improve the security of information by encrypting SMS. The general design of the proposed approach is illustrated by the general algorithm shown in figure (5). The general design of the encryption process of the suggested method is illustrated by the flowchart offered in figure (3), and the general design of the decryption process by the flowchart offered in figure (4). Input: SMS (plain text). Output: cipher text. Step 1: Enter the message. Step 2: Encrypt the SMS using the RSA algorithm. Step 3: The resulting cipher text is also encrypted using the chaotic algorithm. Step 4: Send the SMS to the receiver. Step 5: The SMS receiver applies the proposed approach to decrypt the SMS. Step 6: The decryption process is performed using the chaotic and RSA algorithms to decrypt the SMS. Step 7: The result of the decryption is the plain text. Step 8: End. II. THE EXECUTION OF THE PROPOSED APPROACH Each step of the main algorithm (5) is illustrated in the following example. Here we present the main interface of the suggested method, which has two parts (encryption and decryption), shown in figure (6). Step 3: The SMS "well come" is encrypted using the RSA algorithm; the result is cipher text, which is then encrypted using the chaotic algorithm, as illustrated in Figure (9) and Figure (10).
Figure 9: encryption of the SMS using the RSA algorithm. Figure 10: the cipher text is encrypted using the chaotic algorithm. Step 4: After the SMS reaches the receiver, the receiver performs the decryption operation by copying the SMS (the output of the chaotic algorithm) from the phone's messages and returning to the decryption screen, as illustrated in Figure (11). Step 5: Paste the cipher text produced by the chaotic algorithm, click on chaotic decryption, and then choose the RSA algorithm to extract the plain text, as shown in Figure (12) and Figure (13). Figure 13: decryption of the cipher text using the RSA algorithm. Step 6: The result of the decryption is the plaintext. Step 7: End. III. CONCLUSIONS 1. In this paper, we discussed the security of SMS relayed on mobile phones and how to encrypt and decrypt them, programming platforms of all kinds for mobile phones, and the RSA and Chaotic
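The two-stage pipeline described in Section D above (RSA first, then a chaotic layer) is not given as code in the paper; the following is a minimal sketch of one plausible reading of it, in Python rather than the paper's Java, in which a logistic-map keystream (the one-dimensional chaotic map mentioned earlier) is XORed with the serialized RSA ciphertext. It reuses the toy rsa_keygen sketch shown earlier; the parameter values, serialization format and function names are illustrative assumptions, not the authors' implementation.

```python
def logistic_keystream(x0, r, n):
    """Generate n pseudo-random bytes from the logistic map x -> r*x*(1-x).
    (x0, r) act as the secret chaotic key; r near 4 keeps the map chaotic."""
    stream, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256) % 256)   # quantize each iterate to a byte
    return bytes(stream)

def chaotic_xor(data: bytes, x0=0.61803, r=3.9999) -> bytes:
    """XOR layer: applying it twice with the same (x0, r) restores the data."""
    ks = logistic_keystream(x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

# Stage 1: RSA on the message bytes (toy keys from the earlier sketch).
public, private = rsa_keygen(61, 53)
sms = "well come"
rsa_cipher = [pow(ch, public[0], public[1]) for ch in sms.encode()]

# Stage 2: chaotic layer over the RSA ciphertext, serialized as bytes.
payload = b",".join(str(c).encode() for c in rsa_cipher)
sent = chaotic_xor(payload)            # what would be sent as the SMS body

# Receiver: undo the chaotic layer, then RSA-decrypt.
restored = chaotic_xor(sent)
numbers = [int(tok) for tok in restored.split(b",")]
plain = bytes(pow(c, private[0], private[1]) for c in numbers).decode()
assert plain == sms
print(plain)
```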
2019-08-18T13:40:37.770Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "54528f25646afff70d14616ad82b5d90e727f774", "oa_license": null, "oa_url": "https://doi.org/10.21884/ijmter.2017.4381.vcvnu", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "4998f3b042178f6d42264c5ed94c66f004a665f6", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
253033657
pes2o/s2orc
v3-fos-license
A Comparison of Opinion Mining Algorithms by Using Product Review Data : After the release of Web 2.0 in 2004, users spawned content on the internet, most visibly on abundant review sites, online forums, online blogs, and many other sites. All this user-generated content amounts to a considerable quantity of unorganized text written in different languages that encompasses user emotions about one or more entities. Predictive analysis mainly uses existing data to forecast future outcomes. Currently, a massive amount of research is focused on the area of opinion mining, also called sentiment analysis, opinion extraction, review analysis, subjective analysis, emotion analysis, and mood extraction. It can be an excellent choice for perceiving the meaning and patterns in existing data. Most of the time, various algorithms are available to work with opinions. There are contradictory opinions among researchers regarding the effectiveness of these algorithms. We have compared different opinion mining algorithms and presented the findings in this paper. Introduction Opinion mining or sentiment analysis is a text analysis method that utilizes computational linguistics and natural language processing to automatically identify and extract emotions or opinions from texts, i.e., positive, negative, neutral, etc. [1]. In this multidisciplinary and multifaceted Artificial Intelligence problem, people's perspectives and reactions toward an entity are studied in computational form [2,3]. The term sentiment analysis refers to the method of obtaining useful information from an opinion. Sentiment analysis is a promising area defined as the crossroads of information retrieval and computational linguistic techniques for processing documented opinions [4]. Sentiment analysis has three levels of granularity and can be applied at any level [5]. Every day a vast number of texts are generated online by people, containing their emotions about various kinds of entities. These emotions can be positive or negative, depending on their written expression. Sometimes people also use emoji to express emotions. This user-generated text can be beneficial for companies and organizations to gain insights about their products or services through such reviews, recommendations, and texts. So, many researchers are working in this field and finding new techniques and ways to extract emotions from enormous amounts of text data. For example, [6] worked on movie review data, [7] analyzed new algorithms for Twitter review data, and [8] used medical data to work on sentiment analysis. Due to the COVID-19 pandemic in 2019, most countries decided to shut down to protect people from the deadly Coronavirus. Because this virus is contagious, people started to maintain social distance to stay safe, and they moved toward online shopping for their daily needs to avoid going out. Online purchasing of products and daily goods increased. That is the reason an online product review dataset was picked to find out the opinions of people toward online products through different reviews. In this way it can be learnt which algorithm or method works best for an online product review dataset, and the online purchasing experiences of customers will be known as well. 1. What is the need for a comparison of distinct algorithms? 2. What will be the outcome if these algorithms are applied to a dataset? In discussions of sentiment analysis or opinion mining, a general question always arises: which algorithm works best in which domain or dataset?
There are different opinion mining algorithms available; some of them are supervised, some are unsupervised and some are boosting algorithms, which are discussed in the latter portion of the paper. The workings of all the algorithms are similar, but some work faster, some give better accuracy and some algorithms help to generate rules for sentiment analysis. So, a comparison is required to find the better-working algorithms with regard to dataset size in sentiment analysis. As it is not possible to implement all the algorithms on many datasets at once, it was decided to pick a product review dataset. In this research, three datasets are adopted from Amazon, as it is one of the biggest websites doing e-commerce business. After that, several research papers relevant to this research on sentiment analysis and opinion mining are reviewed to build a clear idea, and the most commonly used sentiment analysis algorithms and techniques are picked for comparison. Then the selected and available opinion analysis algorithms and techniques are applied to the datasets to do a comparative review. The Google Colaboratory cloud system is used to implement the algorithms on the datasets. At the end, a comparison is made based on the implementation results. After this research it will be known which algorithms work well on large datasets and which algorithms work better on small datasets. The paper is organized as follows: the introduction is provided in Section 1. Then, Section 2 describes the literature review. Next, details of the methodology are given in Section 3. Later, in Section 4, the algorithms used for opinion mining are presented. Section 5 highlights the comparison and discussion. Lastly, Section 6 finishes with a conclusion. Table 1. Levels of sentiment analysis. Document Level: the concern is to analyze the whole document to discover whether the opinion is affirmative or non-affirmative. As the whole document works like an individual entity, inappropriate results can sometimes be generated at this level when conflicting sentiments occur. Sentence Level: this level is more fine-grained than the previous one, as each sentence acts as an entity here. Each sentence can be categorized by emotion or polarity as positive, negative, or neutral. The summarized result of the sentences provides an overall result for the document. Feature Level or Aspect Level: analyzing product features to determine document sentiment is known as aspect-based sentiment analysis. Here affirmative, non-affirmative, or neutral opinions can be easily identified from the extracted features. Among all the models, it is a good and fine-grained analysis model. Literature Review The purpose of this paper is to make a comparison of some well-known opinion mining algorithms. Usually, opinion mining algorithms are used for text analysis to bring out emotions written in text format. Apart from algorithms, several other techniques, tools, and ecosystems can be used to do sentiment analysis. In [9] the author talked about sentiment analysis and the Hadoop ecosystem. Hadoop is an open-source framework in which a large dataset can be stored and processed. This paper described the hierarchy of big data and sentiment analysis classification, and described Hadoop systems in detail.
[10] used the Hadoop framework to store and analyze large amounts of data collected from Amazon reviews, did predictive analysis using those data, and made a matchbox recommendation system for the customers of Amazon which provides recommendations based on previous ratings. Here, they used machine learning to do predictive analysis and did various calculations based on ratings: product-based, category-based, review-based, etc. Leximancer is used in [11] to analyze the reviews of 4 top-grossing games, for which ten text files were made for analysis. Using Leximancer they created a theme map for each file to examine and analyze them. [12] took multiple datasets in different languages to evaluate two supervised learning approaches, Decision Tree and Naive Bayes, to find the best results. The evaluation was based on accuracy and runtime parameters. They used RapidMiner tools to perform these experiments. The authors of papers [13,14] worked on different opinion mining techniques and classifiers to analyze student performance. Paper [13] did a literature survey on performance prediction of students and found that researchers mostly use Naïve Bayes, Decision Tree, and Rule-Based algorithms for the prediction of students' academic performance. Paper [14] took university student data to implement five classifiers: Neural Network, J48, ID3, Bayesian Network and Naïve Bayes. They made a comparison between these five classifiers based on error measures and found that the Bayesian Network classifier's accuracy was the highest among the classifiers. An experiment was done on a spam dataset to classify spam emails and found that the Random Tree classifier works best, with an accuracy of 99.72% for spam mail classification [15]. Classification and Regression Tree (CART), Association Rule Mining, Regression, Clustering and Classification are the most extensively used data mining techniques in the health care domain for decision making and identification [16]. A study and comparison of several methods of assessing an article's reputation using sentiment analysis is performed in [17]. They classified sentiment analysis based on techniques, approaches, and rating methods. [4,18] discussed opinion mining areas, issues, technologies, and challenges. They describe opinion mining approaches such as supervised machine learning, unsupervised learning, and CBR (Case-Based Reasoning). Key issues and challenges of opinion mining have also been highlighted. The first challenge for researchers is data collection. Accuracy of data is required to find useful insights. Reviews are written in different languages such as English, Arabic, Chinese, etc. Not all data extraction techniques work well in all languages. Moreover, as the data is user-generated, it contains spam data, unfinished data, noisy data, and unstructured data. So, filtering this data is a challenge for researchers. After filtering, the classification of sentiment is another challenge. Some words can have different meanings based on the domain [i.e., the word "High" can be positive for battery life but negative for pricing]. Object identification, feature extraction, identifying and grouping synonyms, integration, identifying comparative words, people's writing styles, and misleading opinions are some of the issues and challenges for researchers in opinion mining. These two papers discussed these issues and tried to give solutions to these problems.
A brief review of comparative opinion mining is given in paper [1]. They studied previous research on general opinion mining along with comparative opinion mining to show the differences between them. They presented opinion mining from two distinct perspectives: a practical viewpoint [i.e., a machine learning approach, a rule mining approach, and a natural language processing approach], and an element-based viewpoint of opinion [i.e., comparative opinion recognition, entity recognition, relationship recognition, feature recognition]. They surveyed all the past research on this topic, pointed out the problems that researchers have faced, and talked about future possibilities in this comparative opinion mining domain. [19] used hotel review data for opinion analysis and made a prototype to visualize the output of sentiment analysis. They used knowledge-based approaches, namely SentiWordNet 3.0 [based on WordNet 3.0], for sentiment classification and feature extraction to find detailed aspects. They also used temporal opinion mining to find sudden changes in opinions, which is helpful for finding valuable information about the entities. In the field of temporal opinion mining, the most common methods for predicting and estimating changes in opinion are opinion lexicons and statistical modeling; these changes involve time and recent events [19]. They employed burst detection as an identifier to find the changes over time. The main problem they describe, and the challenge of temporal separation, is to find out when the intensity of a particular property has increased. An innovative approach named AMOD is proposed in [20]. The AMOD approach can automatically extract opinions from the internet for specific domains. The authors of this paper claimed that AMOD approaches can extract opinions better than lexicon approaches and can give higher accuracy than a lexicon. They also discussed the drawbacks of the AMOD approach. In AMOD, at first a learning dataset is extracted, and after that new positive and negative adjectives are extracted from the learning dataset. This process is repeated until no new adjective is learned. They also showed a comparison with the traditional classification method CopyVote. They used the F-score measure to compare the results, and the F-score of AMOD was higher than that of CopyVote. If the right approaches can be selected depending on the domain and the issues of the selected approaches can be overcome, then valuable findings can come out of opinion mining, which will be beneficial. The Algorithms for Opinion Mining Nowadays, with the proliferation of contemporary digital-based economies, substantial amounts of information are available in the form of textual data, and classification or grouping into predefined classes often makes them easier to use. Several online activities like blogging, micro-blogging, e-commerce, social media communications, click streams, etc. create an exceedingly large amount of data, which is denoted as Big Data [21]. These data can be of any type, structured or unstructured, and they need to be extracted, transformed, loaded, and analyzed. To do such things, Opinion Mining is needed. There are two fundamental approaches to Opinion Mining [5]. The first one is the quite common one, which is machine learning (ML). It is based on learning techniques of three types: supervised learning, unsupervised learning, and semi-supervised learning [22]. Supervised learning uses labeled datasets, while unsupervised learning uses unlabeled data.
On the other hand, if the dataset is a combination of labeled and unlabeled examples, the semi-supervised approach is used [23]. The second approach to Opinion Mining is the use of an existing lexicon of words, expressions, or phrases. Although the approach applied depends on the dataset, ML is the most used approach. There are several types of machine learning algorithms, such as Decision Tree, Maximum Entropy, Passive Aggressive, Adaptive Boosting, Logistic Regression, Ridge Regression, Support Vector Machine, Naïve Bayes, the Viterbi algorithm, Dynamic Artificial Neural Network (DAN2), K-Nearest Neighbor, etc. Naive Bayes is great for features that rely heavily on each other, but if the conditional independence assumptions are not met, maximum entropy will be more appropriate [24]. [24] proves that Support Vector Machine works best. [5] worked on four different datasets where Passive Aggressive with unigrams performed best. Paper [3] made a survey about sentiment analysis algorithms and showed better accuracy with supervised learning algorithms. In paper [25], tweets on Twitter were classified using three supervised learning algorithms: Decision Tree, Naive Bayes, and Support Vector Machine. A Neural Network was used in [26,27], which is an unsupervised learning algorithm also known as a learn-supervised algorithm. Paper [28] used the boosting algorithm XGBoost and got a satisfying accuracy rate. Clustering is the most effective unsupervised technique. There are various algorithms for clustering, such as K-Means Clustering, Hierarchical Clustering, DBSCAN (Density Based Spatial Clustering of Applications with Noise), Optics, Sting, and SOM (self-organizing map). These clustering algorithms could not be used on the selected datasets, as clustered text data does not provide proper accuracy information. So, a total of eight algorithms were implemented, and their classification reports and confusion matrices were also evaluated. These eight algorithms are Support Vector Machine, Decision Tree, Naïve Bayes, K-Nearest Neighbor, XGBoost, Random Forest, Neural Network and AdaBoost, each discussed below. The paper [29] conducted a survey on sentiment analysis and summarizing user feedback on the Web. Various algorithms are used there, and this approach is suitable for opinion analysis in specific domains such as movies, products, hotels, etc. In this paper, eight algorithms are selected because of their versatile usage. Several advantages were found in particular papers based on opinions in different reviews. For both classification and regression challenges, the supervised machine learning algorithm Support Vector Machine (SVM) can be used [30]. In paper [31], SVM and NLP algorithms are used for Opinion Mining on newspaper headlines because of their satisfactory performance. Its performance is very good in experimental results and is independent of the dimensions of the dataset. Above all, SVM is good for biological reading and interpretation [32]. It offers more advantages for textual content classification when high-dimensional spaces are used. It is used extensively in diverse real-time applications with wide scope for obtaining appropriate outcomes. It is used in text categorization, image classification, medicine, bioinformatics, signature/handwriting recognition, pattern recognition, and email spam categorization. Because of these benefits, we selected the SVM algorithm. Decision trees are a decision support tool that uses a tree-like model of decisions and their consequences, such as chance outcomes, resource costs, and benefits.
In paper [33], fake product reviews were monitored using an opinion mining algorithm. The Decision Tree algorithm was used to identify the outcomes, as it performed well. It is one way to display an algorithm that contains only conditional control statements. It is easy to interpret and explain. That is why we selected the Decision Tree algorithm. Naïve Bayes is a very simple algorithm, which just needs to do a set of counts. [34] used three different classification methods to classify the emotion of Roman-Urdu opinions. Naive Bayes excelled with higher accuracy, higher recall, and a higher F-measure value. We selected it because its model is quite easy to interpret and computationally efficient. Naïve Bayes is the most suitable for textual classification [32]. The study in paper [35] not only concentrates on the sentiment of reviews but also predicts movie ratings using opinion mining algorithms. Different algorithms are used and their accuracies compared. Out of all these algorithms, Naive Bayes gave superior performance. It is used in real-time prediction, multi-class prediction, content grouping/spam filtering/sentiment analysis, and recommendation systems [36]. Supervised learning supports a number of machine learning algorithms; K-Nearest Neighbor is one of them. It assumes similarities between new and available cases and then places the new cases in the category that most closely resembles the available categories. Paper [37] shows that the K-Nearest Neighbor algorithm gave high accuracy in most cases on medical datasets. It is a surprisingly good classifier, but when applied to text (nominal) data, all performance parameters change based on the size of the dataset, which is a useful property of an opinion mining algorithm on a textual dataset. So, we use it in this paper. Being a non-parametric method, it is widely practiced for classification and regression. This algorithm is used in distinct fields such as finance, medicine, and agriculture. XGBoost, a decision tree-based ensemble machine learning algorithm, is used for supervised learning problems. It uses a gradient boosting framework. In article [38], the authors worked with product recommendations while implementing the XGBoost classifier in content-based filtering. Compared with other algorithms, XGBoost provided higher output in the recommendation system. XGBoost is an optimized boosting library that is extremely efficient, flexible, and portable. It provides parallel tree boosting to solve many data science problems quickly and accurately. XGBoost is an efficient and easy-to-use algorithm that offers high performance and accuracy compared to other algorithms. So, we also chose this algorithm for its high performance. A classifier that takes a specific dataset, builds a set of decision trees and averages them to improve the predictive accuracy on that dataset is called a Random Forest Classifier. These algorithms can be used not only for classification but also for regression tasks. Cross validation provides higher accuracy. In paper [39], different algorithms are used for performing comparative Opinion Mining. Random Forest provides higher accuracy on different tasks and performed very well. The Random Forest algorithm is used on movie reviews in [40]; here also it performed well.
In another article [41], in which the pertinence of different algorithms for performing comparative opinion mining is differentiated, random forests are also good for classifying comparative opinions into nine polar classes. Also, paper [42] used these algorithms on a Twitter data stream, and the Random Forest algorithm likewise displayed acceptable performance. On a substantial proportion of data, the random forest classifier will maintain accuracy and handle missing values. That is the reason we selected this algorithm. Neural networks are designed to work just like the human brain does. They comprise a set of algorithms which aim to detect relationships between data. Paper [3] conducted a survey on sentiment analysis algorithms; in the research field, sentiment analysis has become an immensely popular topic. Supervised techniques provide better accuracy, and the Neural Network is also a supervised algorithm. Paper [43], a neural network-based model for discovering overall aspect weights in sentiment analysis, demonstrates the excellent performance of neural network classification algorithms. Neural networks are great for discovering and estimating existing patterns in data. When working with very large neural networks, the gradient descent algorithm is recommended. Their performance in predicting future pattern changes is not very impressive. They have also been used in our research for their benefits. The AdaBoost classifier starts by fitting a classifier to the original dataset and then fits additional copies of the classifier to the same dataset, adjusting the weights of the misclassified instances so that, as a meta-estimator, it allows subsequent classifiers to focus on more difficult cases. The AdaBoost classifier algorithm was used in a novel opinion mining system for movie reviews in Turkish and performed very well, with good accuracy, in paper [44]. In another paper [45], the emotional classification of customer write-ups about cars in Roman Urdu used the AdaBoost classifier. Here also it provided a good outcome and its accuracy was good. Another use of the AdaBoost algorithm is on customer reviews, classified by their score, using machine learning techniques [46]. Compared to other algorithms, AdaBoost provides higher accuracy. Using AdaBoost with any machine learning algorithm, the performance can be improved. It is also great for weak learners; these are models that achieve greater accuracy than chance when faced with classification problems. The most appropriate, and therefore most frequently used, base algorithm in AdaBoost is a decision tree with one level. After seeing its usages and benefits we picked it for our research paper. Sample Dataset For the basic study of this research, relevant papers were reviewed to gain a clear understanding of sentiment analysis and the techniques used by other researchers to analyze emotions. It was decided to work with eight selected algorithms. Some are supervised learning algorithms, others are unsupervised learning algorithms, and some are boosting algorithms that are widely used in other studies. After selecting the algorithms for the research, Amazon product review datasets were selected for the implementation process. The Google Colaboratory cloud system, also known as Google Colab, was used as the working environment. Google Colab is portable and easy to use, as it is easier to set up than other platforms. It has several useful features like sharing, versioning, code snippets, etc.
It takes little time in its preprocessing and classification stages and also provides high accuracy in a short time, unlike other platforms that require expensive hardware [47]. Any person with a Gmail account can access this resource, as it is an open platform from Google [48]. Google Colab works when all datasets are on Google Drive [49]. Code written in the Python language can be executed using a browser on Google Colab. So, the Python programming language was used for this research. In order to work with any dataset, preprocessing of the dataset is needed to clean the data, because the dataset contains missing values, null rows, URLs, emails, special characters, accented characters, HTML tags and misspelled words. If the dataset is used without preprocessing, then the accuracy of the algorithms will not be satisfying. So, in this research the Kgptalkie package was used for data preprocessing and cleaning the data before implementation of the algorithms. The Pandas, NumPy, Matplotlib, XGBoost, Seaborn and Scikit-Learn Python libraries were used for the implementation and for obtaining the classification reports and confusion matrices of the algorithms. A total of eight algorithms were implemented, and their classification reports and confusion matrices were evaluated. The implementation was done on all three different Amazon datasets to get better outcomes. A lot of datasets are available online; researchers choose them based on their preferred topic. Considering the increase in online shopping due to the COVID-19 pandemic, product review datasets were selected for implementation. Amazon is one of the biggest e-commerce websites, and an uncountable number of reviews are there. For this reason, the datasets were collected from there, containing three individual sets of data with different numbers of reviews. The first two datasets have a substantial amount of data, 34,661 and 28,333 reviews respectively, whereas the last dataset has only 5001. However, all the datasets contain 25 individual columns in which the information of the products is given along with review and rating information, such as product manufacturer details, product category, product id, product image URL (Uniform Resource Locator), uploading and updating date of the product, brand name, source URL, etc. In terms of the review, there are the review id, review date and time, recommendation of the review, review rating, review source URL, reviewer username, title of the review and the text written as the review. Despite the immense detail available for each product, only the review texts and review ratings were used for the analysis. Review texts were given by the customers of Amazon and ratings are on a scale of one to five. As Google Colab is based on Google Drive, we uploaded our datasets to Google Drive, from where we implemented the algorithms. Analysis and Discussion In the above section, several opinion mining algorithms have been introduced. Distinct algorithms use definite methods and techniques for implementation, analysis, training, and testing. A few of them work faster, some give better accuracy, and many help to generate rules, etc. For this reason, there are various features on which to compare the algorithms. With the help of comparison and contrast, these features can be understood easily. In the following table, certain features are given. For Dataset-1, the highest accuracy obtained was 65 percent; on the other hand, the Neural Network gained the least accuracy, 63 percent.
The other two algorithms, which were also included for the analysis, did not work well on Dataset-1. They took plenty of time to process and train on the data. For these reasons, it seems inefficient to use the XGBoost and AdaBoost algorithms for big datasets. The accuracy of the selected opinion mining algorithms on Amazon Product Review Dataset-2 is illustrated in Fig. 3. In this case, the Random Forest Classifier is at the peak with an 86 percent accuracy rate. Conversely, the absolute minimum is 71 percent, which represents the accuracy rate of two algorithms, the Decision Tree Classifier and Naïve Bayes. Nonetheless, the Neural Network and Support Vector Machine (SVM) obtained the second highest accuracy rate, while K-Nearest Neighbor came in with the second lowest accuracy rate, around 72 percent. The XGBoost and AdaBoost algorithms behaved here the same as before, so these two are discarded here as well. Support Vector Machine (SVM) dropped slightly compared to Random Forest. The next lowest percentage is 72, for the Neural Network. Thereafter, the accuracy rate fell significantly to 69 percent for the Decision Tree Classifier. Lastly, K-Nearest Neighbor and Naïve Bayes achieved the lowest accuracy, 67 percent. The line graph above displays the disparity in accuracy between several machine learning algorithms on the three Amazon datasets. Here, the highest accuracy was obtained by the Naive Bayes Classifier for Dataset-1. Subsequently, for Dataset-2 the Random Forest classifier gave the best result, whereas XGBoost provided an excellent result for Dataset-3. Among the three datasets, dataset one and dataset two contained the largest amounts of review data. For this reason, the two boosting algorithms, XGBoost and AdaBoost, did not perform well on such large datasets; their training time is high, which makes them very inefficient to use. The Naïve Bayes classifier, Random Forest classifier and XGBoost algorithms gave the highest accuracy on datasets 1, 2 and 3, respectively. The other algorithms also worked well on dataset 3, as it contained a small amount of data. Conclusion A comparison of opinion mining algorithms on specific datasets has been done in this paper. In this research, several sentiment analysis algorithms have been applied to three Amazon product review datasets. The datasets were both large and small in size, and six supervised algorithms and two boosting algorithms, selected after analyzing several relevant papers, were implemented on them. After implementing the algorithms, it was found that on large datasets the Naïve Bayes and Random Forest algorithms work better, whereas for small datasets the boosting algorithms work best, although almost all algorithms also work well in that case. This comparison of algorithms will help other researchers in decision making, so they can easily choose which algorithms to apply and which to avoid while working on datasets of different sizes. Opinion Mining/Sentiment Analysis faces a lot of challenges. Usage of abbreviations, spam content, use of different synonyms, usage of different languages, etc. are quite common problems [23]. Besides, it is difficult to classify sentence emotions because of different writing styles. Elimination of spam, fake and duplicate reviews is laborious. To overcome these issues, a rating system can be applied to reviews with new pre-processing and stemming techniques in the near future. Furthermore, comparisons of other tools and algorithms can also be evaluated using distinct datasets.
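As an illustration of the comparison procedure described in the Sample Dataset and Analysis sections, here is a minimal sketch of training and scoring a few of the selected classifiers on review text with scikit-learn. The file path, the column names, the TF-IDF vectorization step, and the rating-to-label rule are illustrative assumptions rather than details taken from the paper, which uses the Kgptalkie package for its text cleaning.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score, classification_report

# Hypothetical column names; the real Amazon export has 25 columns,
# of which only the review text and the 1-5 star rating are used.
df = pd.read_csv("amazon_reviews_dataset1.csv")          # path is illustrative
df = df.dropna(subset=["reviews.text", "reviews.rating"])
X = df["reviews.text"]
y = (df["reviews.rating"] >= 4).map({True: "positive", False: "negative"})  # assumed label rule

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Bag-of-words / TF-IDF features; the paper's exact cleaning pipeline is omitted here.
vec = TfidfVectorizer(max_features=20000, stop_words="english")
X_train_v = vec.fit_transform(X_train)
X_test_v = vec.transform(X_test)

# Three of the eight compared classifiers, as a representative subset.
models = {
    "Naive Bayes": MultinomialNB(),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "Linear SVM": LinearSVC(),
}
for name, model in models.items():
    model.fit(X_train_v, y_train)
    pred = model.predict(X_test_v)
    print(name, "accuracy:", round(accuracy_score(y_test, pred), 3))
    print(classification_report(y_test, pred))
```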
2022-10-21T15:33:21.077Z
2022-08-08T00:00:00.000
{ "year": 2022, "sha1": "711997e07608f6ad2fae86965a5c5e00553e1e93", "oa_license": null, "oa_url": "http://www.mecs-press.org/ijieeb/ijieeb-v14-n4/IJIEEB-V14-N4-4.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "0703fe1db1aef7b8b6888ba3a6387fa6ee336978", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
237468716
pes2o/s2orc
v3-fos-license
Child-to-Parent Violence, Peer Victimization and Cybervictimization in Spanish Adolescents The aim of this study was to analyse the relationship between child-to-parent violence (CPV) (high, moderate and low), peer victimization (PV) (relational and overt, both physical and verbal) and cybervictimization (CV) (relational and overt), taking into account the role of sex. 1304 adolescents (53.14% girls) between the ages of 11 and 18 enrolled at secondary schools in the Autonomous Communities of Valencia, Aragón and Andalusia participated in the study. Adolescents with high CPV scores obtained higher scores for all types of PV and CV compared to the other CPV groups. Boys scored higher than girls in overt physical PV and in overt CV and girls obtained higher scores in relational PV. A statistically significant interaction effect was observed; boys with high CPV scores reported greater overt CV. The results suggest the importance of CPV in relation to specific forms of PV and CV and highlight the need to take into account the different processes of family socialization between boys and girls to reduce the likelihood of adolescents being victimized. Introduction In recent years, violent behaviour by adolescents, directed against both parents and peers, has increased considerably [1,2]. Some studies have reported commonalities between child-to-parent violence (CPV) and peer violence [3,4]. Several studies also support the link between peer victimization (PV) and subsequent delinquency [5,6]. However, there are still few studies that analyse CPV in relation to suffering from cybervictimization (CV) and PV. This research aimed to study the relationship between PV, CV and CPV based on sex. Child-to-Parent Violence CPV is defined as the harmful acts committed by adolescents against either of their parents or others who perform this function in order to control, dominate and have power over them [7]. Violence can be physical, psycho-emotional and/or economic and is carried out repeatedly and over time [8,9]. It usually begins with economic violence before progressing to psychological and even physical violence, so that all three forms of violence end up appearing simultaneously [10]. This behaviour represents a growing social problem, since the number of parents reporting such behaviour in their children has increased by 101.1% in recent years [11]. Furthermore, many cases are not reported; some parents underestimate violence by their children and feel uncomfortable admitting this situation, since it is common for society to interpret CPV as a failure to educate and set limits [12,13]. Victimization PV is conceptualized as the negative experience of being the object of overt (physical and verbal) and relational aggression by other peers that takes place in and around the school, particularly in places with little adult supervision [14,15]. Specifically, overt PV implies suffering direct violence [16], either through verbal aggression, when the victim is insulted or threatened [17] or physical aggression, such as being beaten or pushed [18]. Relational PV refers to suffering a more subtle type of violence, aimed at causing harm in one's circle of friends or undermining one's sense of inclusion in the group by spreading rumours or withdrawing friendship [19]. The victim is perceived to be at a disadvantage with respect to the aggressor and finds it very difficult to get out of the situation [20,21]. 
In response to the attacks suffered, most victims react passively, displaying withdrawal and submission, while a minority react aggressively, seeking revenge [22,23]. Passive victims of school violence tend to have low self-esteem, depressive symptoms, high levels of stress and dissatisfaction with life and, occasionally, especially among victims who also exercise the role of aggressor, low levels of self-control. This low self-control implies that they do not think about the consequences of their behaviour. They also participate more in behaviours that irritate and provoke their peers and receive less social support, which means that they do not have friends who protect them from bullying [24]. In this sense, victimized people perceive little support, are more vulnerable and are rejected by their peers [25,26]. Another observed aspect is that victims do not know how to compensate for the emotional reactions that occur in difficult social interactions. In fact, adolescents with more experience of aggression, because they are either bullies or victims, are more likely to attribute hostile intentions to the behaviours of others, even if they are ambiguous [27].

Regarding socio-demographic factors such as sex, there is no consensus in the case of PV. Many authors agree that it occurs more frequently in boys [22,28,29]. However, other studies have reported that girls suffer PV to a greater extent than boys [30,31].

Cybervictimization

In recent years, the incorporation of information and communication technologies (ICTs) into everyday life has given rise to a virtual culture in which adolescents participate and to which they contribute. These technological advances have enabled the development of innovative communication channels and forms, but they have also been used to generate new forms of victimization. CV occurs when the victim is harassed through electronic or digital devices and in virtual environments, through hostile or aggressive messages intended to cause the victim harm or discomfort [32,33]. CV transcends the purely physical sphere, as it is not restricted exclusively to the school context, surpassing that barrier to continue in any place where the adolescent has access to an electronic device connected to the Internet [34,35].

PV and CV are closely related and have common characteristics; in both cases the victims suffer intentional and persistent harassment by the aggressor [36,37]. Victims of cyber-bullying are often also victims of traditional bullying [38,39]. However, in CV, victimization can occur at any time, since the messages are received on an electronic device, so there are no safe places where the victim can flee or hide [6]. Moreover, the scope and breadth of the audience is potentially much greater than in the case of PV, and the attacks can be reproduced indefinitely since they are stored in cyberspace [37,40]. The aggressor's identity remains hidden from the victim of CV, thus favouring disinhibition when committing the aggressions, coupled with moral disconnection and a lack of empathy and remorse regarding the victim's situation [41,42].

In terms of the sex factor in CV, most studies report that girls are more cybervictimized [35,38,39]. Additionally, girls are more involved in the problematic use of virtual social networks and participate more in activities that involve social interaction, such as chats, which entail greater exposure to conflictive situations [28].
Child-to-Parent Violence, Peer Victimization and Cybervictimization

The family is one of the main socializing agents for adolescents. Many studies have examined the family characteristics of victims of PV; a positive family climate, with affection and communication, is related to the development of personal skills by adolescents, which makes them less vulnerable to victimization and has a buffering effect on the distress suffered by victims [26,43–45]. However, the perception of family conflict contributes to adolescents becoming victims, since they tend to behave submissively and present themselves to their peers as vulnerable and easy targets for abuse [46]. Victims of PV define their parents as cold, indifferent, hostile and, in some cases, overprotective or permissive, and feel rejected and little supported by them [47,48]. In addition, problems with communication and family expressiveness have been observed in victims, increasing the psychological discomfort of adolescents, who find it more difficult to deal effectively with aggression [12,32]. This process is bi-directional; greater PV also implies greater psychological distress, which in turn leads adolescents to have worse communication with their parents [20]. However, positive parent-adolescent relations, including positive communication and disclosure between parents and children, imply better psychosocial adjustment and a lower level of victimization among peers [33,47]. Victims of CV report negative family functioning, with family conflicts, poor communication between parents and children, little family cohesion and a lack of emotional support from parents [38,39,49].

Adolescents exercising CPV and those suffering from PV both have common family problems, such as a family climate characterized by conflict and low cohesion, dysfunctional family dynamics, lack of communication and affective and emotional deficiencies [50,51]. Some studies indicate that the perception of family conflict implies submissive and internalizing reactions in adolescents to a greater extent than aggressive or externalizing behaviours [44,45,52,53]. However, other studies conclude that adolescents who exercise CPV are often victims of PV and that this exposure to violence at school fuels aggression towards parents [7,54,55]. Loinaz et al. [56] reported that this relationship was more frequent in the case of girls; female aggressors engaging in CPV were victims of PV to a greater extent than boys who exercised CPV. Likewise, few studies have examined the link between CPV and CV. CPV is related to the problematic use of virtual social networks and difficulties in the use of the Internet, especially in the case of girls, who seek new socialization environments where personal relationships are more satisfactory, especially when there are deficits in other socialization settings, such as the family [57]. However, research in this field is at an incipient stage, especially with regard to the role played by CPV in relation to specific types of PV and CV, i.e., relational, overt, physical and verbal.

The Present Study

The aim of this comparative cross-sectional study was to analyse the relationship between CPV and specific dimensions of PV and CV, also taking into account the role of sex. The following hypotheses were proposed:

Hypothesis 1 (H1). Adolescents with high levels of CPV will suffer more PV, both relational and overt (physical and verbal), as well as more CV, both relational and overt.
Hypothesis 2 (H2). An interaction effect will be observed between CPV and sex, whereby the group of boys with a higher CPV score will present higher levels of PV and CV than the group of girls with high CPV.

Participants

The participants in the study were 1318 adolescents, of whom 14 were excluded due to omissions in the answers related to the variables analysed in this study. The final sample consisted of 1304 adolescents (53.14% girls), aged between 11 and 18 years (M = 13.88, SD = 1.32), enrolled in Compulsory Secondary Education centres in Aragon, Andalusia and the Valencian Community. Of these, 24.7% were enrolled in the first year, 27.3% in the second, 23.7% in the third and 24.3% in the fourth. Probability sampling was carried out to select the sample, using the urban geographic areas of the provinces of Alicante, Valencia, Seville and Teruel as primary sampling units and the public secondary schools of each area as secondary units. The socioeconomic level of the areas and centres was average. As regards the sociocultural level of the families, parents had intermediate and higher levels of education.

Instruments

Child-to-parent violence. The Child-to-Parents Aggression Questionnaire (CPAQ) [58], adapted from the original [59], was applied. The scale comprises 20 parallel items, 10 referring to the father and 10 to the mother, three of them measuring physical violence (e.g., hitting, kicking) and the other seven psychological violence (e.g., insulting, threatening, taking money without permission). The adolescents indicated how often they had carried out these actions against the father or mother in the last year using a four-point Likert scale: 0 (never), 1 (it has occurred once or twice), 2 (it has occurred between three and five times) and 3 (it has occurred six times or more). Cronbach's alpha was 0.85 for the entire scale.

School victimization. The School Victimization Scale [14], adapted from the original scale [60], was applied. This scale is composed of 22 items that measure three dimensions: relational victimization (e.g., "a classmate has ignored me or treated me with indifference"), overt physical victimization (e.g., "a classmate has beaten me") and overt verbal victimization (e.g., "I have been insulted by a classmate"). Cronbach's alpha was 0.92 (relational victimization), 0.67 (overt physical victimization) and 0.88 (overt verbal victimization).

Cybervictimization. The Adolescent Victimization through Mobile Phone and Internet Scale (CYBVIC-R) [61] was applied. This scale consists of 24 items that measure mobile phone and Internet victimization in two dimensions: relational cybervictimization (e.g., "I have been removed or blocked from groups to leave me friendless") and overt cybervictimization (e.g., "I have been threatened with calls or voice messages on my mobile"). Cronbach's alpha was 0.88 for both overt and relational cybervictimization.
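As a point of reference for the reliability coefficients reported above, the following sketch shows how Cronbach's alpha can be computed from a respondent-by-item score matrix. The data are hypothetical (randomly generated) and are not taken from the study; only the item format (ten items scored 0 to 3) mirrors the CPAQ description, so the resulting value is purely illustrative.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 8 adolescents answering 10 items on the 0-3
# frequency scale described above (random data, illustrative only).
rng = np.random.default_rng(0)
scores = rng.integers(0, 4, size=(8, 10))
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```

With real questionnaire data, the same function yields the scale- and subscale-level alphas of the kind reported in this section.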
Procedure

The data were collected in 2018. Initially, the management of the centres was contacted and, once they confirmed their interest and voluntary participation, the objectives and scope of the research were explained. Then, the consent of the families was requested for their children to participate in the study. Afterwards, the data were collected during one 55-minute session held in the usual classroom. Participants were informed of the voluntary nature of their participation, of the possibility of abandoning the study at any time and of the guaranteed confidentiality and anonymity of their responses. Two people from the research team remained in the classroom to ensure that the questionnaires were completed properly and to resolve any doubts. This research was carried out in accordance with the fundamental principles included in the Declaration of Helsinki and its subsequent updates, in line with the ethical values required in research with human beings [62]. The study was approved by the CEI Ethical Committee of the Virgen Macarena and Virgen del Rocío University Hospitals (CEI VM-VR_01/2021_N).

Analysis of Data

The data were analysed with the statistical program SPSS Statistics (v20, IBM, Armonk, NY, USA). A multivariate analysis of variance (MANOVA, 3 × 2) was performed. The independent variables were CPV, with three conditions: high (scores equal to or greater than the 75th percentile), moderate (scores below the 75th percentile and greater than the 25th percentile) and low (scores less than or equal to the 25th percentile); and sex (boys and girls). The following dependent variables were selected: relational PV, overt physical PV and overt verbal PV, as well as two dimensions of CV: relational and overt. Then, ANOVAs were performed to analyse the statistical significance of the variables and the Bonferroni post hoc test was applied (α = 0.05).

Main Effects

Three groups of adolescents were identified: low CPV (n = 200, 15.3%), moderate CPV (n = 743, 57%) and high CPV (n = 361, 27.7%). Table 1 shows the distribution of CPV in adolescents (low, moderate and high) according to sex (boy or girl). The MANOVA revealed statistically significant differences in the main effects of CPV. Table 2 shows significant differences between the three CPV groups in relational and overt CV, with the high CPV group presenting the highest levels of both types of CV compared to the other two CPV groups. Adolescents with moderate levels of CPV were cybervictimized, both relationally and overtly, to a greater extent than adolescents with low CPV scores. Significant differences were also observed between the three CPV groups in relational, overt physical and overt verbal PV; the high CPV group presented the highest levels of the three types of PV, followed by the moderate CPV group and then the low CPV group. As regards sex, the ANOVA showed significantly higher scores for boys in overt CV (F(1, 1298) = 9.29, p < 0.01, ηp² = 0.007) and for girls in relational PV (F(1, 1298) = 16.78, p < 0.001, ηp² = 0.013). Significant differences were also observed in overt physical PV (F(1, 1298) = 24.71, p < 0.001, ηp² = 0.019), this being higher in the case of boys, as shown in Table 3.

Interaction Effects

A statistically significant interaction effect was observed between CPV and sex in the overt CV variable (F(2, 1298) = 5.56, p < 0.01, ηp² = 0.008). As can be seen in Table 4 and Figure 1, the results of the post hoc contrast performed with the Bonferroni test (α = 0.05) indicated that boys with high CPV obtained the highest scores in overt CV compared to the other groups, between which no significant differences were found.
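To make the analysis pipeline concrete, the sketch below reproduces the grouping and factorial design described in the Analysis of Data section in Python rather than SPSS: CPV totals are split at the 25th and 75th percentiles into low, moderate and high groups, and a 3 × 2 ANOVA (CPV group × sex) with an interaction term is fitted to one dependent variable. The data frame is simulated and the variable names are illustrative; this is not the study's actual dataset or syntax. The helper at the end also shows how the reported partial eta squared values follow directly from F and the degrees of freedom, e.g. 9.29 / (9.29 + 1298) ≈ 0.007 for the sex effect on overt CV.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# --- Simulated data standing in for the questionnaire scores (hypothetical) ---
rng = np.random.default_rng(1)
n = 1304
df = pd.DataFrame({
    "cpv_total": rng.integers(0, 61, size=n),  # CPAQ total (20 items scored 0-3)
    "sex": rng.choice(["boy", "girl"], size=n),
    "overt_cv": rng.normal(1.5, 0.5, size=n),  # overt cybervictimization score
})

# Split CPV into low / moderate / high groups at the 25th and 75th percentiles,
# mirroring the grouping described above (boundary handling is approximate).
p25, p75 = df["cpv_total"].quantile([0.25, 0.75])
df["cpv_group"] = pd.cut(df["cpv_total"],
                         bins=[-np.inf, p25, p75, np.inf],
                         labels=["low", "moderate", "high"])

# 3 x 2 factorial ANOVA (CPV group x sex) on overt cybervictimization,
# including the interaction term.
model = ols("overt_cv ~ C(cpv_group) * C(sex)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Partial eta squared from F and the degrees of freedom:
# eta_p^2 = F * df1 / (F * df1 + df2).
def partial_eta_sq(f_value: float, df1: int, df2: int) -> float:
    return f_value * df1 / (f_value * df1 + df2)

# Reported sex effect on overt CV, F(1, 1298) = 9.29 -> ~0.007
print(round(partial_eta_sq(9.29, 1, 1298), 3))
```

Applying the same model to each dependent variable in turn, followed by Bonferroni-corrected pairwise comparisons, mirrors the follow-up ANOVAs and post hoc tests reported above.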
Discussion

The aim of the present study was to analyse the relationships between PV, CV and CPV, taking into account differences according to sex. The results obtained for the main effects confirmed H1 of this study. Adolescents who scored high on CPV reported higher levels of relational and overt PV, both physical and verbal, as well as higher levels of CV (relational and overt), than adolescents with moderate levels of CPV. Furthermore, this moderate CPV group obtained higher scores in PV and CV than the low CPV group. These results match previous studies reporting that suffering victimization at school is a risk factor for developing violent behaviour at home [63] and for exercising CPV [7,55]. A positive school climate has also been linked to less participation in any bullying role [64]. However, this is contradicted by studies that conclude that perceived conflict in the family involves submissive and internalizing behaviours rather than externalizing or aggressive ones [44,45,52,53].

Adolescents exercising CPV show emotional lability and have difficulty identifying, regulating and expressing emotions [51,65,66], fundamental aspects of emotional intelligence. In this sense, various authors have observed that the management of emotions helps to solve problems and facilitates adaptation to the environment [67,68]. Therefore, these adolescents will have more difficulty relating to their peers and avoiding victimization. In addition, the lack of resources and strategies to deal adaptively with situations of CV leads adolescents suffering such violence to experience negative feelings, such as frustration, anger, fear and rage [25,69]. These negative emotions can induce victims to release the tension they feel through violence, thus becoming aggressors [6]. This would imply a two-way process; CPV could be both a cause and a consequence of PV and CV.

According to the hypothesis of displaced aggression [70], when a provocation is experienced that excludes the possibility of retaliation, aggressiveness is shifted towards an innocent person or object other than the person responsible for the initial provocation. The person being provoked fears possible retaliation for confronting the initial source and, for this reason, takes his or her frustration out on a person who does not provoke so much fear. The PV and CV experienced by adolescents may be the initial provocation, causing them discomfort. This, coupled with inadequate conflict resolution models, the scarcity of coping resources and the low self-control presented by these victims [24], can favour the displacement of aggression towards parents. This is related to the frustration-aggression theory [70]. Thus, when faced with PV and CV, adolescents can compensate for the frustration and helplessness they experience by transferring it to the family system and transforming it into violence towards parents [71–74].

Regarding differences according to sex, boys obtained higher scores than girls in overt physical PV. The fact that boys are more involved than girls in behaviours of overt physical violence has been confirmed by numerous studies [75–77]. Therefore, it is consistent that boys are also more victimized in an overt manner than girls. Boys also obtained higher scores in overt CV; our results indicate that boys overtly victimized in school also suffered overt CV in cyberspace.
However, most of the studies consulted conclude that CV is more frequent in girls than in boys [78–80]. The lack of consensus is probably due to the fact that none of the studies cited differentiated between overt and relational CV. Therefore, this result opens up a new line of study to take into account in future research. The girls suffered more relational PV than the boys, which is congruent with studies affirming that girls are less likely to suffer overt PV, both physical and verbal [26,81,82]. However, it is also worth considering the possibility that it is not that girls suffer more relational PV, but rather that they have a greater capacity than boys to perceive this type of victimization [83]. In contrast with the results obtained for PV, no differences were observed between boys and girls regarding relational CV. The causes of this type of victimization are probably related less to the victim's sex and more to the cyber context itself, which facilitates CV; the aggressor remains anonymous and is thus liberated from social pressure to behave according to their sex. In addition, there is a physical and emotional distance from the victim, making it more difficult to empathize with the latter.

In terms of the interaction effects considered in H2, a statistically significant effect was observed between CPV and sex in the overt CV variable, with boys with high CPV scores suffering overt CV to a greater extent than the other groups. No differences were found between the other groups studied. These results reinforce the previous statement regarding boys, who suffer more overt CV, with the added particularity that this is more likely to occur when these adolescents also exercise high CPV. In CV, the victim does not always know the aggressor, since the latter tends to remain anonymous and, whenever the source of the provocation is unavailable or intangible, the displacement of aggression is aggravated [70,84], in this case towards the parents. Moreover, boys are more susceptible than girls to public rejection and are more fearful of the consequences of not being integrated into a group [85,86]. As a result, overt CV causes them great discomfort. For all these reasons, boys who experience overt CV feel a greater need than any other group to release their anger and frustration and, since they do not know the identity of the person who initially causes their discomfort, they displace it towards their parents.

Adolescents learn the social skills they need to interact with others in the family context, where social learning takes place through observation and imitation of meaningful models [87]. Good family functioning promotes the adjustment of adolescents, since they learn to develop strategies to interact adaptively with others, and this protects them from suffering PV and CV [87–90]. However, when adolescents perceive conflict in the family, it is easier for them to come across as vulnerable to their peers and to be victimized [26]. Adolescents who engage in high CPV are used to violence and normalize it, which means that they may become involved in virtual contexts with conflictive interactions, where it is easier to become the object of aggression.
It would be interesting to examine the results obtained in greater depth and determine whether, in addition to suffering more CV, boys who exercise high levels of CPV also perpetrate such aggression, since in that case it would be a violence-victimization process, with cyber-victims who are, in turn, cyber-aggressors.

This study has several limitations. All the data collected came from a single source: the adolescents surveyed. It would be worthwhile to complement this information with data from other key informants, such as family members and teachers. It would also be interesting to take adolescents' age differences into account in future research. This study is based on a cross-sectional design, so neither the direction of the relationships nor the causality between the variables analysed could be established. It would be interesting to propose longitudinal studies in the future to identify the victimization trajectories followed by adolescents who suffer PV and CV during this stage of the life cycle and their relationship with CPV.

Conclusions

The results of this research represent important progress towards addressing a problem that causes great social concern. These conclusions highlight the importance of strengthening the psychological adjustment of adolescents and improving relationships with the family and with the peer group, as well as taking gender socialization processes into account. In our opinion, intervention programs must be implemented to provide the tools and skills necessary to avoid violence in adolescence. This may be accomplished through the collaboration of professionals from different fields.

Funding: This research was funded by the Ministry of Economy and Competitiveness of Spain and the European Union through the European Regional Development Fund (FEDER) "One way to make Europe", grant number PSI2015-65683-P.

Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the CEI Ethical Committee of the Virgen Macarena and Virgen del Rocío University Hospitals (CEI VM-VR_01/2021_N).